
TECH

Underwater pendulums can calm waves and reduce coastal erosion


Wave countermeasure device could limit erosion along coastline

Shutterstock / Xander

A system of inverted pendulums tethered to the seafloor can greatly reduce the size of waves, helping to limit beach erosion.

Waves that break on beaches are often blocked by stone walls built parallel to the shore, but these large structures are intrusive, difficult to adjust or move, and can trap muddy water or disturb marine habitats. Paolo Pesutto at the Institute of Marine Sciences, part of the Italian National Research Council, and his colleagues built a prototype …


TECH

The Intel Core i9-12900K is recognized as the “best gaming processor in the world”


Intel launched its first six Alder Lake-S processors on Wednesday night. The SKUs that form the initial roster of 12th Gen Intel Core processors are detailed in the table below. In short, these are the Intel Core i9-12900K/KF, Intel Core i7-12700K/KF and Intel Core i5-12600K/KF. Seasoned technology watchers will know from the suffixes that these are all unlocked enthusiast processors, and that the “KF” chips are sold with their integrated GPUs disabled.

The highlights of this first Intel 7 product launch were mostly the things we were missing after Intel Architecture Day in August: specific SKU release information, a small amount of benchmark data, and some input from Intel partners regarding the first 600-series motherboards and DDR5 RAM kits. We have all of that now, but Intel is not allowing third-party reviews to be published until the CPUs and PCs go on sale starting November 4th.

Above you can see all the new SKUs, with their combinations of P and E cores, base and turbo frequencies, caches and so on. The important new data in this table is the processor base power and the maximum power in Turbo mode. You can disregard the lower figure, as enthusiast motherboards will run the new processors at the higher power limit to maximize their potential.

As the chip that unleashes the full potential of these 12th generation Core processors, the top-end Intel Core i9-12900K is claimed by Intel to be “the best gaming processor in the world.” It has 16 cores and 24 threads and runs at up to 5.2 GHz, but you will want a high-end cooler, as it can draw over 240 W under heavy loads.

At last night’s event, Intel showed some results from its own benchmarks, demonstrating a “gaming lead”. The chart it shared, shown above, suggests the Intel Core i9-12900K is about 12 percent faster than the AMD Ryzen 9 5950X in modern PC gaming. Unfortunately, it appears that the Intel test systems running Windows 11 had not been updated with the AMD/Microsoft patches needed for AMD’s L3 cache and “preferred core” technology to work properly. Be aware that these issues can slow down games on AMD platforms by as much as 15 percent.
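As a rough back-of-the-envelope check (a sketch only, assuming Intel's 12 percent figure and the worst-case 15 percent slowdown apply to the same workloads and compound directly), the unpatched-AMD issue could account for the entire claimed lead:

```python
# Hypothetical sketch: how much of Intel's claimed 12% gaming lead could the
# unpatched Windows 11 AMD issue explain? Assumes the quoted figures compound
# directly, which real benchmark suites may not.

intel_lead_vs_degraded_amd = 1.12  # Intel's claim vs the (unpatched) 5950X
amd_slowdown_unpatched = 0.15      # up to 15% slower without the L3/preferred-core fixes

# Performance of a patched 5950X relative to the degraded one Intel tested against:
patched_amd = 1 / (1 - amd_slowdown_unpatched)  # ~1.176

# Intel's relative standing once AMD is patched:
intel_vs_patched_amd = intel_lead_vs_degraded_amd / patched_amd
print(f"Intel vs patched AMD: {intel_vs_patched_amd:.2f}x")  # ~0.95x
```

In that worst case the 12 percent lead would evaporate entirely, which is why independent reviews on fully patched systems will matter.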

Intel was on firmer ground claiming content-creation achievements against its own previous-generation flagship processors. In these CPU-intensive cases, the 12th Gen processors were at least a third faster than their 11th Gen predecessors. Of course, the extra cores and multi-core-aware creative applications played an important role in these gains.

For your interest, I have embedded a comparison shot of the Intel Core i9-12900K die below, as shared by Der8auer. You can also check out some fancy photos courtesy of HardwareLuxx editor Andreas Schilling on Twitter.

The flagship Core i9-12900K is rumored to be $590, the Core i7-12700K is $410 and the Core i5-12600K is $290.

Along with the new processors, Intel has introduced brand new 600-series chipsets. To accompany the enthusiast-level desktop processors, partners such as Asus, Gigabyte and MSI have released motherboards based on the Z690 chipset. The new motherboards provide advanced and modern I/O technologies such as PCIe 5.0, DDR5 (typically), and fast integrated wired and wireless networking as standard.

Many new motherboards have been unveiled; Asus alone has nineteen Z690 models at launch, covering all kinds of features and sizes (E-ATX, ATX, mATX and mini-ITX). Similarly, memory makers including G.Skill are showing off their new DDR5 memory modules.


TECH

Rival factions of artificial intelligence, from Elon Musk to OpenAI


A Quick Guide to Deciphering the Weird But Powerful AI Subcultures of Silicon Valley

(From left to right) Eliezer Yudkowsky, Tristan Harris, Elon Musk, Timnit Gebru, Sundar Pichai, Satya Nadella and Sam Altman. (Illustration by Elena Lacey/The Washington Post; photo by The Washington Post; Getty Images; Twitter)

Bitter divisions are growing within Silicon Valley’s AI sector over the impact of the new wave of AI, with some arguing that development must push forward and others warning that the technology poses an existential risk.

Those tensions came to the fore late last month when Elon Musk, along with other tech executives and scientists, signed an open letter calling for a six-month pause in the development of “human-competitive” AI, citing “profound risks to society and humanity.” Self-described decision theorist Eliezer Yudkowsky, co-founder of the nonprofit Machine Intelligence Research Institute (MIRI), went even further: AI development must be halted worldwide, he wrote in a Time magazine article, calling for US airstrikes on foreign data centers if necessary.

The political world seemed unsure how seriously to take these warnings. Asked whether AI is dangerous, President Biden said on Tuesday: “It remains to be seen. Could be.”

The dystopian visions are familiar to many inside Silicon Valley’s insular AI sector, where a small group of strange but influential subcultures have clashed in recent months. One sect believes AI could kill us all; another argues the technology will allow humanity to flourish if used correctly. Others suggest that the six-month pause proposed by Musk, who will reportedly launch his own artificial intelligence laboratory, was designed to help him catch up.

The sub-groups can be quite fluid, even when they seem contradictory, and insiders sometimes disagree on basic definitions.

But these once-marginal worldviews could shape pivotal discussions about AI. Here is a quick guide to deciphering the ideologies (and financial incentives) behind the debate:

Argument: The phrase “AI safety” used to refer to practical issues, like making sure self-driving cars don’t crash. In recent years, the term, sometimes used interchangeably with “AI alignment”, has also been adopted to describe a new field of research aimed at making AI systems obey their programmers’ intentions and preventing power-seeking AI that might harm humans in order to avoid being shut down.

Many adherents have ties to communities such as effective altruism, a philosophical movement to maximize good in the world. EA famously began by prioritizing causes like global poverty, but has pivoted to concerns about the risks of advanced AI. Online forums like Lesswrong.com and the AI Alignment Forum host heated debates on these issues.

Some adherents also subscribe to a philosophy called longtermism, which aims to maximize good over millions of years. They cite a thought experiment from Nick Bostrom’s Superintelligence, which suggests that a safe superhuman AI could allow humanity to colonize the stars and create trillions of future people. Building safe artificial intelligence is therefore critical to securing those possible future lives.

Who is behind this?: In recent years, EA-affiliated donors such as Open Philanthropy, a foundation founded by Facebook co-founder Dustin Moskovitz and former hedge-funder Holden Karnofsky, have helped fund a number of centers, research labs, and community-building efforts focused on AI safety and alignment. The FTX Future Fund, founded by crypto chief executive Sam Bankman-Fried, was another big player until the firm went bankrupt after Bankman-Fried and other executives were accused of fraud.

What influence do they have?: Some adherents work in leading AI labs such as OpenAI, DeepMind, and Anthropic, where this mindset has led to some useful ways of making AI safer for users. A close-knit network of organizations produces research and surveys that are disseminated more widely, including a 2022 survey which found that 10 percent of machine learning researchers say AI could end humanity.

AI Impacts, which conducted the survey, has received support from four different EA-affiliated organizations, including the Future of Life Institute, which hosted Musk’s open letter and whose largest donation came from Musk. Center for Humane Technology co-founder Tristan Harris, who once campaigned about the dangers of social media and has now turned his focus to AI, cited the survey extensively.

Argument: It’s not that this group doesn’t care about safety. They are just very excited about building software that achieves artificial general intelligence, or AGI, a term for AI that is as smart and capable as a human. Some point to tools like GPT-4, which OpenAI says has developed skills such as writing and responding in foreign languages without explicit instruction, as evidence that the technology is on its way to AGI. Experts explain that GPT-4 developed these capabilities by ingesting huge amounts of data, and most say these tools do not have a human-like understanding of the meaning of the text.

Who is behind this?: Two leading artificial intelligence labs name the creation of AGI in their mission statements: OpenAI, founded in 2015, and DeepMind, a research lab founded in 2010 and acquired by Google in 2014. Both drew early backing from wealthy technology investors interested in the outer limits of AI. According to Cade Metz’s The Genius Makers, Peter Thiel donated $1.6 million to Yudkowsky’s non-profit artificial intelligence organization, and Yudkowsky introduced Thiel to DeepMind. Musk invested in DeepMind and introduced the company to Google co-founder Larry Page. Musk later brought the concept of AGI to the other OpenAI co-founders, such as CEO Sam Altman.

What influence do they have?: OpenAI’s market dominance has thrown the Overton window wide open. The CEOs of the world’s most valuable companies, including Microsoft CEO Satya Nadella and Google CEO Sundar Pichai, are now asked about AGI in interviews and discuss it openly. Bill Gates blogs about it. “Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever,” Altman wrote in February.

Argument: Doomers share a range of beliefs and often frequent the same online forums as people in the AI safety world, but this crowd has concluded that if a sufficiently powerful AI is switched on, it will destroy human life.

Who is behind this?: Yudkowsky has been the leading voice warning of this doomsday scenario. He is also the author of the popular fan-fiction series Harry Potter and the Methods of Rationality, a common entry point for young people into these online communities and into ideas about artificial intelligence.

His non-profit organization MIRI received $1.6 million in donations in its early years from technology investor Thiel, who has since distanced himself from the group’s views. The EA-affiliated organization Open Philanthropy donated about $14.8 million across five grants from 2016 to 2020. More recently, MIRI has received funds from the crypto nouveau riche, including Ethereum co-founder Vitalik Buterin.

What influence do they have?: Although some in this world consider Yudkowsky’s theories prophetic, his writings have also been criticized as inapplicable to modern machine learning. Still, his views on AI have influenced more prominent voices on these topics, such as renowned computer scientist Stuart Russell, who signed the open letter.

In recent months, Altman and others have raised Yudkowsky’s profile. Altman recently tweeted that “maybe at some point [Yudkowsky] deserves the Nobel Peace Prize” for accelerating AGI, and later tweeted a photo of the two of them at a party hosted by OpenAI.

Argument: For years, ethicists have warned about problems with large AI models, including outputs biased by race and gender, an explosion of synthetic media that could damage the information ecosystem, and the impact of AI that sounds deceptively human. Many argue that the apocalypse narrative exaggerates AI’s capabilities, helping companies market the technology as part of a science-fiction fantasy.

Some in this camp argue that the technology is not inevitable and could be built without harming vulnerable communities. Criticism that focuses on the technology rather than the people who build it can obscure human choices, allowing companies to avoid liability for poor medical advice or privacy violations by their models.

Who is behind this?: The co-authors of a prescient research paper warning about the dangers of large language models, including Timnit Gebru, former co-lead of Google’s Ethical AI group and founder of the Distributed AI Research Institute, are often cited as leading voices. Critical research demonstrating the failures of this type of AI, as well as ways to mitigate the problems, is “often conducted by scientists of color — many of them black women” and by underfunded junior scholars, researchers Abeba Birhane and Deborah Raji wrote in an article for Wired in December.

What influence do they have?: In the midst of the AI boom, tech firms like Microsoft, Twitch and Twitter have been laying off their AI ethics teams. But policymakers and the public have been listening.

Former White House policy adviser Suresh Venkatasubramanian, who helped draft the AI Bill of Rights, told VentureBeat that recent exaggerated claims about ChatGPT’s capabilities were part of an “organized fear-mongering campaign” around generative AI that distracted from real problems with AI. Gebru has spoken before the European Parliament about the need for a “slow AI” movement that reins in the industry’s pace so that society’s safety comes first.


TECH

Richard Branson’s rocket company Virgin Orbit files for Chapter 11 bankruptcy



Cosmic Girl, a modified Boeing 747 used by Virgin Orbit, in November 2022 in Newquay, England.

Hugh Hastings/Getty Images



The company founded by billionaire Richard Branson, which modified Boeing 747s to launch satellites into orbit, filed for bankruptcy after a high-profile rocket launch failure in January.

On Monday, Virgin Orbit announced that it is seeking Chapter 11 bankruptcy protection as it tries to find a buyer for the financially troubled company.

“While we have made great efforts to improve our financial position and obtain additional funding, in the end we must do what is best for the business,” said Dan Hart, CEO of Virgin Orbit.

“We believe the cutting-edge launch technology created by this team will have a broad appeal to buyers as we continue to sell the company,” he added.

Branson founded Virgin Orbit in 2017 as a subsidiary of Virgin Galactic, his space tourism venture.

Virgin Orbit bills itself as a launch service for the “fast-growing industry” of small satellites, and Hart said the firm has successfully placed 33 satellites into orbit.

But the company also signaled that it was in financial trouble.

On March 15, Virgin Orbit announced a suspension of operations to save money and look for potential funding sources.

Then, about two weeks later, the company said it would cut about 675 employees, or approximately 85% of its workforce, because “significant funding” could not be secured. Company officials estimated that they would pay about $15 million in severance and other costs related to the job cuts.

The bankruptcy filing came almost three months after Virgin Orbit failed to launch a rocket into orbit from the United Kingdom.

This highly publicized event was to be the first orbital space launch from British soil: a modified Boeing 747 called Cosmic Girl would climb to an altitude of 35,000 feet and then release a rocket called LauncherOne, attached beneath its wing, to continue into space.

Virgin Orbit later reported that a dislodged fuel filter likely caused the rocket to fail, sending LauncherOne and its payload of nine satellites falling back to Earth.

This week, Virgin Orbit said it had received a $31.6 million “debtor in possession” financing commitment from Branson’s Virgin Investments Limited, which will allow it to continue operating as it works toward a possible sale.
