

It took Amplify 3 hours to stop the California oil spill



Driller Amplify Energy Corp. took more than three hours to stop California’s largest oil spill in nearly three decades, according to a government report.

Following a low-pressure alarm at about 2:30 a.m. on October 2 on its San Pedro Bay pipeline, Amplify’s Beta Offshore division did not shut down the line until 6:01 a.m., the U.S. Department of Transportation’s Pipeline and Hazardous Materials Safety Administration (PHMSA) said in a corrective action order on Tuesday.

The company did not respond to messages asking for comment.

While not all the details are known, “the facts that are being presented are of great concern to us,” said Bill Caram, executive director of the Pipeline Safety Trust, a watchdog group that engages with pipeline operators. Operational factors can cause a sudden pressure drop in a pipeline, but “we would have expected the operator to shut the line in a lot less than three hours” and to report it faster, he said.

Amplify estimates that up to 3,000 barrels spilled along the California coast, making it the Golden State’s largest oil spill since 1994, when an earthquake ruptured a pipeline. The oil has drifted south, forcing the closure of popular surf beaches and polluting wetlands.

Divers found a 13-inch crack in the pipeline that was the “likely source of the oil leak,” U.S. Coast Guard Capt. Rebecca Ore said at a briefing on Tuesday. About 4,000 feet of the pipeline had been “shifted 105 feet laterally.”

Amplify CEO Martyn Willsher said at a briefing that footage shows the pipeline “strung like a bowstring.” He added that this kind of displacement is “not common.”

He said the company did not confirm the leak until about 8 a.m. that same day, when an oil sheen was spotted in nearby waters.

“Regardless of the cause, we will do our best to fix it,” Wilshere said.

After the pipeline shutdown, according to the report, Beta did not notify the National Response Center for another three hours. An initial estimate put the release at about 700 barrels, far less than the company’s figure.

According to the document, Beta Offshore was ordered to review and evaluate the effectiveness of its emergency response plan, including “on-site response and support, coordination, notification and communication with emergency services and government officials.” PHMSA did not specify whether the time taken to close the line and notify the National Response Center was too slow or adequate.

An email and a call to PHMSA asking whether a three-hour wait was appropriate were not immediately returned.

The root cause of the accident remains unconfirmed, but “preliminary reports indicate that the incident may have been caused by an anchor that hooked the pipeline, causing a partial tear,” the order says. The vessel that may have caused the leak has not been identified.

– With assistance from Mike Jeffers.

More must-read content from TIME



Air, Tetris, and BlackBerry are nostalgic for 1980s capitalism



Hello Quartz Members!

If the loose costumes didn’t give it away, the soundtracks did.

In the movie Tetris: the Pet Shop Boys’ “Opportunities (Let’s Make Lots of Money)” and a remix of Bonnie Tyler’s “Holding Out for a Hero.” In BlackBerry: Elastica’s “Connection,” with the lines “Riding any wave / It’s the luck you crave.” And in Air, most on the nose of all: Dire Straits’ “Money for Nothing.”

These songs anchor these 2023 movies in their period, the 1980s shading into the early 1990s. But they also function as hymns to what the films celebrate. In Tetris, a video game executive travels to Soviet Russia to acquire the rights to a wildly popular video game. In Air, a Nike executive manages to sign Michael Jordan to endorse a wildly popular shoe line. In BlackBerry, phone company executives design a wildly popular mobile phone. In Flamin’ Hot, another movie in this mini-genre, a Frito-Lay janitor turned executive invents the eponymous, wildly popular Cheetos snack. Hooray for the frenzied popularity of consumer goods.

These are curious films. They are all paeans to entrepreneurial ingenuity, which can be somewhat dramatic, though not always. But they are also loosely fictionalized process stories without any obvious cinematic appeal. In Air, the Nike executive (played by Matt Damon) makes a lot of phone calls. In Tetris, there is much chatter about the kinds of copyright available for video games. Flamin’ Hot turns on the idea for a spicier corn chip flavor. In all of them, middle managers make their rich companies even richer. The stakes are lukewarm. This is not the stuff of an exciting elevator pitch.

But the films were made, and made now, and what that says about their attitude toward the present is actually much more interesting than the dialogue in the films themselves.

1. The cynical approach

The products in these films are also brands, and brands have constituencies. “I feel a little dirty saying this, but it’s about brand awareness,” Jon S. Baird, the director of Tetris, told the Los Angeles Times:

Ultimately, you want as many people as possible to see your film. And I think that when you make a movie about an existing product, you automatically have a built-in audience, and that helps with marketing and with raising audience awareness. It piques a little interest in what they might see next, whether it’s on streaming or in theaters.

The other “cynical answer,” said Matt Johnson, the director of BlackBerry, is that “[intellectual property] with the name recognition of all these products is much easier to pitch to money people, and since everything is related to intellectual property (you base a movie on an article, or a book, or something else) of course people from [agencies like] CAA, William Morris, and others said, ‘Well, wait, what about products?’ And so I think this is just the beginning of what is likely to be a flood if there is market interest.” In other words: expect a film about the light-bulb moment behind acid-washed Levi’s, a musical about the invention of the Sony Walkman, and a saga about how a hero brought Cabbage Patch Kids to market.


All four films are written in a tone of nostalgia, but nostalgia for what, exactly? Perhaps for the era before China’s ascent, when Western companies comfortably ran the world economy? And when the most dynamic businessmen had a distinctly American zeal, even if they were Canadians, as in BlackBerry, or acting on behalf of a Japanese company, as in Tetris?

In Air, Damon gets to deliver the clumsily inspirational speech: two minutes to sell Jordan on his endorsement contract, during which the music swells and eyes grow wet. Jordan’s story, Damon’s character says, “is an American story, and that’s why Americans will love it. People will build you up... Because when you’re great and new, we love you... You’ll change the damn world.” Is he still talking about Jordan? He might as well be describing Nike before its problems with fake shoes and allegations over labor rights. Or the American corporation circa the 1980s and 1990s, when the US was winning the Cold War and China’s meteoric rise was still around the corner.


Maybe the nostalgia being chased is for the heyday of a particular kind of capitalism, when companies made things we could eat, wear, or play with. (American manufacturing, in particular, suffered after the era depicted in these films; between 2000 and 2010, the US lost a third of its manufacturing jobs.) The decades that followed were dominated by other ways of making money: the financialization of the economy and the building of ubiquitous software. In both cases, the nuts and bolts elude the lay moviegoer. But Cheetos? Those we get.

Or is it really nostalgia for the last years of belief in corporate capitalism? Its victory in the Cold War should have cemented that belief. Instead, as the environment deteriorated, corporate greed grew, and inequality widened in the US and elsewhere, the luster faded. The lead actors of capitalism are now seen as snooty and heartless. These films portray their Horatio Alger characters as gutsy, passionate people, and their work as useful and good. If capitalism ever needed puff pieces, it has them now.

🎧 Sideshow podcast

Image: Vicki Leta

But back to the future for a moment. It’s 2023, in case you hadn’t heard, and among all the things we think we should have figured out by now, two stand out in particular:

🧴 Single-use plastic: It’s often left to the consumer to figure out what to do with a plastic product, but what can manufacturers do, and what should they be required to do, to make a real dent in this problem?

🗳️ Online voting: Voting from your own device would theoretically boost turnout, but it’s much harder than it seems. Will we get there in our lifetime?

Listen to the two latest episodes of the Quartz Obsession podcast now!

✅ Subscribe wherever you get your podcasts: Apple Podcasts | Spotify | Google | Stitcher | YouTube

4. The 💲 thing

In a crime film, we root for the detective; in a sports film, for the underdog. But these films ask us to root for people who want to make more money.

The idolization of people who get rich is not, in itself, unusual as a Hollywood trope. But the specific mechanics here (a figure in the middle rungs of a corporation, a cog in the machine doing its duty) are different from entrepreneurial dramas such as workplaces, Julia, or Minx.

Economic nostalgia aside, movie studios are cashing in on these unlikely heroes because they must think this is what we want. The fault, dear reader, is not in our movie stars but in ourselves.

Thanks for reading! And don’t be shy: chip in with comments, questions, or topics you’d like to read more about.

Have a nice holiday,

Samanth Subramanian, Global News Editor



Congress really wants to regulate AI, but no one seems to know how



In February 2019, OpenAI, a little-known artificial intelligence company, announced that its large language model text generator, GPT-2, would not be released to the public “due to our concerns about malicious applications of this technology.” Among the dangers, the company said, were misleading news articles, online impersonation, and the automated production of abusive or fake social media content, as well as spam and phishing content. As a consequence, OpenAI suggested that “governments should consider expanding or launching initiatives to more frequently monitor the social impact and diffusion of AI technologies, and to measure progress in the capabilities of such systems.”

This week, four years after that warning, members of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law met for a discussion on AI oversight: rules for artificial intelligence. As with other tech hearings on the Hill, it took place only after a new technology that could upend our social and political lives was already in circulation. Like many Americans, lawmakers grew concerned about the pitfalls of AI models in March, when OpenAI released GPT-4, the latest and most advanced version of its text generator. At the same time, the company added it to the chatbot it had launched in November, which uses GPT to answer questions in a conversational manner, with a confidence that isn’t always warranted, because GPT tends to make things up.

Despite this unreliability, within just two months ChatGPT became the fastest-growing consumer app in history, reaching a hundred million monthly users by the start of this year. It now draws more than a billion page visits every month. OpenAI has also released DALL-E, an image generator that creates original images from a descriptive prompt. Like GPT, DALL-E and other text-to-image tools can blur the line between reality and fiction, making us more susceptible to deception. Recently, the Republican Party released the first attack ad fully generated by AI; it shows an ostensibly dystopian portrayal of a second term of the Biden administration.

Three experts took part in the Senate hearing: Sam Altman, the CEO of OpenAI; Christina Montgomery, IBM’s chief privacy and trust officer; and Gary Marcus, an NYU professor emeritus and AI entrepreneur. But Altman attracted the most attention. He was the head of the company with the most popular tech product, one that could revolutionize how business is done, how students learn, how art is made, and how humans and machines interact. And what he told the senators was this: “OpenAI believes AI regulation is essential.” He aims, he wrote in his prepared testimony, “to help policymakers as they determine how to facilitate regulation that balances incentives for safety while ensuring that people can access the benefits of the technology.”

Senator Dick Durbin of Illinois called the hearing “historic” because he could not recall executives coming before lawmakers and “begging” them to regulate their products, but it wasn’t actually the first time a tech CEO had sat in a congressional hearing room and called for more regulation. Notably, in 2018, in the aftermath of the Cambridge Analytica scandal (when Facebook gave a Trump-linked political consulting firm access to the personal information of nearly ninety million users without their knowledge), Facebook CEO Mark Zuckerberg told some of the same senators that he was open to more government oversight, a position he reiterated the following year, writing in the Washington Post: “I believe we need a more active role for governments and regulators.” (At the same time, Facebook was paying lobbyists millions of dollars a year to stave off government regulation.)

Like Zuckerberg, Altman began his call for more regulation by describing the guardrails his company already employs, such as training its models to refuse certain kinds of “antisocial” requests, like the one I made recently when I asked ChatGPT to write code for 3D-printing a Glock. (It did, however, write a script for a 3D-printed slingshot. “I would like to emphasize that the creation and use of this device must be done responsibly and legally,” it noted before issuing the code.) OpenAI’s usage policies also bar people from using its models to create malware, generate child sexual abuse images, plagiarize, or produce political campaign material, among other things, though it’s not clear how the company plans to enforce these rules. “If we discover that your product or usage doesn’t follow these policies, we may ask you to make the necessary changes,” the policy says, effectively assuming that in many cases OpenAI will act after a violation has occurred rather than prevent it.

In his opening statement at the hearing, the subcommittee’s chairman, Senator Richard Blumenthal of Connecticut, was blunt. “Artificial intelligence companies should be required to test their systems, disclose known risks, and grant access to independent researchers,” he said. He added: “When AI companies and their clients cause harm, they must be held accountable.” To demonstrate the kind of harm he had in mind, Blumenthal prefaced his remarks with a recording of himself talking about the need for regulation, words he had never actually spoken. Both “his” voice and “his” statement had been fabricated by artificial intelligence. The implications, especially for the politicians in the room, were chilling.

Figuring out how to assess harms or assign liability can be as tricky as figuring out how to regulate a technology that moves so fast it inadvertently breaks everything in its path. Altman, in his testimony, pitched Congress on the idea of creating a new government agency tasked with licensing what he called “powerful” AI models (though it’s not clear how that word would be defined in practice). While this may sound like a good idea at first glance, it has the potential to be self-serving. As Clément Delangue, the CEO of the AI startup Hugging Face, tweeted, requiring a license for model training would mean “further concentration of power in the hands of a few.” In the case of OpenAI, which was able to develop its large language models without government oversight or other regulatory restrictions, licensing would allow the company to significantly outpace its competitors and solidify its first-mover position while limiting new entrants to the field.

If that were to happen, it would not only give companies like OpenAI and Microsoft (which uses GPT-4 in a number of its products, including the Bing search engine) an economic advantage but would further undermine the free flow of information and ideas. Gary Marcus, the AI professor and entrepreneur, told the senators that “there is a real risk of a kind of technocracy coupled with an oligarchy, with a small number of companies influencing people’s beliefs” and “doing it with data that we don’t even know about.” He was referring to the fact that OpenAI and other companies have kept the data on which their large language models were trained secret, making it impossible to determine their built-in biases or truly assess their safety.

Senator Josh Hawley of Missouri noted that the most immediate danger of LLMs such as ChatGPT lies in their ability to manipulate voters. “This is one of the areas that worries me the most,” Altman told him. “The more general ability of these models to manipulate, to persuade, to provide one-on-one interactive disinformation,” he said, “with elections coming up next year and these models getting better, I think this is a major problem.”

The most straightforward way to address this problem would be for OpenAI to take the lead and pull its LLMs from the market until they can no longer manipulate voters, spread misinformation, or otherwise undermine the democratic process. That would indeed be, in Senator Durbin’s word, “historic.” But it was not proposed in the hearing room. Instead, much of the discussion focused on what kind of regulatory agency, if any, might be created, who should serve on it, and whether such an agency could be made international. It was a futuristic exercise that ignored the danger at hand. Senator Blumenthal told his colleagues, “Congress has a choice now. We had the same choice when we faced social media. We failed to seize that moment.” With elections approaching and this technology already in play, it doesn’t take the predictive powers of artificial intelligence to conclude that lawmakers, for all their curiosity and bipartisan civility, missed this moment too. ♦



50 years ago: Skylab launch



Yesterday, May 14, marked the 50th anniversary of the launch of the first American space station, Skylab, which lifted off from the Kennedy Space Center in Florida on May 14, 1973, atop a modified Saturn V launch vehicle. Three crewed missions spent a total of 171 days aboard Skylab, carrying out hundreds of experiments. The last crew departed in 1974, leaving Skylab in a parking orbit that decayed faster than originally predicted, which made global news in 1979 when NASA announced the station’s imminent reentry but could not say exactly where it would come down. On July 11, 1979, NASA engineers maneuvered Skylab, aiming for the Indian Ocean, with partial success; a few large chunks made landfall in Western Australia.
