
TECH

HP wins major fraud case against Autonomy founder and CEO Mike Lynch


Enlarge / Mike Lynch, former CEO of Autonomy Corp., leaves an extradition hearing at Westminster Magistrates’ Court in London on Tuesday, 9 February 2021.

Bloomberg | Getty Images

After years of wrangling, HP has won a civil fraud case against Autonomy founder and CEO Mike Lynch. The ruling in the largest civil fraud trial in UK history came just hours before the UK Home Secretary approved Lynch’s extradition to the United States, where he faces further fraud charges.

The UK High Court ruled that HP was “significantly successful” in proving that Autonomy executives fraudulently inflated the firm’s reported earnings, profits, and value. Back in 2011, HP paid $11 billion for the company and later announced an $8.8 billion writedown. In court, HP sought $5 billion in damages, but the judge said the total amount due would be “significantly less” and would be announced at a later date. Kelvin Nicholls, Lynch’s attorney and a partner at the law firm Clifford Chance, said his client intends to appeal the High Court’s decision. In a later statement, Nicholls said his client is also appealing the extradition order to the UK High Court.

This week’s events are the latest twist in an extradition process that began in November 2019 when the US Embassy in London filed a request that Lynch be tried in the United States on 17 counts, including wire fraud, conspiracy and securities fraud. Lynch denies all charges against him. Nicholas Ryder, a professor of financial crime at the University of the West of England, describes it as “a .45 Colt for the US Department of Justice” – a comprehensive and powerful move. “This is their main charge. The implications for Mr. Lynch are significant.”

At the time of the Autonomy acquisition, the then HP chairman expressed strong enthusiasm about the deal, according to statements subsequently filed in court. The company later said that some former members of Autonomy’s management team “used accounting irregularities, misrepresentations, and disclosure failures to inflate the underlying financial metrics of the company [Autonomy].” Among them was Lynch, then CEO of the firm.

In 2015, HP filed a UK lawsuit against Lynch, alleging that he was involved in publishing false reports that exaggerated the value of Autonomy’s business. Now, more than a decade after the ink on the deal dried and nearly seven years after HP first took Lynch to court, the UK civil case has been complicated by a parallel case brought by the US Department of Justice, the consequences of which could be huge for Lynch. In a related lawsuit, his former Autonomy colleague, Chief Financial Officer Sushovan Hussain, was found guilty of fraud in a US court in May 2019, sentenced to five years in prison, fined $4 million, and ordered to forfeit another $6.1 million.

In July 2021, a London court ruled that Lynch could be extradited, with the judge saying that the findings of the UK civil case would have “very limited relevance” to the US case. Home Secretary Priti Patel had since put off signing an extradition order for a man still on trial in the UK over the same alleged conduct. But now that case is drawing to a close, and Lynch may be running out of options. “He could face a significant prison sentence if found guilty of 17 counts of fraud,” Ryder says of the US criminal charges against Lynch.

The case highlights the oddity of parallel, transatlantic litigation. “We have a situation where a British citizen who is in the UK is accused of fraud against a US company,” says Thomas Cathey, a solicitor at British immigration law firm Gherson, who has followed the Lynch case. That American company used the British courts to bring a civil suit, yet the US Department of Justice now wants to try Lynch on criminal charges in the United States. “There are a lot of factors at play here,” says Cathey, who at a previous firm worked on the case of Scottish hacker Gary McKinnon, who successfully avoided extradition to the United States thanks to the intervention of the then Home Secretary, Theresa May.

Lynch finds himself embroiled in a transatlantic spat that lawyers have described as unprecedented. Patel found herself in a difficult position: by signing the extradition document, she appeared to confirm that legal proceedings in the US take precedence over the case in the UK. Her decision is also another reminder of the perceived imbalance in the UK-US extradition arrangement. Ultimately, US prosecutors could rely on the terms of the 2003 extradition treaty signed between the US and the UK, which allows the US to extradite British citizens for alleged crimes under US law, even if those crimes were allegedly committed in the UK, but not vice versa. However, not everyone agrees with this view. “The story of perceived imbalance often comes from outside the courtroom,” says Richard Cannon, a partner at Stokoe Partnership Solicitors who specializes in criminal and civil defense. “In my experience, courts very rarely can or will consider this.”

Despite this, the case is being closely watched because of the astonishing sums involved and its implications for the UK tech and business community. The concern is that the Lynch case could set a precedent for the primacy of one legal system over another. “I think the US has more aggressive powers to take cases like this against people who are not in the US,” Cathey says. “I think it’s just a general sense of injustice,” he adds.

This story originally appeared on wired.com.

TECH

The Importance of Asset Management for Improving ROI


In the past two years alone, businesses across industries have spent more than $2 trillion on digital transformation. From cloud migration to IoT (Internet of Things) adoption and connectivity with third parties, enterprises are looking to expand their digital presence and improve operational efficiency.

However, this digital push is rapidly expanding organizations’ IT estates—so much so that businesses often lose track of how many IT assets are running on their networks. These invisible, unaccounted-for, and uncontrolled assets often serve as a backdoor for critical cyberattacks.


TECH

Big tech companies are already lobbying to relax European AI rules


European lawmakers are putting the finishing touches on a set of broad rules designed to govern the use of artificial intelligence that, if passed, would make the EU the first major jurisdiction outside of China to adopt targeted regulation of AI. This has made the forthcoming law the subject of fierce debate and lobbying, with opposing sides fighting to have its scope either expanded or narrowed.

Legislators are close to agreeing on a draft of the law, the Financial Times reported last week. After that, the law will move on to negotiations between the bloc’s member states and the executive branch.

The EU AI law is likely to ban controversial AI uses such as social scoring and facial recognition in public places, and force companies to declare whether copyrighted material is being used to train their AI.

The rules could set a global bar for how companies build and deploy their AI systems, because it may be easier for companies to comply with EU rules globally than to build different products for different regions—a phenomenon known as the “Brussels effect.”

“The EU AI law will definitely set the regulatory tone: what does comprehensive AI regulation look like?” says Amba Kak, executive director of the AI Now Institute, a policy research group based at New York University.

One of the Act’s most contentious points is whether so-called “general purpose AI”—the kind of system ChatGPT is built on—should be considered high risk and thus subject to the strictest rules and penalties for misuse. On one side of the debate are big tech companies and a conservative bloc of politicians, who argue that labeling general purpose AI as “high risk” would stifle innovation. On the other is a group of progressive politicians and technologists, who argue that excluding powerful general purpose AI systems from the new rules would be akin to passing social media regulation that does not apply to Facebook or TikTok.

Read more: Artificial intelligence from A to Z

Those who call for regulation of general purpose AI models argue that only the developers of these systems have a realistic understanding of how the models are trained, and therefore of the bias and harm that can result. They say the big tech companies behind AI — the only ones in a position to change how these general-purpose systems are built — would escape accountability if the burden of AI safety were shifted onto smaller companies downstream.

In an open letter published earlier this month, more than 50 AI institutions and experts spoke out against the removal of general purpose AI from the EU rules. “Treating [general purpose AI] as low risk would free the companies at the heart of the AI industry, which make critically important choices about how these models are built, how they work, and who they work for, during development and calibration,” says Meredith Whittaker, president of the Signal Foundation and a signatory of the letter. “It would exempt them from scrutiny, even when these general purpose AIs are the core of their business model.”

Big tech companies like Google and Microsoft, which have invested billions of dollars in AI, are opposing these proposals, according to a report by the Corporate Europe Observatory. Lobbyists argue that general purpose AI becomes dangerous only when it is applied to “high-risk” use cases — often by smaller companies using it to build more niche applications — the Observatory’s report states.

“General purpose AI systems are not target dependent: they are generic in design and do not pose a high risk in and of themselves, as these systems are not designed for any specific purpose,” Google argues in a document sent to the offices of EU commissioners in the summer of 2022, which the Corporate Europe Observatory obtained through freedom of information requests and made public last week. According to Google, classifying general purpose artificial intelligence systems as “high risk” could harm consumers and hinder innovation in Europe.

Microsoft, OpenAI’s largest investor, has made similar arguments through industry groups of which it is a member. “There is no need for the AI Act to have a dedicated section on GPAI [general purpose AI],” reads a 2022 industry group letter signed by Microsoft. “GPAI software suppliers cannot exhaustively guess and anticipate the AI solutions that will be built from their software.” Microsoft has also lobbied against provisions of the EU’s AI Act that it says would “unduly burden innovation,” through The Software Alliance, an industry lobbying group it founded in 1998. The group argues that responsibility for risk should be “assigned to a user who may expose general purpose AI to a risky use [case],” and not to the developer of the general-purpose system itself.

A Microsoft spokesperson declined to comment. Google representatives did not respond to requests for comment in time for publication.

Read more: The AI ​​arms race is changing everything

The EU AI law was first drafted in 2021, at a time when AI mostly meant narrow tools applied to narrow use cases. But over the past two years, major tech companies have begun to successfully develop and launch powerful “general purpose” AI systems that can perform innocuous tasks like writing poetry while also being capable of much riskier behavior. (Think of OpenAI’s GPT-4 or Google’s LaMDA.) Under the business model that has since emerged, these large companies license their powerful general-purpose AI to other businesses, who often tailor it to specific tasks and make it available to the public through an app or interface.

Read more: The new Bing with artificial intelligence threatens users. It’s not funny

Some argue that the EU has put itself in a bind by structuring the AI Act in an outdated way. “The main issue here is that the whole way they structured the EU law many years ago was to have risk categories for different uses of AI,” says Helen Toner, an OpenAI board member and director of strategy at Georgetown University’s Center for Security and Emerging Technology. “The problem they’re having right now is that large language models – general purpose models – don’t have a built-in use case. It’s a big shift in how AI works.”

“Once these models are trained, they are not trained to do anything in particular,” says Toner. “Even the people who make them don’t really know what they can and can’t do. I expect it will probably be years before we really know everything GPT-4 can and can’t do. This is very difficult for a piece of legislation that is built around classifying AI systems according to levels of risk based on their use case.”

More must-read content from TIME


Write to Billy Perrigo at billy.perrigo@time.com.


TECH

Chromebook churn raises questions about the devices’ sustainability


A recent report from the US Public Interest Research Group (PIRG) Education Fund argues that Chromebooks’ durability and sustainability are questionable. Despite their lower price, Chromebooks can end up costing more.

Doubling the life of a Chromebook could result in $1.8 billion in savings for taxpayers. (PIRG)

With the onset of the pandemic in 2020, most classes moved online, prompting school districts to buy these low-cost laptops for students to take home. However, after just three years, many of these Chromebooks began to break down, and this churn points to their short lifespan.

What’s more, they’re said to lack repairability, making the devices less sustainable than their higher-end competitors. The lack of repairability also creates opportunities for large amounts of e-waste to be generated.

What hinders sustainability

Compared to Windows laptops, Chromebooks are less amenable to upgrades. In addition, spare parts are unavailable in most cases. As a result, parts such as hinges, keyboards, and screens, which are vulnerable to drops, bumps, and liquid spills, can leave a device permanently damaged.

PIRG found that approximately 50% of replacement keyboards listed for Chromebooks were out of stock online.

Because of this unavailability, some IT departments have had to buy extra batches of Chromebooks just to keep components on hand. This eventually led to a surge in costs, forcing schools to reconsider their decision to include Chromebooks in their cost-saving strategy.

Another issue with Chromebooks is the automatic update expiration date. Google guarantees eight years of automatic updates for these devices. However, that period begins when the Chromebook model is certified by Google, not when a school receives it.

In most cases, there is a gap between the certification date and the day the Chromebook is handed over to the school. By the time an institution has deployed the devices, the remaining support window has narrowed to 4-5 years. When update support expires, schools are often left with effectively broken Chromebooks, forcing them to buy new batches.

Besides wasting money, this short usable life hits sustainability hard, producing tons of e-waste.

Due to their short usable life, schools cannot resell these devices. They also tend to shy away from recycling, which turns out to be quite expensive. Extending the lifetime of the Chromebooks bought in 2020 (approximately 31 million devices) could reduce carbon emissions by 4.6 million tons, which is equivalent to taking about 900,000 cars off the road for a year.

Recommendations and response

PIRG suggested that Google scrap the inconvenient automatic update expiration system and improve the standardization of parts across Chromebook models.

Google says it is working closely with its device manufacturing partners to address issues related to recyclability, repairability, and sustainability.

PIRG also says Google should make the Chromebook deregistration process easier. In addition, alternative operating systems such as Linux should be supported to make the models more attractive and user-friendly. The move could help Google increase device reuse and resale.

In response to these suggestions, a Google spokesperson stated that the company has made every effort to increase the number of years of update support Chromebooks receive.

Indeed, the company currently offers eight years of automatic updates, up from five years in 2016.

