
Biz Builder Mike

You can't sail today's boat on yesterday's wind - Michael Noel


Technology

Mar 31 2023

Graphcore says £900mn UK supercomputer should use its chips

Graphcore, one of the UK’s most valuable tech start-ups, is demanding that a “meaningful” portion of the government’s new £900mn supercomputer project use its chips, as it battles US rivals such as Nvidia.

Chief executive Nigel Toon told the Financial Times that Graphcore is writing to UK ministers over the issue, saying a deal is crucial at a time when its Silicon Valley investors are pressing the Bristol-based company to consider a move to America to take advantage of generous US semiconductor subsidies.

His comments come after UK chancellor Jeremy Hunt’s announcement in March’s budget of a £900mn investment to build an “exascale” supercomputer that would be “several times more powerful” than what is currently available in the UK. The machine is intended to facilitate research on climate change, drug discovery and AI.

Toon said that failing to include artificial intelligence chips designed by Graphcore — which he argued is the “only company in Europe with this technology and capability” — risked “heading towards a world of tech colonisation” by US companies such as Nvidia. He added this could cause the UK to “lose some of our tech sovereignty”.

In a letter seen by the FT and sent to Hunt, prime minister Rishi Sunak and science minister Michelle Donelan, Toon writes: “Too often we have seen British-made innovation leading the world, only to be edged-out or bought-out by overseas rivals.”

He adds: “We are concerned that unless a significant portion of the budget is explicitly earmarked for UK-based suppliers, this funding commitment will quickly be consumed by digital giants like US-based chipmaker Nvidia.”

Toon is asking the government to reserve a “large percentage” of the budget for “homegrown UK technology companies”, which could also include Arm, the Cambridge-based chip designer owned by Japan’s SoftBank. However, he conceded that there are few, if any, British providers of AI chips other than Graphcore that would benefit.

Toon insisted that Graphcore was “not asking for a handout . . . We are asking for a commercial engagement”. The company has offered to reserve for the UK exascale project up to 3,000 of its processors which are “installed and ready to go” in a data centre in South Wales.

Graphcore’s IPU chips were designed specifically for AI applications and have been used by customers and researchers in the finance, manufacturing, pharmaceutical and automotive sectors, as well as the US government’s Pacific Northwest National Lab.

In its latest fundraising in December 2020, Graphcore raised $222mn at a $2.5bn valuation, making it one of the UK’s most valuable private tech companies. The group reported annual revenues of $5mn and pre-tax losses of $184.5mn in the year to 2021, in its most recent filings to the UK’s Companies House registry.

Graphcore was dealt a blow in 2020 when Microsoft, an early investor, decided not to continue using Graphcore’s IPUs in its Azure cloud computing platform. Microsoft relied on Nvidia’s GPU chips for the supercomputing resources it built as part of its deal with ChatGPT developer OpenAI.

Graphcore and other AI semiconductor start-ups such as Cerebras and Mythic have struggled to dent the momentum of Nvidia, which has a market capitalisation of $676bn — more than five times larger than Intel, which it overtook as the most valuable US chipmaker in 2020.

As the Biden administration begins accepting applications for a share of the $39bn federal funding for semiconductor makers under the country’s Chips Act, Toon said that the topic of moving to the US “comes up frequently” at Graphcore’s board meetings.

He said it is “very clear” that there are “large amounts of money available” for deep tech companies relocating there. But Toon added that Graphcore was “quite settled” in the UK.

“We’ve got a strong talent pool,” Toon said, adding: “We quite like a warm beer.”

Written by Tim Bradshaw in London · Categorized: entrepreneur, Technology · Tagged: entrepreneur, Technology

Mar 31 2023

We need a much more sophisticated debate about AI

The writer is a barrister and the author of ‘The Digital Republic: On Freedom and Democracy in the 21st Century’

The public debate around artificial intelligence sometimes seems to be playing out in two alternate realities.

In one, AI is regarded as a remarkable but potentially dangerous step forward in human affairs, necessitating new and careful forms of governance. This is the view of more than a thousand eminent individuals from academia, politics, and the tech industry who this week used an open letter to call for a six-month moratorium on the training of certain AI systems. AI labs, they claimed, are “locked in an out-of-control race to develop and deploy ever more powerful digital minds”. Such systems could “pose profound risks to society and humanity”. 

On the same day as the open letter, but in a parallel universe, the UK government decided that the country’s principal aim should be to turbocharge innovation. The white paper on AI governance had little to say about mitigating existential risk, but lots to say about economic growth. It proposed the lightest of regulatory touches and warned against “unnecessary burdens that could stifle innovation”. In short: you can’t spell “laissez-faire” without “AI”. 

The difference between these perspectives is profound. If the open letter is taken at face value, the UK government’s approach is not just wrong, but irresponsible. And yet both viewpoints are held by reasonable people who know their onions. They reflect an abiding political disagreement which is rising to the top of the agenda.

But despite this divergence there are four ways of thinking about AI that ought to be acceptable to both sides.

First, it is usually unhelpful to debate the merits of regulation by reference to a particular crisis (Cambridge Analytica), technology (GPT-4), person (Musk), or company (Meta). Each carries its own problems and passions. A sound regulatory system will be built on assumptions that are sufficiently general in scope that they will not immediately be superseded by the next big thing. Look at the signal, not the noise.

Second, we need to be clear-eyed in the way we analyse new advances. This means avoiding the trap of simply asking whether new AI systems are like us, then writing them off if they are not.

The truth is that machine learning systems are nothing like us in the way they are engineered, but they are no less significant for it. To take just one example: the fact that non-human AI systems, perhaps with faces and voices, will soon be able to participate in political debate in a sophisticated way is likely to be more important for the future of democracy than the fact that they do not “think” like humans. Indeed, asking whether a machine learning system can “think” like a human is often as useful as asking whether a car can gallop as fast as a horse.

Third, we should all by now recognise that the challenges thrown up by AI are political in nature. Systems that participate in, or moderate, the free-speech ecosystem will inevitably have some impact on the nature of our democracy. Algorithms that determine access to housing, credit, insurance or jobs will have real implications for social justice. And the rules that are coded into ubiquitous technologies will enlarge or diminish our liberty. Democracy, justice, liberty: when we talk about new technologies, we are often talking about politics, whether we realise it or not. The digital is political.

Finally, talk of regulation should be realistic. There was something naive about the implication in the open letter that the problems of AI governance might be substantially resolved during a six-month moratorium. The UK government probably won’t have reported its consultation results within six months, still less enacted meaningful legislation. At the same time, if we wait for the US, China, and the EU to agree rules for the governance of AI, we are going to be waiting forever. The problem of regulating AI is a generational challenge. The solutions will not be summoned forth in a short burst of legislative energy, like a coder pulling an all-nighter.

We will need a new approach: new laws, new public bodies and institutions, new rights and duties, new codes of conduct for the tech industry. Twentieth-century politics was defined by a debate about how much human activity should be determined by the state, and what should be left to market forces and civil society. In this century, a new question is coming to the fore: to what extent should our lives be directed by powerful digital systems, and on what terms? It will take time to find out if our public discourse can rise to the level of sophistication needed.

 

Written by Jamie Susskind · Categorized: entrepreneur, Technology · Tagged: entrepreneur, Technology

Mar 31 2023

Italy temporarily bans ChatGPT over privacy concerns

Italy’s privacy watchdog has temporarily banned the popular artificial intelligence service ChatGPT made by Microsoft-backed OpenAI, as policymakers across the world seek to respond to the rise of AI products.

The nation’s data protection authority on Friday said it would block access to the chatbot in Italy, while it examines the US company’s collection of personal information during a recent cyber security breach among other issues.

The move comes as AI experts and ethicists sound the alarm about the enormous amounts of data that services such as ChatGPT consume from tens of millions of users around the world, raising concerns about how companies may jeopardise their privacy and safety.

Italy’s watchdog launched an investigation into OpenAI after a cyber security breach last week led to people being shown excerpts of other users’ ChatGPT conversations and their financial information.

The data exposed for a nine-hour period included first and last names, billing addresses, credit card types, credit card expiration dates and the last four digits of their credit cards, according to an email sent by OpenAI to an affected customer, and seen by the Financial Times.
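The leaked email describes card data in the truncated form that payment systems are expected to retain and display: everything but the last four digits hidden. A minimal sketch of that masking convention, using a hypothetical `mask_card_number` helper (not anything from OpenAI's systems):

```python
def mask_card_number(pan: str) -> str:
    """Return a card number with only the last four digits visible,
    the same truncated form described in the leaked data."""
    digits = pan.replace(" ", "").replace("-", "")
    return "*" * (len(digits) - 4) + digits[-4:]

print(mask_card_number("4111 1111 1111 1111"))  # → ************1111
```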

The Rome-based watchdog said OpenAI, led by chief executive Sam Altman, must promptly stop operating ChatGPT in Italy in accordance with the order, after which it has 20 days to respond with counter-evidence. If OpenAI fails to respond within that deadline, it could face a fine of up to €20mn.

A spokesperson for OpenAI said: “We have disabled ChatGPT for users in Italy at the request of the Italian Garante [the data protection authority]. We are committed to protecting people’s privacy and we believe we comply with GDPR and other privacy laws.”

The regulator has acted against chatbots previously, banning Replika.ai in February. That service is known for users who seek to generate conversations involving erotic scenarios.

Italy’s move represents the first regulatory action against the popular chatbot, with policymakers across the world seeking to respond to the rise of generative AI services.

Experts have been concerned about the huge amount of data being hoovered up by the language models behind ChatGPT. OpenAI had more than 100mn monthly active users within two months of its launch. Microsoft’s new Bing search engine, also powered by OpenAI technology, was being used by more than 1mn people in 169 countries within two weeks of its release in January.

This week the likes of Elon Musk and Yoshua Bengio, one of the founding fathers of modern AI methods, called for a six-month pause in developing systems more powerful than the newly launched GPT-4, citing major risks to society.

Some industry experts and insiders said the call was hypocritical and merely a way to allow AI “laggards to catch up” with OpenAI, at a time when large tech companies are competing aggressively to release AI products such as ChatGPT and Google’s Bard.

The Italian regulator said it launched an investigation after noting the “absence of a legal basis that justifies the mass collection and storage of personal data, for the purpose of ‘training’ the algorithms” underlying ChatGPT.

It also said that, according to its internal analysis, ChatGPT did “not always provide accurate information”, which it said leads to the misuse of personal information.

OpenAI has previously said it has resolved cyber security issues related to the leak of information. However, OpenAI will be blocked from processing Italian users’ data through ChatGPT while the probe is in progress.

The regulator also criticised OpenAI’s lack of a filter to verify that children under 13 were not using its service. Specifically, the watchdog claimed underage children were being exposed to content and information that was not appropriate for their “level of self-consciousness”.

Generative AI technologies fall under the regulatory purview of existing data and digital laws such as the GDPR and the Digital Services Act, which oversee some aspects of it. However, the EU is preparing a regulation that will govern how AI is used in Europe, with companies that violate the bloc’s rules facing fines of up to €30mn, or 6 per cent of global annual turnover, whichever is larger.
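The penalty ceiling described here is a simple larger-of-two-numbers rule. A minimal sketch of that calculation, with a hypothetical `max_eu_ai_fine` helper and an illustrative turnover figure:

```python
def max_eu_ai_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound of the proposed EU penalty: €30mn or 6 per cent
    of global annual turnover, whichever is larger."""
    return max(30_000_000, 0.06 * global_annual_turnover_eur)

# For a company with €1bn turnover, 6% (€60mn) exceeds the €30mn floor.
print(max_eu_ai_fine(1_000_000_000))  # → 60000000.0
```

For smaller companies the €30mn floor binds instead: at €100mn turnover, 6 per cent would be only €6mn, so the cap stays at €30mn.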

Written by Madhumita Murgia in London and Silvia Sciorilli Borrelli in · Categorized: entrepreneur, Technology · Tagged: entrepreneur, Technology

Mar 31 2023

GIC/Works Human: Japan’s reliance on floppy disks creates a need for cloud cover

Floppy disks and fax machines are still very much in use in Japan. Old-fashioned business habits have damped demand for enterprise software and cloud computing. Private equity interest in the sector is a sign that the pace of digital transformation could be set to pick up.

Singapore sovereign wealth fund GIC has invested in Works Human Intelligence. It will jointly own the Japanese human resources software provider with private equity firm Bain Capital. The deal values Works Human at Y350bn ($2.6bn). That is more than triple the Y100bn paid by Bain for the business in 2019. GIC takes over about half of Works Human’s shares. The rest are held by company executives and Bain.

The Works Human Intelligence transaction is the largest private equity acquisition of a software company in Japan. Until now, corporate Japan’s slow digital uptake militated against large deals in the sector. The move by GIC, which has mostly focused on real estate deals in Japan, suggests the enterprise cloud sector’s growth prospects look relatively promising.

The global cloud market is dominated by Amazon. Its cloud business has slowed but still grew by a fifth in the fourth quarter. Though its operating income for the quarter dropped marginally on the previous year, it was almost double the operating profit of the company as a whole. Its strong customer pipeline suggests further growth in the coming months.

In Japan spending on public cloud services accounted for just 4 per cent of total IT expenses in 2021, less than half that of North America and Europe. In China, the leading cloud provider Alibaba is already near market saturation domestically.

Shares of Fujitsu, one of Japan’s largest enterprise cloud service providers, are up more than a tenth in the past six months. But they still trade at a conservative 13 times forward earnings, a steep discount to global peers. That leaves ample room for growth in Japan.
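The “13 times forward earnings” multiple mentioned above is simply share price divided by expected earnings per share for the year ahead. A minimal sketch with a hypothetical `forward_pe` helper and illustrative figures (not Fujitsu's actual price or EPS):

```python
def forward_pe(share_price: float, forward_eps: float) -> float:
    """Forward price/earnings multiple: share price divided by
    forecast earnings per share for the coming year."""
    return share_price / forward_eps

# Illustrative only: a stock at ¥19,500 with forecast EPS of ¥1,500
# trades at the 13x forward earnings the article cites for Fujitsu.
print(forward_pe(19_500, 1_500))  # → 13.0
```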

Written by bizbuildermike · Categorized: entrepreneur, Technology · Tagged: entrepreneur, Technology

Mar 30 2023

Japan restricts semiconductor tool exports as China chip war intensifies

Japan plans to impose export restrictions on 23 types of equipment used to make semiconductors, following similar curbs by the US designed to restrict China’s access to cutting-edge chips in an intensifying battle over the technology.

The move by Japan fulfils its side of a three-way agreement with the US and Netherlands that would significantly curtail China’s ability to import equipment used to produce the most advanced types of semiconductors.

Japan has avoided any formal public reference to that agreement, as geopolitical tensions and US-China decoupling have raised pressure on Japanese companies to work out a strategy that allows them to straddle both markets.

In a press conference on Friday morning, Japan’s trade minister Yasutoshi Nishimura said the controls would cover six categories of equipment used in chipmaking that include the most specialised areas of lithography and etching.

The ministry did not explicitly mention China in its release, but Nishimura said the restrictions were part of Japan’s responsibility as a technological nation to contribute to international peace and stability.

“We do not have one particular country in mind with these measures,” said Nishimura.

However, Japanese officials said the scope of its restrictions went further than those imposed last year by the US. Equipment exporters would need licences for all regions, giving the ministry oversight of sales of equipment to third-party countries that could in theory produce high-end chips for military use.

“By expanding the regions that will be covered by the measures, we wanted to address a broader range of risks associated with advanced semiconductor technology,” one of the officials said. “China is not the only risk out there.”

Applied Materials in the US, Dutch group ASML and Japan’s Tokyo Electron dominate the global market for equipment used to produce the highest-end chips found in supercomputers and artificial intelligence systems.

The restrictions, which come into effect in July, will affect a broader range of companies than previously expected. People familiar with negotiations previously said the controls would chiefly affect Tokyo Electron and Nikon, but people with knowledge of the measure said the list of affected companies would be roughly 10 and could include blue-chip tech group Advantest.

In January, the Netherlands and Japan reached a deal with the US aimed at cutting off China from the most advanced chips that could be used in sophisticated weaponry and machines, but Japanese and Dutch officials had disclosed few details until this month.

Prior to the January agreement, the US imposed a series of draconian restrictions on the export of chipmaking equipment to China, but officials had said privately that the overall impact of the scheme would only bite if it were matched by similar moves from Japan and the Netherlands.

Written by Leo Lewis and Kana Inagaki in Tokyo · Categorized: entrepreneur, Technology · Tagged: entrepreneur, Technology

