Alphabet CEO Sundar Pichai on Monday:
It’s a really exciting time to be working on these technologies as we translate deep research and breakthroughs into products that truly help people. That’s the journey we’ve been on with large language models. Two years ago we unveiled next-generation language and conversation capabilities powered by our Language Model for Dialogue Applications (or LaMDA for short).
We’ve been working on an experimental conversational AI service, powered by LaMDA, that we’re calling Bard. And today, we’re taking another step forward by opening it up to trusted testers ahead of making it more widely available to the public in the coming weeks.
Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence and creativity of our large language models. It draws on information from the web to provide fresh, high-quality responses. Bard can be an outlet for creativity, and a launchpad for curiosity, helping you to explain new discoveries from NASA’s James Webb Space Telescope to a 9-year-old, or learn more about the best strikers in football right now, and then get drills to build your skills.
LONDON, Feb 8 (Reuters) – Google published an online advertisement in which its much-anticipated AI chatbot Bard delivered an inaccurate answer.
The tech giant posted a short GIF video of Bard in action via Twitter, describing the chatbot as a “launchpad for curiosity” that would help simplify complex topics.
In the advertisement, Bard is given the prompt: “What new discoveries from the James Webb Space Telescope (JWST) can I tell my 9-year old about?”
Bard responds with a number of answers, including one suggesting the JWST was used to take the very first pictures of a planet outside the Earth’s solar system, known as an exoplanet. This is inaccurate.
The first pictures of exoplanets were taken by the European Southern Observatory’s Very Large Telescope (VLT) in 2004, as confirmed by NASA.
The error was spotted hours before Google hosted a launch event for Bard in Paris, where senior executive Prabhakar Raghavan promised that users would use the technology to interact with information in “entirely new ways”.
Hang on though. Bard was not, on a technical level, wrong.
The telescope did take “the very first pictures of a planet outside of our own solar system.” The planet is called LHS 475 b.
Not only was it the JWST’s very first sighting of an exoplanet, it was the first time ever that this specific planet had been pictured. The news was considered a big deal at the time, a month ago, and remains the kind of thing a 9-year-old may well find interesting:
The NASA/ESA/CSA James #Webb Space Telescope has confirmed the presence of an exoplanet for the first time. Formally classified as LHS 475 b, the planet is almost exactly the same size as our own, clocking in at 99% of Earth’s diameter.
— ESA (@esa) January 11, 2023
At pixel time, Alphabet had erased approximately $50bn of market value, not because of an astronomical blind spot but because we, mere meatsacks, were collectively unable to understand its answer. In the field of advanced AI, this kind of thing seems to happen quite often.
A rival bard commented earlier:
Not from the stars do I my judgement pluck;
And yet methinks I have Astronomy,
But not to tell of good or evil luck,
Of plagues, of dearths, or seasons’ quality;
Nor can I fortune to brief minutes tell,
Pointing to each his thunder, rain and wind,
Or say with princes if it shall go well
By oft predict that I in heaven find.
The fault, dear Google, is not in our stars but in ourselves.

Republished from source: https://www.ft.com/content/16986b1e-b96a-4cb5-9450-b96fae622fdd via https://www.ft.com/companies/technology?format=rss