The tech giant’s AI chatbot Bard is already notorious for serving up false information as fact.
Google is testing an AI-powered journalism product and pitching it to major news organizations, the New York Times reported on Thursday, citing three sources close to the matter. The Times was reportedly one of the outlets approached by Google.
Known internally as Genesis, the tool is capable of generating news stories based on user inputs – details of current events like who, what, where, or when, the sources said. The company reportedly sees it as “responsible technology” – a middle ground for news organizations not interested in replacing their human staff with generative AI.
In addition to the creep factor – two executives who saw Google’s pitch reportedly called it “unsettling” – Genesis’ mechanized approach to storytelling rubbed some journalists the wrong way. Two insiders told the Times it appears to take for granted the talent required to produce news stories that are not only accurate but well-written.
A spokeswoman for Google insisted that Genesis was “not intended to… replace the essential role journalists have in reporting, creating, and fact-checking their articles” but could instead offer up options for headlines and other writing styles.
One source said Google actually viewed Genesis as more of a “personal assistant for journalists,” capable of automating rote tasks so that writers could focus on more demanding work, like interviewing subjects and reporting in the field.
The discovery that Google was working on a “ChatGPT for journalism” sparked widespread concern that Genesis could open a Pandora’s box of fake news. Google’s AI chatbot Bard quickly became infamous for spinning up complex falsehoods and presenting them as truth following its introduction earlier this year. CEO Sundar Pichai has admitted that while these “hallucinations” appear to be endemic among large language models, no one knows what causes them or how to keep an AI honest.
Worse, Genesis could marginalize real news if Google encourages its adoption by tweaking its search algorithms to prioritize AI-generated content, radio editor Gabe Rosenberg tweeted in response to the New York Times article.
Several well-known news outlets have dabbled with using AI in the newsroom, with less than inspiring results. In under six months, BuzzFeed went from using AI to generate customized quizzes, to churning out dozens of formulaic travel pieces, to announcing that all of its content would be AI-generated, despite promising its writers in January that their jobs were safe.
CNET was caught earlier this year passing off AI-written articles as human-authored content and using AI to rewrite old articles to artificially boost their search engine rankings.
Despite these disasters, OpenAI, the company responsible for ChatGPT, recently began signing deals with major news organizations like the Associated Press to encourage the technology’s adoption in the newsroom.
Image credit: Cottonbro Studio