Picture: Pexels
Techspace - Articles produced by an unnamed "AI engine" were quietly published on the well-known technology news website CNET.
The news sparked outrage. Critics noted that the experiment looked like an attempt to do away with the need for beginner writers, and that current-generation AI text generators are notoriously inaccurate.
CNET appeared to be shielding the provocative project from scrutiny: the program was never publicly announced, and the disclosure that the posts were automated was tucked away beneath a human-sounding byline, "CNET Money Staff."
According to the company's editor-in-chief, CNET began using artificial intelligence in November to produce explainers for the site. Since the objective of such stories is essentially to capture search-engine traffic, the whole concept can fairly be described as sending robots to write stories for other robots to read.
But little was known about the decision until this week, when Frank Landymore at Futurism published a report indicating that the company had "quietly" implemented the practice. The story drew widespread attention online, raising concerns about the future of artificial intelligence in journalism and whether it was appropriate to rely so heavily on the technology at this early stage.
The Reason Behind The Use Of AI
Picture: Kung_tom / Shutterstock
The use of AI to create "simple explainers" is an "experiment," in keeping with CNET's history of "trying new technology and separating the hype from reality," CNET editor-in-chief Connie Guglielmo said on Monday in response to the concerns.
She argued that the change would give employees more time and energy to devote to producing the "even more thoroughly researched stories, analyses, features, tests, and advising work we're recognized for."
Guglielmo claimed in the post that each piece created by AI was checked by a human editor before going live. She said CNET has changed the bylines on the AI-generated pieces to make it obvious a robot produced them and to clearly name the editor who reviewed the material in an effort to make that process more transparent. CNET will also continue to review AI's presence on the site.
The compound interest explanation was revised by CNET with a 167-word correction less than an hour after Guglielmo's post went live, correcting mistakes so simple that an unfocused teenager might notice them.
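The article does not spell out what the mistakes were, but the underlying arithmetic of a compound interest explainer is straightforward. As an illustration only, here is a minimal sketch of the standard formula, A = P(1 + r/n)^(nt); the deposit and rate figures are hypothetical, not taken from CNET's piece:

```python
def compound_interest(principal, annual_rate, years, periods_per_year=1):
    """Return the final balance after compound interest is applied."""
    return principal * (1 + annual_rate / periods_per_year) ** (periods_per_year * years)

# Hypothetical example: $10,000 at 3% interest, compounded annually.
deposit = 10_000
balance = compound_interest(deposit, 0.03, years=1)
interest_earned = balance - deposit  # $300: the interest earned, not the balance
```

The distinction the comment draws, between the interest earned and the final balance, is exactly the kind of basic point a human editor reviewing an AI-written money explainer would be expected to catch.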
More than two months' worth of content produced by ChatGPT has been published on CNET. Up to 12 of these stories have appeared on the website in a single day, and 78 in all have run under the bylines "CNET Money Staff" and, now, simply "CNET Money."
At first, the publication seemed eager to keep the AI authorship quiet, disclosing it only in a brief byline description on the robot's "author" page. Then Futurism and other outlets began reporting on it, commentary followed, and CNET editor-in-chief Connie Guglielmo published a statement addressing the practice.
Just as the publication acknowledged its use of AI only after harsh criticism, CNET did not independently identify or attempt to correct the inaccuracies reported on Tuesday. The outlet issued a correction only after Futurism explicitly informed it of several of the errors.
Mistakes happen to everyone, but given that CNET's AI project is still in its early stages, and that this article was published the same day the site's editor went public in response to a barrage of criticism, you'd expect the editors charged with overseeing the AI to be on high alert.
In other words, the problem here goes beyond AI. The technology is maturing at a moment when a prolonged race to the bottom has already hollowed out the journalism industry, creating a perfect storm for media executives looking to cut funding for human writers.