The press freedom group Reporters Without Borders is urging Apple to remove its newly introduced artificial intelligence feature that summarizes news stories after it produced a false headline from the BBC, according to CNN.
The backlash comes after a push notification created by Apple Intelligence and sent to users last week falsely summarized a BBC report that Luigi Mangione, the suspect behind the killing of the UnitedHealthcare chief executive, had shot himself.
The BBC reported it had contacted Apple about the feature “to raise this concern and fix the problem,” but it could not confirm if the iPhone maker had responded to its complaint.
On Wednesday, Reporters Without Borders technology and journalism desk chief Vincent Berthier called on Apple “to act responsibly by removing this feature.”
More broadly, the press freedom organization said it is “very concerned about the risks posed to media outlets by new A.I. tools,” noting that the incident emphasizes how A.I. remains “too immature to produce reliable information for the public, and should not be allowed on the market for such uses.”
In response to the concerns, the BBC said in a statement, “it is essential to us that our audiences can trust any information or journalism published in our name and that includes notifications.”
Apple introduced its generative-AI tool in the US in June, touting the feature’s ability to summarize specific content “in the form of a digestible paragraph, bulleted key points, a table, or a list.” To streamline news media diets, Apple allows users across its iPhone, iPad, and Mac devices to group notifications, producing a list of news items in a single push alert.
Since the AI feature was launched to the public in late October, users have shared that it also erroneously summarized a New York Times story, claiming that Israeli Prime Minister Benjamin Netanyahu had been arrested. In reality, the International Criminal Court had issued a warrant for Netanyahu’s arrest, but readers scrolling their home screens saw only two words: “Netanyahu arrested.”
Apple’s AI troubles are only the latest as news publishers struggle to navigate seismic changes wrought by the budding technology. Since ChatGPT’s launch just over two years ago, several tech giants have launched their own large language models, many of which have been accused of training their chatbots using copyrighted content, including news reports. While some outlets, including The New York Times, have filed lawsuits over the technology’s alleged scraping of content, others, like Axel Springer, whose news brands include Politico, Business Insider, Bild and Welt, have inked licensing agreements with the developers.