Our ability to decipher the hallmarks of artificial intelligence is more important than ever
According to a recent report from NewsGuard, clickbait sites filled with AI-generated content are on the rise, threatening to flood our information ecosystem with low-quality articles.
As these AI-fueled websites generate increasingly convincing content, consumers and advertisers alike will face the increasingly difficult task of distinguishing human-generated content from AI-generated content.
Cutting corners and cost
With clickbait articles, the speed and ease with which they are produced will always take precedence over quality. There is a reason AI-generated content starts here: website operators stand to reduce operational costs while maximizing content production. Given the choice between hiring a group of content writers and paying a single person capable of overseeing GPT text tools, the choice leans toward AI.
These websites, which often fail to disclose ownership or control, produce a high volume of content on a variety of topics, including politics, health, entertainment, finance, and technology. Some publish hundreds of articles a day, with content that promotes false narratives. Though such content is often dismissed as ‘low quality’, the peddling of misinformation is a consequence of AI that developers cannot afford to take lightly.
That said, there are guardrails built into generative AI software intended to prevent abuse. For example, OpenAI has built protections into ChatGPT to stop it from generating misleading content. These protections are far from perfect, however, and the risk remains that AI tools can be weaponized to produce misinformation.
AI Journalistic Neutrality?
Considering that natural language output is always subject to the developer biases embedded in the platform, AI is not as ‘objective’ as we’d like to think. In fact, AI-generated content is arguably just as biased as human-generated content, if not more so.
For these websites in particular, it’s inconsequential whether humans or AI software create the content. Given the quality of news their consumers opt to receive, there isn’t much to be said for clickbait in general: these websites tend to be riddled with ads and offer little to no factual information about current events.
The hallmarks of artificial intelligence
Identifying content created by AI software can be difficult without using specialized tools, but in the case of the websites identified by the NewsGuard researchers, all the sites had an obvious “tell.” All 49 sites identified by NewsGuard had published at least one article containing error messages commonly found in AI-generated texts, such as “my cutoff date in September 2021,” “as an AI language model,” and “I cannot complete this prompt,” among others.
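As a rough illustration, the kind of “tell” the NewsGuard researchers relied on can be approximated with a simple phrase scan. This is a hypothetical sketch, not the researchers’ actual method: the function name is invented, and the phrase list is only the handful of examples quoted above, so a real screening tool would need a far larger list and more robust matching.

```python
# Telltale phrases found in articles on the flagged sites,
# per the NewsGuard examples quoted above (illustrative, not exhaustive).
AI_TELLS = [
    "as an ai language model",
    "my cutoff date in september 2021",
    "i cannot complete this prompt",
]

def find_ai_tells(text: str) -> list[str]:
    """Return the telltale phrases present in `text` (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in AI_TELLS if phrase in lowered]

article = "Sorry, but as an AI language model, I cannot complete this prompt."
print(find_ai_tells(article))
# → ['as an ai language model', 'i cannot complete this prompt']
```

A scan like this only catches sites careless enough to publish the model’s error messages verbatim; content that has been lightly edited would slip straight through, which is precisely the concern raised below.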
While there are tell-tale signs of AI writing patterns and logic, the proliferation of AI-fueled websites threatens to lower the quality of the information ecosystem by saturating it with clickbait and low-quality articles. In its current state, the problem may not give consumers and advertisers much to worry about just yet. But although seemingly small, it has the potential to evolve into a brand of content a little less ‘recognizable’ as auto-generated.