Hidden below the headline was the fact that AI could actually serve to benefit the workforce in up to half of those cases – but the negatives were hard to ignore.
For example, the technology is likely to exacerbate both income and wealth inequality, creating an even wider gap between the richest and poorest countries.
On an individual level, the article also put paid to the assumption that AI will replace only menial or routine tasks, with high-skilled and well-paid careers equally at risk.
“Jobs that require nuanced judgment, creative problem-solving, or intricate data interpretation – traditionally the domain of highly educated professionals – may now be augmented or even replaced by advanced AI algorithms,” the IMF report concluded.
Does that include the news? And the jobs of the journalists who gather it?
Separating right from wrong
Combatting misinformation is already the media’s biggest challenge, and handing responsibility to AI at this point will only muddy those waters further.
There have been a few recent examples in our own industry trade press of how AI content generation – if left unchecked – can be damaging.
The first was a story in one iGaming news outlet, which falsely stated that Curaçao’s revamped gambling regulation – otherwise known as the LOK – had been rejected in parliament.
This was provably untrue, as the LOK hadn’t even been put to a vote.
The article had no author byline – which is usually a tell-tale sign of AI-generated content – and it remains live even now, suggesting the island’s gambling industry is in “turmoil” due to the results of a parliamentary vote that never occurred.
The article was likely an AI rewrite of an earlier accurate report on the LOK, with the regulation facing mounting criticism from many key stakeholders in Curaçao.
Curaçao finance minister Javier Silvania eventually went public in his condemnation of the “misinformation” surrounding the LOK rejection story.
Another crucial element notably lacking in AI-generated content is context. The next example is not inaccurate, but it is unusual.
One of the biggest media and events companies in our space republished a Financial Times article this month. Again, it was likely an AI copy, without credit or reference to the original source.

The FT profile piece described GeoComply as a “Blackstone-backed business that has cornered a critical part of the US betting market”.
This makes perfect sense, because the paper is writing for a B2C audience that is largely unfamiliar with GeoComply and its role as a behind-the-scenes service provider.
However, most people in gambling know exactly what GeoComply is – a dominant industry leader in the field of geolocation, particularly in the US.
The media and events company in question is also well aware of GeoComply, having crowned the provider as the KYC Solution of the Year back in 2022.
So why did they refer to GeoComply as “a little-known Canadian company” in their editorial coverage last week? It stands out on the page because it doesn’t belong.
The AI had simply lifted that line from the FT article without applying the context that it was now writing for a specialised audience – admittedly more of a prompting issue than a flaw in the technology itself.
I am by no means an AI denouncer. We use the technology on an almost daily basis at NEXT.io.
We have built an AI-headline suggester as a WordPress plug-in and a news aggregation bot that sends leads directly to your email inbox as they are published.
This provides a competitive advantage, because speed is so important in our industry. The benefits here are clear: it saves us from having to manually trawl through a list of sources each day.
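The core of an aggregation bot like this is simple enough to sketch. The snippet below is a minimal illustration only – the function names, feed fields, and the email step are my assumptions for the example, not our actual plugin code:

```python
# Minimal sketch of a news-aggregation bot: parse an RSS feed, pick out
# unseen items, and format them as a plain-text email digest of leads.
# In production this would run on a schedule, fetch each feed over HTTP,
# and send the digest via smtplib; this shows only the parse-and-dedupe core.
import xml.etree.ElementTree as ET


def extract_new_items(rss_xml: str, seen_links: set) -> list:
    """Return (title, link) pairs whose link is not already in seen_links."""
    root = ET.fromstring(rss_xml)
    items = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        if link and link not in seen_links:
            items.append((title, link))
            seen_links.add(link)  # remember it so the next poll skips it
    return items


def format_digest(items: list) -> str:
    """Plain-text email body listing each new lead on its own bullet."""
    return "\n".join(f"- {title}\n  {link}" for title, link in items)
```

Polling a handful of feeds this way and emailing only the unseen items is what turns a daily trawl through sources into a push notification.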
I would rather be ahead of the curve than behind it, because there is no doubt this software has the potential to be transformational. It is already incredibly useful for things such as SEO and foreign language translation.
But being an early adopter also exposes you to the many pitfalls of a technology at such an embryonic stage. We should take care to understand automation’s limitations before recommending it as the solution to every pain point.
Like it or loathe it, AI-generated content is here to stay.
Some of the UK’s biggest regional newspaper publishers are hiring AI editors at present, as are the biggest affiliates in online gambling.
It will change the game in terms of quantity. These outlets will be able to produce more articles and at a faster rate than ever before – and for a fraction of the cost as jobs are culled and wage bills are slashed.
I am sceptical of consumer demand for quantity, though. There is already a sea of content out there, and users will only become more selective about which sources they trust, especially if misinformation is allowed to seep through the cracks.
I think the biggest opportunity of all will come in maintaining an integrity-first approach, and by putting in the work where others are tempted by shortcuts.
Truth and accuracy must remain the priority – and that priority is currently at odds with what AI is able to deliver.