This article was first published on WA Today
Imagine you’re filling out an online survey about the future of your city. You thoughtfully answer each question, adding your personal insights.
Now imagine that hundreds of other participants aren’t real people at all, but bots: automated programs powered by artificial intelligence that impersonate humans.
What happens when these fake responses mix with real ones?
Can we still trust the results?
This isn’t a hypothetical scenario; it’s exactly what happened during the development of the Perth 2050 report, released last year as a collaboration between Scitech and the Committee for Perth.
The report, designed to shape the city’s future, uncovered a surprising twist: bots impersonating real participants.
Although these bots were identified and filtered out, the incident raises pressing questions.
Why are bots pretending to be us?
What other areas are they influencing, and what happens if no one notices?
Promoted widely across social media, the Perth 2050 survey received more than 2000 responses.
But after a surge of activity on Facebook, it was discovered that roughly 600 of those responses came from bots.
These weren’t simple scripts: they left comments ranging from nonsensical to eerily coherent, responses capable of swaying the analysis if left unchecked.
While the fake responses were removed, the incident highlights a growing issue in an AI-driven world: the infiltration of bots when we expect genuine human input.
The problem isn’t limited to local surveys; bots have become unwelcome participants in major global conversations.
During the 2016 US presidential election, researchers found that up to 20 per cent of political discourse on Twitter (now X) was generated by bots.
They amplified polarising narratives, spread misinformation and created the illusion of widespread support for certain viewpoints.
In Australia, similar concerns have come to light, with the Australian Electoral Commission raising alarms about the potential for AI-driven misinformation.
For example, Australian laws do not prevent the use of deepfake political videos.
This issue takes on added urgency as we approach a pivotal election year, with both a federal election and the Western Australian state election scheduled for early 2025.
How can we know whether comments on news articles, posts on platforms like X, or survey responses are coming from genuine people or from bots?
Without effective measures to detect and counteract bots, they can subtly and persistently manipulate public opinion, one interaction at a time.
While many bots are designed for helpful tasks, the ones that impersonate humans pose significant risks.
When they participate in surveys, drive online trends, or even interact with advertisements, they can distort the very data we rely on to make decisions.
What happens when marketing strategies are based on fake interactions?
Or when public opinion appears to shift because bots, not people, are driving the conversation?
Bots blend in; their comments, likes, and shares often appear credible, making it harder to distinguish between real and artificial influence.
AI has the power to transform lives, but misuse risks eroding trust in public debate.
Transparency, strong oversight, education, and smarter detection tools are key to safeguarding its use.
By addressing these challenges, we can harness AI’s promise to drive progress rather than to sow confusion or sway public discussion, which must remain a very human domain.
Let’s ensure AI works for us, not against us.
Disclaimer:
This article represents the personal views and interpretations of John Chappell. It is intended as an opinion piece and does not necessarily reflect the broader consensus of experts or the scientific community. Readers are encouraged to seek out additional perspectives and verified research for a well-rounded understanding of the topic.