Artificial intelligence (AI) is everywhere. In the world of academia, it’s having a big impact.
When used appropriately, AI can greatly boost productivity and streamline workflows.
It can be an asset for finding the right journal articles and filing them neatly (something every academic struggles with) or for checking that a final publication is free of grammatical nightmares.
AI is also particularly well suited to organising and analysing data for useful insights.
MAKING LIFE EASIER
Science is, by nature, data heavy.
But with an AI companion, researchers are now able to investigate large datasets faster than ever, spot trends that would otherwise have gone unnoticed and focus their attention on what’s really important.
A great example is Google DeepMind’s AlphaFold.
Before the advent of AI, it could take several years to solve a protein structure – critical information if you want to know what a protein does and how it does it. AlphaFold can predict one in a matter of minutes.
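To get a sense of how accessible those predictions now are, here’s a minimal Python sketch that downloads a precomputed AlphaFold structure from the public AlphaFold Protein Structure Database. The endpoint and response fields are assumptions based on the database’s public REST API, and the UniProt accession is just an example.

```python
import requests

# Sketch only: fetch AlphaFold's predicted structure for one protein from
# the AlphaFold Protein Structure Database. The endpoint and the 'pdbUrl'
# field are assumptions based on the database's public REST API.
ACCESSION = "P69905"  # example UniProt accession (human haemoglobin subunit alpha)

url = f"https://alphafold.ebi.ac.uk/api/prediction/{ACCESSION}"
response = requests.get(url, timeout=30)
response.raise_for_status()

entry = response.json()[0]  # the API returns a list of prediction records
pdb_file = requests.get(entry["pdbUrl"], timeout=30)  # predicted 3D structure

with open(f"{ACCESSION}.pdb", "wb") as f:
    f.write(pdb_file.content)
print(f"Saved AlphaFold prediction for {ACCESSION}")
```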
But that’s not what AI is most widely known for. AI is particularly good at writing.
In contrast, scientists are not known for perfect (or particularly engaging) prose.
Thanks to generative large language models (LLMs) like ChatGPT, even they can have their research read and understood by the masses.
Basically, AI can do a lot. But what happens if AI does it all?
WAIT, AI WROTE THIS?
Well, not this article. (Or any Particle article.) But researchers have been experimenting with AI to see if it really could write an academic article from start to finish.
There are many steps in the creation of a novel research paper: find a gap in the literature, ask a question, conduct experiments, analyse the data and draw a conclusion.
In 2023, a pair of data scientists put their custom-built AI to the test.
It wrote all its own code, analysed a large dataset, established noteworthy trends and presented them in a logical discussion. It even created its own review system to better refine the article.
All this in under an hour.
In 2024, Sakana AI Labs introduced the world to The AI Scientist, a program able to independently generate a ‘novel’ article at a cost no human research team could compete with.
While this might work well for writing a review article, there are still many areas of research that remain beyond the reach of AI – only a human knows how to pipette their life away in the lab.
HALLUCINATING ROBOT LIARS
As impressive as these new models may be, they’re still prone to hallucinations. Not in an “I see dead people” way, but in an “I don’t know the answer so I’m going to make one up” kind of way.
In other words, a hallucination is when AI generates a misleading or false statement because its training data doesn’t cover the answer.
A hallucination can become a recurring problem if it’s incorporated into future training datasets, where it can be replicated indefinitely.
Scientific sleuths hunt for these hallucinations, or ‘digital fossils’, to help uncover fraudulent AI-written papers and have them retracted from the scientific literature.
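What does that sleuthing look like in practice? Here’s a toy Python sketch of the idea: scanning a manuscript for telltale leftover chatbot phrases. The phrase list and the manuscript.txt filename are hypothetical stand-ins for illustration, not a real forensic tool.

```python
# Toy example: flag leftover LLM boilerplate ('digital fossils') in a
# manuscript. The phrase list is an illustrative sample only.
FOSSIL_PHRASES = [
    "as an ai language model",
    "regenerate response",
    "as of my last knowledge update",
    "certainly, here is",
]

def flag_fossils(text: str) -> list[str]:
    """Return any fossil phrases that appear in the text."""
    lowered = text.lower()
    return [phrase for phrase in FOSSIL_PHRASES if phrase in lowered]

with open("manuscript.txt") as f:  # hypothetical input file
    manuscript = f.read()

for phrase in flag_fossils(manuscript):
    print(f"Possible digital fossil: '{phrase}'")
```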
Robot liars aren’t the only problem: AI-derived results can also be fraught with bias.
Because current AI tools are so tuneable, scientists must ensure they’re not training their models to simply cherry-pick the data they want.
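To see why that matters, here’s a small NumPy sketch, using entirely made-up numbers, of how quietly filtering a dataset down to the ‘favourable’ points can manufacture a strong trend out of a weak one:

```python
import numpy as np

# Made-up data: a weak true relationship between dose and response.
rng = np.random.default_rng(0)
dose = rng.uniform(0, 10, 1000)
response = 0.1 * dose + rng.normal(0, 2, size=1000)

full_corr = np.corrcoef(dose, response)[0, 1]

# Cherry-pick: at higher doses, quietly keep only the strong responders.
keep = (dose < 2) | (response > 0.5 * dose)
picked_corr = np.corrcoef(dose[keep], response[keep])[0, 1]

print(f"Correlation on all the data:      {full_corr:.2f}")
print(f"Correlation after cherry-picking: {picked_corr:.2f}")
```

The filtered dataset reports a far stronger dose–response trend than actually exists, which is exactly the kind of result that wouldn’t survive scrutiny.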
Clearly, this kind of generative AI doesn’t hold up to rigorous scientific standards.
Until it can overcome the issues of hallucinations and data biases, automated AI isn’t going to win a Nobel Prize any time soon.
INSTITUTIONS UNDER THREAT
AI may not be good at producing groundbreaking new ideas, but research that builds on current knowledge is still needed, even if it’s only pointing out the obvious.
These AI written articles pose a serious threat to the peer-reviewed publication process.
It’s a system already being undermined by predatory journals and academic paper mills.
An AI model like The AI Scientist could rapidly generate articles, potentially flooding the literature with subpar ‘research’. This would make it even more difficult to spot significant results in the haystack of AI junk.
When discussing the ethics of their tool, even Sakana AI Labs admits in their open access article that “there is significant potential for misuse”.
Academic journals and universities are enforcing strong AI guidelines to try to sidestep these ethical pitfalls.
Turnitin recently released AI detection software to spot AI-generated writing in student work (ironically, an AI system built to sniff out AI).
SO WILL SCIENTISTS GET ROBO-COPPED?
Science is about looking at things in unconventional ways, pursuing novelty and expanding human knowledge.
Despite the power of AI, human scientists still hold the upper hand.