Two Recent Breakthroughs in Using AI for Scientific Research
What are the implications for scientific research in the years to come of an AI-originated novel scientific breakthrough, and of an AI-generated scientific paper successfully passing peer review?
The use of artificial intelligence (AI) in research is emerging as a potential way to accelerate discoveries and solve complex problems. Two developments announced within the last couple of weeks speak to this direction of travel, and I think we should take note of them.
Sakana AI's Peer-Reviewed Paper
Japanese startup Sakana AI recently made headlines with a bold claim: their AI system, The AI Scientist-v2, generated a scientific paper that successfully passed peer review [1][2]. While this achievement is noteworthy, it deserves some context: the paper was accepted for an ICLR workshop, which typically applies less stringent criteria than a main conference track. Nevertheless, this represents a significant step forward in AI's capability to contribute to the scientific literature.
Google's Co-Scientist Solves Decade-Old Mystery
In a potentially even more striking development, Google's AI tool, Co-Scientist, stunned researchers by solving a decade-old mystery about antibiotic-resistant superbugs in just 48 hours [3]. Professor José R. Penadés and his team had been working on this problem for years; the AI not only confirmed their unpublished findings but also suggested four additional valid hypotheses. This breakthrough demonstrates AI's potential to dramatically accelerate scientific problem-solving by supporting scientists.
It is not difficult to imagine useful scenarios in which a co-scientist could help:
Accelerate drug discovery: By rapidly analyzing vast datasets and generating novel hypotheses, AI could significantly shorten the time required to identify promising drug candidates.
Improve literature review: AI-powered systems could quickly synthesize information from thousands of research papers, helping scientists stay up-to-date with the latest findings.
Boost productivity: By automating routine tasks, AI could allow researchers to focus on higher-level problem-solving and creative thinking.
Some challenges and concerns
While the potential benefits warrant cautious excitement, the integration of AI into scientific research workflows also raises important concerns:
Reproducibility: AI-generated results may not always be reproducible, a cornerstone of scientific integrity. The Royal Society's report on "Science in the Age of AI" highlights this as a key issue [4].
Bias and errors: AI models can inherit biases from their training data or make errors that may be difficult for human researchers to detect.
Transparency: Many AI systems, particularly deep learning models, operate as "black boxes," making it challenging to understand their decision-making processes.
Academic misconduct: There's a risk of researchers misusing AI tools or failing to properly credit their contributions, potentially leading to academic fraud [4].
Data integrity: The quality of research can be compromised when data is polluted by AI-generated content, as seen in some social science studies using platforms like Mechanical Turk [4].
Ensuring Responsible AI Use in Research
To harness the benefits of AI while mitigating risks, we should consider:
Developing clear guidelines: Establishing standards for the use and disclosure of AI in research is crucial.
Enhancing AI literacy: Researchers should be trained to understand AI's capabilities, limitations, and potential biases.
Implementing rigorous validation: AI-generated results should undergo thorough human review and experimental validation.
Promoting transparency: Open-source AI models and clear documentation of AI use in research can enhance reproducibility and foster trust in scientific outcomes.
Fostering interdisciplinary collaboration: Bringing together AI experts, domain scientists, and ethicists to address challenges holistically.
Critical Thinking in the Age of AI
As I discussed in my previous article, "Generative AI, Hallucinations, and Critical Thinking," it's crucial to maintain and develop our critical thinking skills when working with AI.
While AI can be an incredibly powerful tool, it's not infallible. Researchers must approach AI-generated insights with the same skepticism and rigorous verification process they would apply to any other source of information.
A few questions to keep thinking about
As we navigate this new era of AI-assisted scientific research, several questions emerge:
How can we balance the speed and efficiency of AI with the need for thorough validation and reproducibility?
What new skills will scientists need to develop to effectively collaborate with AI systems?
How might AI change the nature of scientific creativity and innovation?
What ethical frameworks should guide the use of AI in high-stakes research, such as drug discovery and development?
And you probably have more questions to add to this list.
Citations:
https://royalsociety.org/news-resources/projects/science-in-the-age-of-ai/
https://www.turtlesai.com/en/pages-2480/sakana-ai-scientist-v2-paper-passes-peer-review-at
https://www.youtube.com/watch?v=WHZz_uuCCoY
https://techcrunch.com/2025/03/05/experts-dont-think-ai-is-ready-to-be-a-co-scientist/
https://www.yahoo.com/news/scientists-spent-10-years-cracking-120000512.html
https://engineering.princeton.edu/news/2024/05/01/science-has-ai-problem-group-says-they-can-fix-it
https://www.perplexity.ai/page/ai-scientist-generates-first-p-tJrxa9GFTR2T_AkN9jJIqw
https://scientificadvice.eu/scientific-outputs/ai-in-science-evidence-review-report/
https://www.theregister.com/2024/05/29/using_ai_in_science_can/
https://news.yale.edu/2024/03/07/doing-more-learning-less-risks-ai-research
https://royalsociety.org/-/media/policy/projects/ai-and-society/AI-revolution-in-science.pdf
https://www.cni.org/news/uk-royal-society-report-on-science-and-ai
https://ukrio.org/ukrio-resources/research-your-ai-research-tools/
https://royalsociety.org/news-resources/projects/ai-and-society/