28/04/2024

Addressing Academic Misconduct in Medical Sciences: The Power of Cooperation and AI-Driven Solutions

Summary

1. 📈 Academic misconduct is on the rise, with increasing instances of plagiarism and retracted papers, particularly in medical sciences. #AcademicMisconduct #Plagiarism 

2. 🔍 The current peer review system is broken, and AI-powered tools can help fix it by detecting false data and ensuring the reliability of new scientific knowledge. #PeerReview #AI 

3. 💊 In medical sciences, academic misconduct can lead to ineffective treatments and patient deaths. #MedicalSciences #PatientSafety 

4. 🤖 However, the extensive use of large language models like ChatGPT in medical research carries the risk of incorporating hallucinations and other mistakes from the model into research findings. #ChatGPT #AIrisks 

5. 📜 The integrity of the body of knowledge that scientists build upon is being eroded by the large number of retracted articles that continue to be cited. #Retractions #ScientificIntegrity 

6. The use of AI-powered tools to check research findings makes the detection of fraud and misconduct much easier. #AItools #FraudDetection 

7.  Cooperation between humans and AI-powered tools can ensure the reliability and validity of new scientific knowledge in medical sciences. #HumanAIcooperation #Science 

8. 📝 Each stage of the peer review process should be bolstered by AI-powered tools and big data techniques, with a "human-in-the-loop" to make final decisions. #PeerReviewProcess #AItools 

9. 💰 The crisis may need to worsen before new companies and organizations are created to address the issue of academic misconduct in medical sciences. #Crisis #BusinessOpportunity

10. 👥 Higher education leaders should advocate for a radical transformation of peer review that combines human cooperation with AI-driven tools for all sciences. #HigherEducation #PeerReviewTransformation

11. This approach will contribute to the development of a more solid knowledge base for our societies, ultimately leading to better education, better health outcomes, and overall prosperity (McKinsey 2020).

#AcademicIntegrity #MedicalResearch #AIinScience #PeerReviewReform #ScientificCredibility


Source: Burgett 2024

Background

After public pressure, some Ivy League universities in the USA have seen their presidents resign. The public has witnessed a type of opportunistic university leadership, rather than leadership based on a good understanding of fundamental principles such as academic freedom, duty of care for students, and institutional autonomy. This is in part because of leaders' own lack of capacity to address urgent issues or crises in a timely manner, which has recently made matters worse, and in part because of very aggressive attacks driven by politicians with extreme ideologies. It comes as no surprise, therefore, that the public perception of higher education, which was never very positive, is now quickly getting worse.

The crises of fake science, plagiarism, and abuse of AI come on top of this. Recently the media have reported an alarming increase in detected cases of plagiarism among politicians, professors, and university presidents, thanks to improvements in AI-powered detection software. This development should serve as a wake-up call to those who plagiarized in their doctoral theses during the 1990s or early 2000s, before plagiarism detection was widely used. Several excellent YouTube channels regularly post informative videos on the credibility crisis in higher education, for example https://www.youtube.com/@SabineHossenfelder and https://www.youtube.com/@PeteJudo1. It should also propel university leaders into action, but what to do first?

The situation is even more concerning when it comes to fake scientific papers and articles. Over the past few years, the number of papers retracted because of false or fabricated data has grown exponentially. In 2023 alone, over 10,000 sham papers had to be retracted by academic journals, leading The Guardian to report that "The situation has become appalling... fake scientific papers push research credibility to crisis point" (McKie 2024). Made-up data, statistics that are too good to be true, and absurd AI-generated images (Medway) are being reported more and more often. The image below went viral; it was published in the Frontiers journal Frontiers in Cell and Developmental Biology.

Source: Guo 2024

The peer-reviewed article is still considered the gold standard for reliable, valid, and falsifiable new scientific knowledge. However, with the advent of new AI-powered tools, the number of articles that have had to be retracted because dishonest scientists based them on false data has been skyrocketing. To make matters worse, articles that have been retracted by journals for being seriously flawed are being cited even more frequently after retraction.

This is particularly concerning in the medical sciences, where direct contributions to producing longer, healthier lives are made possible by improved medical diagnoses and treatments, including new drugs. These advancements can only be achieved if research is based on a sound body of knowledge.


Source: Nguyen 2024

However, the use of large language models like ChatGPT in medical research carries the risk of incorporating hallucinations from the model into research findings. Recently, Jeremy Nguyen, a researcher at Swinburne University of Technology in Australia, noted that the frequency of words like 'commendable', 'meticulous', 'delve', and 'intricate' in PubMed articles increased more than tenfold between 2022 and 2024 (Nguyen 2024). This suggests that the use of ChatGPT for writing up medical research has increased substantially. Alex Hern, a technology journalist for The Guardian, has explained that ChatGPT uses these words with higher than normal frequency because a large proportion of the low-cost workers who help train the model are based in Africa, where these words are more common in everyday English (Hern 2024).
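For readers who want to try this kind of check themselves, here is a minimal sketch that counts, year by year, how many PubMed records mention each marker word, using Biopython's Entrez interface to the public PubMed search service. The word list and year range are illustrative, and this is only an approximation of the idea, not Nguyen's actual methodology.

```python
# Count PubMed records per year that mention "ChatGPT-flavoured" marker words.
# Requires Biopython (pip install biopython); word list and years are illustrative.
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI asks callers to identify themselves

MARKER_WORDS = ["commendable", "meticulous", "delve", "intricate"]
YEARS = range(2019, 2025)

def pubmed_hits(word: str, year: int) -> int:
    """Return how many PubMed records mention `word` in title/abstract in `year`."""
    query = f'"{word}"[Title/Abstract] AND {year}[pdat]'
    handle = Entrez.esearch(db="pubmed", term=query, retmax=0)
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

for word in MARKER_WORDS:
    counts = {year: pubmed_hits(word, year) for year in YEARS}
    print(word, counts)
```

A sharp jump in these counts after 2022, relative to overall publication growth, is the signal Nguyen pointed to.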

Below, we highlight some well-documented cases of academic misconduct and fraud, both before and after ChatGPT became available. In medical research, such misconduct has led to ineffective treatments and, in some cases, the deaths of patients. Finally, we propose redesigning the institutions for the production of new knowledge so that they harness the power of human cooperation and make use of the most recent AI-powered tools, in order to fix the antiquated and broken institution of peer review in its current form.

The problem with retractions in medical sciences

The integrity of the body of knowledge that scientists build upon is crucial for meaningful scientific progress, and it is now being eroded by the large number of retracted articles that continue to be cited. For example, key journal articles on ivermectin and hydroxychloroquine, drugs allegedly beneficial for treating or preventing COVID-19, were based on fake data but continued to be cited even after their retraction (Piller 2021). These findings were used in some countries to recommend the drugs for COVID-19 prevention or treatment, which was at best ineffectual and at worst damaging.

Source: Marcus 2018

The table above covers all sciences and was created by Adam Marcus, one of the founders of Retractionwatch.org, for his 2018 article "A scientist's fraudulent studies put patients at risk" (Marcus 2018). The current number two on the retraction leaderboard, the anesthesiologist Joachim Boldt, is an example from the 1990s, before the AI era, and highlights both the dangers of fake medical research and the delays in retraction.

Further investigation into Boldt's work on hydroxyethyl starch administration after surgery revealed that at least ten of the 91 evaluated studies published after 1999 included false data, while ethical issues and possible fraud remain unclear for studies published between 1984 and 1998. This is a common pattern: once a scientist gets away with fraud, they tend to keep committing it.

After Boldt's fraud was exposed, a reanalysis of the data excluding his results revealed that hydroxyethyl starch was actually linked to a significantly increased risk of mortality and acute kidney injury (Zarychanski 2013). By then, Boldt's work had already been used to inform British guidelines on intravenous fluid use. It is likely that dozens of patients died because of this dangerous procedure.

Nowadays, the use of AI-powered tools to check research findings makes detection much easier, which in part explains the rise in retractions. Even in the 1990s, Boldt's fraud was easy to detect: the results were too perfect, and the standard deviations were too small. Astoundingly, calls to act on this, for example by creating an international mechanism to identify fraudulent studies and make retractions more effective, have to this day gone unheeded.
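To show how simple such screening can be, the sketch below applies a Carlisle-style check (named after the anaesthetist John Carlisle, who screened trial baseline data in this way): in genuinely randomised trials, the p-values of baseline comparisons between groups should be roughly uniform, so a pile-up near 1.0, meaning groups that match "too perfectly", is a red flag. The summary statistics in the example are invented, and this is a toy version of the idea rather than the method used in the actual Boldt investigation.

```python
# Toy Carlisle-style screen: pool baseline-comparison p-values across an
# author's trials and test whether they look uniform. The numbers are made up.
from scipy import stats

# (mean, sd, n) per arm for one baseline variable in each reported trial
reported_baselines = [
    ((70.1, 8.2, 40), (70.2, 8.1, 40)),
    ((70.0, 8.0, 55), (70.0, 8.0, 55)),
    ((69.9, 8.3, 30), (70.0, 8.2, 30)),
]

p_values = []
for (m1, sd1, n1), (m2, sd2, n2) in reported_baselines:
    _, p = stats.ttest_ind_from_stats(m1, sd1, n1, m2, sd2, n2, equal_var=True)
    p_values.append(p)

# A strong pile-up of baseline p-values near 1.0 suggests implausibly similar groups
stat, p_uniform = stats.kstest(p_values, "uniform")
print("baseline p-values:", [round(p, 3) for p in p_values])
print(f"KS test against uniformity: p = {p_uniform:.3f}")
```

With only a handful of trials the test has little power, but across dozens of papers by the same author this kind of screen can flag data that are simply too good to be true.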

What to do?

To their credit, over the last few years top scientific journals such as ‘Nature’ and ‘Science’ have published an extensive series of articles addressing the issue of fake science, suggesting potential improvements. However, the current proposals for peer review transformation do not seem comprehensive enough. Achieving meaningful change requires significantly more transparency in the peer review process for all parties involved, including journal editors, authors, and peer reviewers. Cooperative platforms can effectively harness the power of collaboration in evaluating manuscripts. For instance, the European Commission's Horizon Europe program already employs similar platforms to ensure transparency and efficiency in grant proposal evaluations.

Moreover, each stage of the peer review process should be bolstered by AI-powered tools and big data techniques. These tools can facilitate tasks such as:

  • identifying suitable peer reviewers (a toy matching sketch follows this list), 
  • ensuring the article's subject aligns with the editorial policy, 
  • assessing the robustness of the declared research methods, and 
  • verifying whether the article explores a new research question or duplicates previous work. 
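As a toy illustration of the first task, one could rank candidate reviewers by how similar their own recent abstracts are to the submitted manuscript. The sketch below uses simple TF-IDF text similarity; the reviewer names and abstracts are invented, and real editorial systems rely on much richer signals such as citation graphs, co-authorship networks, and conflict-of-interest checks.

```python
# Rank candidate reviewers by TF-IDF cosine similarity to a submitted abstract.
# All names and texts below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

submission = "Randomised trial of hydroxyethyl starch versus saline for volume resuscitation"

candidate_reviewers = {
    "Reviewer A": "Meta-analysis of colloid fluid therapy and acute kidney injury in intensive care",
    "Reviewer B": "Deep learning for histopathology image segmentation",
    "Reviewer C": "Fluid resuscitation strategies and mortality in critically ill patients",
}

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([submission, *candidate_reviewers.values()])

# Similarity between the submission (row 0) and each reviewer profile (rows 1..n)
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
ranked = sorted(zip(candidate_reviewers, scores), key=lambda pair: pair[1], reverse=True)
for name, score in ranked:
    print(f"{name}: similarity {score:.2f}")
```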

AI-powered tools can also do more mundane tasks such as:

  • suggest relevant citations, 
  • eliminate irrelevant ones, 
  • determine if any references have been retracted (see the sketch below),
  • check images and tables,
  • verify the correct use of statistical methods, and 
  • confirm the correspondence between data and statistical findings. 

AI-powered tools can even analyze an article's graphs and images. However, it is essential to maintain a "human-in-the-loop" to make the final decisions.
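To make the retraction check in the list above concrete, here is a rough sketch that asks the public Crossref REST API whether any registered update (such as a retraction notice) points at a cited DOI. The endpoint and the "updates" filter follow Crossref's documented API, but coverage of retractions in the registry is imperfect, so this should be read as an assumption-laden sketch rather than a complete solution; the example DOI is the retracted Frontiers article shown earlier in this post.

```python
# Ask Crossref whether any registered update of type "retraction" targets a DOI.
import requests

def retraction_notices(doi: str) -> list[dict]:
    """Return Crossref records that declare themselves retractions of `doi`."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"filter": f"updates:{doi}", "rows": 20},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    # Keep only updates whose declared type is a retraction
    return [
        item
        for item in items
        if any(u.get("type") == "retraction" for u in item.get("update-to", []))
    ]

if __name__ == "__main__":
    doi = "10.3389/fcell.2023.1339390"  # the retracted Frontiers article above
    notices = retraction_notices(doi)
    print("RETRACTED" if notices else "no retraction notice found in Crossref", doi)
```

Run over a manuscript's full reference list, a check like this would flag any citation of retracted work before the paper ever reaches a reviewer.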

Unfortunately, researchers are famously resistant to using digital tools; many, for example, still compile bibliographies by hand instead of using a reference manager. Publishing houses also have little incentive to embark on reforms without being forced to do so by an outside authority. To quote Australia's chief scientist, Dr. Cathy Foley: "We’ve set up a crazy system where publishers own and control knowledge, and we’ve let them do that...Researchers give content for free, sign over copyright, and publishers make a lot of money. You can get rubbish, nonsense, and misinformation online for free, but you have to pay for the good stuff. We need to make sure we’re getting the right information out there." (Cassidy 2024)

The crisis will likely need to get worse, and more lives may have to be lost before new business models and organizations are created to deal with this issue. There may be similarities between the university sector and the banking sector, another sector resistant to change. Their practices are now being disrupted by a series of financial apps, one of which advertises, "Good luck banks keeping up!".

Recommendations

We should all be gravely concerned about the cases of fraudulent research and academic misconduct, especially in the medical sciences, as evidenced by an alarming number of retracted papers and plagiarism cases. This not only undermines the credibility of scientific research but also has serious consequences for public health. In this blog post, I highlighted recent misconduct cases that resulted in ineffective treatments and even patient deaths.

To address this issue, I propose redesigning the current institutions for knowledge production by combining human cooperation with AI-driven tools. This approach can fix the broken peer-review system and ensure the reliability and validity of new scientific knowledge. Ultimately, better science will contribute to more knowledge-based societies, better education, better health outcomes, and greater overall prosperity.

The peer-review process must become more transparent, and each phase should be supported by AI-powered tools and big data techniques. This includes identifying the best human peer reviewers, ensuring the article's subject aligns with the editorial policy, checking for new research questions, suggesting relevant citations, and eliminating irrelevant ones. Most importantly, it involves verifying that none of the references have been retracted. AI-powered tools can also check the robustness of the declared research methods, the use of statistical methods, and analyze the article's graphs and images. However, there must always be a "human-in-the-loop" to make the final decision.

Unfortunately, researchers and existing journal publishers may resist taking up this challenge. As a result, the crisis may need to worsen before new companies and organizations are created to address the issue. There is a tremendous business opportunity in changing the business model and creating more equitable and rational outcomes. Nonetheless, I believe that harnessing the power of human cooperation and AI-driven tools is crucial for ensuring the reliability and validity of new scientific knowledge in medical sciences.

Key Takeaways for Higher Education Leaders

The rise of fraudulent research and academic misconduct in medical sciences poses serious consequences for public health and well-being. To address this issue, higher education leaders should advocate for a radical transformation of peer review that combines human cooperation with AI-driven tools for all sciences. It is time now to take charge of this process. A holistic approach involving all key stakeholders will ensure the reliability and validity of new scientific knowledge, ultimately leading to better health outcomes and prosperity for society.

Source: http://www.thechangeleader.com

List of references


BBC, Face-to-Face (2012, August 15). Bertrand Russell - Message To Future Generations (1959). Youtube. Retrieved from https://www.youtube.com/watch?v=ihaB8AFOhZo&t=3s

Brainard, J., & You, J. (2018, October 25). What a massive database of retracted papers reveals about science publishing's 'death penalty'. Science. Retrieved April 22, 2024, from https://www.science.org/content/article/what-massive-database-retracted-papers-reveals-about-science-publishing-s-death-penalty

Burgett, H. (2024). Fake News vs. Real News: The Truth of the Matter. Retrieved April 28, 2024, from https://prstars.net/fake-news-vs-real-news-truth-matter

Cassidy, C. (2024, March 10). Australia’s chief scientist takes on the journal publishers gatekeeping knowledge. The Guardian. Retrieved from https://www.theguardian.com/australia-news/2024/mar/10/australias-chief-scientist-is-taking-on-the-journal-publishing-monopoly-gatekeeping-knowledge

Guo, X., Dong, L., & Hao, D. (2024). RETRACTED: Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway. Front. Cell Dev. Biol., 11, 1339390. doi: 10.3389/fcell.2023.1339390

Hern, A. (2024, April 16). TechScape: How cheap, outsourced labour in Africa is shaping AI English. The Guardian. Retrieved from https://www.theguardian.com/technology/2024/apr/16/techscape-ai-gadgest-humane-ai-pin-chatgpt

McKie, R. (2024, February 3). ‘The situation has become appalling’: fake scientific papers push research credibility to crisis point. The Guardian. Retrieved from https://www.theguardian.com/science/2024/feb/03/the-situation-has-become-appalling-fake-scientific-papers-push-research-credibility-to-crisis-point

Marcus, Adam (2018). A scientist's fraudulent studies put patients at risk. Science, 362(6413), 394. doi: 10.1126/science.362.6413.394-a

Nguyen, J. (2024). Are medical studies being written with ChatGPT? [Post on X]. Retrieved April 20, 2024, from https://twitter.com/JeremyNguyenPhD/status/1774021645709295840

Piller, C. (2021, April 15). Many scientists citing two scandalous COVID-19 papers ignore their retractions. Science. Retrieved April 22, 2024, from https://www.science.org/content/article/many-scientists-citing-two-scandalous-covid-19-papers-ignore-their-retractions

McKinsey (2020) Remes, J., Linzer, K., Singhal, S., Dewhurst, M., Dash, P., Woetzel, L., ...Ramdorai, A. Prioritizing health: A prescription for prosperity. McKinsey & Company. Retrieved from https://www.mckinsey.com/industries/healthcare/our-insights/prioritizing-health-a-prescription-for-prosperity

Zarychanski, R., et al. (2013). Association of Hydroxyethyl Starch Administration With Mortality and Acute Kidney Injury in Critically Ill Patients. JAMA, 309(7), 678–688. doi: 10.1001/jama.2013.430

Wiedermann, C. J., & Joannidis, M. (2018). The Boldt scandal still in need of action: the example of colloids 10 years after initial suspicion of fraud. Intensive Care Med., 44(10), 1735–1737. doi: 10.1007/s00134-018-5289-3

