The future of AI regulation is currently being charted in the United States, and the resulting rules will have significant effects on the health sciences, writes Vanderbilt researcher Laura Stark in a new article.
“Medicine’s Lessons for AI Regulation” was published online Dec. 9 in the New England Journal of Medicine and appears in the Dec. 14, 2023, print edition of the journal.
AI regulation is only the latest instance of the U.S. writing rules to safeguard the public as science develops new capabilities, according to Stark, associate professor of medicine, health and society and associate professor of history. Her research focuses on the social impacts of science, medicine and technology, with a specialization in the history of the human sciences.
“Rules governing the treatment of human subjects have traveled a bumpy road since they were first passed in 1974,” Stark writes. “Their history holds insights for AI regulation that aims for efficiency, flexibility and greater justice.”
Stark says one example is the early debate over formal rules for the treatment of human subjects in medicine, the debate that culminated in the National Research Act. The question was less what these regulations should say than who should control them: the government or science professionals.
She cites Henry K. Beecher, often considered a founder of American bioethics, as a staunch opponent of government regulation of human-subjects protections. “Instead, Beecher and his allies advocated for a renewed commitment to professional ethics, which would involve scientists retaining the power to judge the moral acceptability of their own actions,” Stark writes.
“At stake was scientific autonomy and the power of experts in a democracy. In practical terms, the issue was enforcement—specifically, whether rules regarding the treatment of human subjects would carry the force of law or only the soft discipline of colleagues.”
Beecher and his supporters ultimately lost this debate, and in the years after Congress passed the National Research Act, government administrators wrote regulations that ushered in institutional review boards, formalized consent practices and more.
Debates regarding AI have raised similar issues about professional versus governmental authority in the regulation of science, Stark says.
“In July 2023, leaders of seven top AI companies made voluntary commitments to support safety, transparency and antidiscrimination in AI. Some leaders in the field also urged the U.S. government to enact rules for AI, with the stipulation that AI companies set the terms for regulation,” she writes.
“AI leaders’ efforts to create and guide their own oversight mechanisms can be assessed in a similar light to Beecher’s campaign for professional autonomy. Both efforts raise questions about enforcement, the need for hard accountability, and the merits of public values relative to expert judgment in a democracy.”
The past offers valuable lessons for the rapidly evolving field of AI, Stark says.
“The history of human-subjects research suggests that it will be important to keep rules for AI as nimble as the science they regulate,” she writes. “[R]egulation of AI is best envisioned as an ongoing project, to ensure that new rules emerge alongside new scientific possibilities and political contexts.”
More information: Laura Stark, "Medicine's Lessons for AI Regulation," New England Journal of Medicine (2023). DOI: 10.1056/NEJMp2309872