Legal liability for innovations in nanomedicines and medical robotics


By Ammar Younas

Emerging medical technologies and innovative clinical practices — such as the production and transplantation of artificial organs, bio-robotics, Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR), augmented reality, 3D printing, wireless brain sensors, and nanomedicine — pose a significant challenge for the legal system as a whole and for drug regulatory frameworks in particular. The gap between legal development and technological innovation is widening, and there is little doubt that the legal side is lagging behind.

Nanomedicine, for example, although at a comparatively advanced stage of scientific understanding in Asia, is not well integrated into or accommodated by Asia’s medicolegal scholarship. This is because of uncertainties in the development trajectories of nanotechnology, the product properties of nanomedicines, the potential risks of nanocomponents in their administration, and the commercial regulation of nanomedicine at scale.

There is little debate in legal and medical circles about the efficacy and potential benefits of these innovative nanomedical products, whose novel characteristics and functions hold enormous promise across a wide range of applications with a significant positive impact on healthcare. In South and Central Asia, however, legal scholars tend to view almost all medical innovations through a prism of suspicion because of emerging cases of medical negligence. The major concern for now is that nanomedicine, as subject matter, extends its tentacles into drug law, intellectual property law, commercial and trade law, technology law, and environmental law alike.

The increasing complexity and hybrid nature of nanotechnologies affect the functionality of “law in action,” which can lead to legal uncertainty and ultimately to public distrust. The nature of nanomedicine is challenging current classifications of medical knowledge and existing legal regulations worldwide. Even a minor misunderstanding of nanomedicine by policymakers and lawmakers can slow the momentum of nanomedical research.

There is an immediate need for collaboration between nanomedical scientists and academic lawyers to harmonize these technologies within the legal system and to work out conceptual bases for a legal language that scientists and technologists can accept, since finding sustainable foundations for the responsible development of medical technologies is a concern of legal scholars as well.

Besides nanomedicines, the use of robots in surgery is also raising concerns in the legal community. Humanoid robots with artificial intelligence (AI) already mimic humans, and the development of AI-equipped surgical robots is an exciting, inexorable reality — no longer fiction but actual medical practice. Artificial intelligence not only makes such robots functionally independent but also obliges legal scholars to address the complex issues ahead, especially those related to medical malpractice by AI-equipped humanoid robots. The aim in addressing this concern is to suggest a balanced regulatory approach to medical robotics: one that promotes innovation while also defining boundaries for the protection of individuals and the human community at large.

The increased autonomy of robots raises questions about their legal responsibility. For example, the European Parliament has recommended that the European Commission create a separate legal personality of an ‘electronic person’ to make robots responsible for any damage they may cause, possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently. With the advancement of quantum computing and 5G technology, interaction between AI robots, smart medical instruments, and other advanced digital technologies will increase significantly, and the autonomous decisions of robots will make humans more vulnerable to medical malpractice.

From a legal philosophy point of view, I think there is no need to treat robots as persons; at least in the initial stages, AI systems in general, and surgical robots in particular, should be regarded as property in order to limit health and safety risks to human beings. Saudi Arabia has granted citizenship to the humanoid robot ‘Sophia’, conferring legal rights on it, but we cannot hold it liable for negligence if it operates on a human. Since robotic scientists themselves are unsure when they will be able to build fully conscious robots, robots cannot be held liable per se for acts or omissions that cause damage to other parties: they are machines, and liability therefore rests with the owner and the producer. Likewise, it is the duty of applied scientists to seek timely advice from legal scholars regarding innovations that may cause legal complications in the future.


Ammar Younas

Ammar Younas is an ANSO scholar at the School of Humanities, University of Chinese Academy of Sciences, based at the Institute of Automation, Chinese Academy of Sciences. He studied Chinese Law as a Chinese Government Scholar at Tsinghua University School of Law in Beijing, China. Ammar also holds degrees in Medicine, Jurisprudence, Finance, Political Marketing, International and Comparative Politics, and Human Rights from Kyrgyzstan, Italy, and Lebanon. His research interests include, but are not limited to, the societal impact of artificial intelligence (AI), the regulation of AI and emerging technologies, and Central Asian law.
