Who’s Liable For Decisions AI and Robotics Make?

An anonymous reader shares a BetaNews article: Reuters news agency reported on February 16 that "European lawmakers called [...] for EU-wide legislation to regulate the rise of robots, including an ethical framework for their development and deployment and the establishment of liability for the actions of robots including self-driving cars."

The question of determining liability for decisions made by robots or artificial intelligence is an interesting and important one as industry adoption of this technology increases and it begins to affect our day-to-day lives more directly. Indeed, as the application of artificial intelligence and machine learning grows, we are likely to witness how it changes the nature of work, businesses, industries and society. And yet, although it has the power to disrupt and drive greater efficiencies, AI faces obstacles, the question of who is liable when something goes awry being one of them.

Like many stakeholders in industry, Members of the European Parliament (MEPs) are trying to tackle this liability question. Many of them are calling for new laws on artificial intelligence and robotics to address legal and insurance liability issues. They also want researchers to adopt common ethical standards in order to "respect human dignity."

Read more of this story at Slashdot.

Copyright © 2017 by Tom Connelly | All Rights Reserved