Yann LeCun, Meta’s Chief AI Scientist, has warned that liability for mass casualties caused by AI will destroy the industry.
 Credit: Chesnot / Getty

If I build a car that is far more dangerous than other cars, skip safety testing, release it, and it ultimately gets people killed, I will probably be held liable and have to pay damages, if not face criminal penalties.

If I build a search engine that (unlike Google) returns, as the first result for “how can I commit a mass murder,” detailed instructions on how best to carry out a spree killing, and someone uses my search engine and follows those instructions, I likely won’t be held liable, thanks largely to Section 230 of the Communications Decency Act of 1996.

So here’s a question: Is an AI assistant more like a car, where we can expect manufacturers to do safety testing or be held liable if their products get people killed? Or is it more like a search engine?

This is one of the questions animating […]
