AI models are, apparently, getting better at lying on purpose.
Two recent studies — one published this week in the journal PNAS and the other last month in the journal Patterns — reveal some jarring findings about large language models (LLMs) and their ability to lie to or deceive human observers on purpose.
In the PNAS paper, German AI ethicist Thilo Hagendorff goes so far as to say that sophisticated LLMs can be encouraged to elicit “Machiavellianism,” or intentional and amoral manipulativeness, which “can trigger misaligned deceptive behavior.”
“GPT-4, for instance, exhibits deceptive behavior in simple test scenarios 99.16% of the time,” the University of Stuttgart researcher writes, citing his own experiments in quantifying various “maladaptive” traits in 10 different LLMs, most of which are different versions within OpenAI’s GPT family.
Billed as a human-level champion in the political strategy board game “Diplomacy,” Meta’s Cicero model was the subject of the Patterns study. As the disparate research group — composed of a physicist, a philosopher, and two AI safety experts — found, the LLM got ahead of […]
I had the opportunity to read the articles upon which this one was based. The ability of AI programming to do this is certainly disturbing. Combine this, however, with the job displacement which is occurring now and will only accelerate, and you have a recipe for incredible social disruption. If you think that the captains of commercialism aren’t working hard to exploit this and game the system, you’re kidding yourself. It is long past time that we put humans first. By that I mean real biological humans, not the artificial creations we call corporations. Until we put humans first we will continue to experience degradation economically, politically, socially, and eventually internationally.