If the internet age has anything like an ideology, it’s that more information and more data and more openness will create a better and more truthful world.
That sounds right, doesn’t it? It has never been easier to know more about the world than it is right now, and it has never been easier to share that knowledge than it is right now. But I don’t think you can look at the state of things and conclude that this has been a victory for truth and wisdom.
What are we to make of that? Why hasn’t more information made us less ignorant and more wise?
Yuval Noah Harari is a historian and the author of a new book called Nexus: A Brief History of Information Networks from the Stone Age to AI. Like all of Harari’s books, this one covers a ton of ground but manages to do it […]
This conversation centers on accountability and transparency, both of which are in short supply in this culture. As the article states:
“The other thing is to ban the bots from the conversations. AI should not take part in human conversations unless it identifies as an AI. We can imagine democracy as a group of people standing in a circle and talking with each other. And suddenly a group of robots enter the circle and start talking very loudly and with a lot of passion. And you don’t know who are the robots and who are the humans. This is what is happening right now all over the world. And this is why the conversation is collapsing. And there is a simple antidote. The robots are not welcome into the circle of conversation unless they identify as bots. There is a place, a room, let’s say, for an AI doctor that gives me advice about medicine on condition that it identifies itself.”
This is exactly what is not happening, as the previous article illustrates with the Pentagon using our tax dollars to create fake AI accounts. Media corporations must be held accountable for the actions of their algorithms, and corporations must be held to a standard of transparency by disclosing when algorithms are influencing the decisions of professionals.
As a professional, I just went through my first algorithmically driven inpatient pre-certification process, and it was scary. The human on the other end of the line didn’t care about clinical details; all she cared about was squeezing my clinical answers into a 0–5 framework. She would accept no nuance, just “So is the answer 0, 1, 2, 3, 4, or 5?” The algorithm spit out a four-day authorization. So who do you argue with when the algorithm decides to deny care? This will occur with increasing frequency in healthcare, and the patient will never know. Until we commit ourselves to transparency and accountability as a society, we will continue down this dark path.