May 18, 2022
“Robot Dreams” Class Gets a Dose of Reality
Mela Frye’s elective English class “Robot Dreams” takes students through creation myths, beginning with the Book of Genesis and continuing through Shelley’s Frankenstein, Isaac Asimov’s essays, and other futuristic literature. This Tuesday, the students heard via Zoom from Frye’s brother, Robert Kirkpatrick, director of the United Nations Global Pulse, who spoke about his work with AI and AI ethics. His assessment of the technology was as fascinating as it was chilling.
“I firmly believe that AI will have a greater impact on human society than the invention of fire,” Kirkpatrick said. “It is that powerful. Like fire, its power can be used for good or ill.”
Kirkpatrick has worked at the U.N. for 12 years, running Global Pulse, which was created during the global financial crisis of 2008 and 2009 to understand who around the world was being affected. “We needed to know in real time when people were losing their jobs and couldn’t afford food and medical care.” He said that in the past, collecting such data meant going house to house conducting surveys. Now, Global Pulse applies data science and artificial intelligence to find patterns in that data and identify the programs and policies that are failing people.
Global Pulse’s mission revolves around three verbs: Imagine (possible risks and opportunities in the future and what can be done about them), Build (not only software but also algorithms, tech standards, and legal frameworks to ensure that AI reaches its full potential while protecting human rights), and Mobilize (corporations, academia, and governments to solve problems on a larger scale).
Kirkpatrick shared that there is no fundamental flaw in AI technology; the issues arise in how it is used. He gave an example of how a drone that helps a subsistence farmer plant seeds can make her farm more productive and profitable. However, using 150 drones at a large farm could put people out of work because of their efficiency.
AI works with large sets of data to find patterns that support better decisions around the world. At its highest level, AI can conduct not only predictive analytics but prescriptive analytics as well. For instance, AI could take all the behaviors in a refugee camp and simulate a digital twin. If there were an outbreak of Covid, that data could inform the U.N. whether to quarantine people or to put them in masks. Additionally, using speech recognition, the U.N. can track phone calls to radio stations in which people report local disasters, then use that information to supply relief to the area as quickly as possible.
As beneficial as AI is in solving the world’s problems, it comes with a dark side. Already, video and photographic “deep fakes” have distorted the idea of truth, leaving individuals unable to discern the real from the false. “The bigger risk is not the deep fake that everybody believes is true, but the real video that no one can prove is real,” Kirkpatrick said. “If we can no longer prove that video, photographic, or audio evidence is real, then it cannot be used in court, which means that all digital evidence has no legal value. That’s a very dangerous situation.”
Another pitfall of the technology is allowing AI to literally read minds. In the hands of a totalitarian government, that capability could be used to snuff out dissent and enslave people who disagree with the government’s policies. “Human history shows that we don’t always get it right when we invent something powerful,” Kirkpatrick said. He encouraged students who are thinking of pursuing AI to think through the technology’s unintended consequences and to pursue other applications of the technology that could better society.
After Kirkpatrick’s lecture, the students asked him questions about AI ethics and whether AI was necessary for human progress.
His reply to the latter question is worth noting in full:
“I do think that AI is going to be a massive accelerator of human progress if we can figure out what our relationship to it is. And we haven’t, I think, as a species quite yet decided whether we want it to be a tool or we want it to be our offspring. Because there are consequences of those decisions. If it’s going to be a tool, you can’t give it enough intelligence, such that it could be considered entitled to rights–because then you’re creating slaves. And if AI is going to be our successors (and many people say there’s too much radiation in space and we’re too soft and vulnerable to travel in outer space, and it will be our machine children that travel the stars), if that’s the case, then like our children they grow up, and we have to let them go, and we have to hope that they will forgive us for passing on our flaws to them. We haven’t decided what we want to do with this technology yet, and until we figure that out, the risks are huge. At the UN, we’ve decided. At the UN, at present, this is a tool. And our approach is human dignity first. If the technology crosses a line in terms of reducing human dignity, it should not exist.”