Why AIs won’t take over the world, according to Ath Speaker Apoorv Agarwal
November 22, 2017
by Sabrina Hartono
As an avid Athenaeum-goer, I was particularly drawn to Apoorv Agarwal's Ath presentation on artificial intelligence Monday, Nov. 13, because I was skeptical that technology could ever augment areas that generally require abstract human judgment. Literature, law, and the arts, for example, are subjects associated with creative thinking, emotions, and morality, all traits we assume machines lack. It seemed I wasn't the only one: members of the audience I spoke to shared a similar desire to have their views challenged by Agarwal. A friend at the Ath dinner table shared her thoughts about the American Musicological Society annual meeting she had attended the previous weekend. The centerpiece of the musicology exhibition was a presentation on Experiments in Musical Intelligence (EMI, pronounced "Emmy"), a computer program pitted against University of Oregon professor of music theory Dr. Steve Larson in a 1997 Bach-composing competition. The audience's verdict was rather amusing: Dr. Larson's Bach imitation was thought to be written by EMI, the original Bach was attributed to Dr. Larson, and the EMI piece was mistaken for the original Bach.
This raises the curious question of the evolving role of scientific technology in more "human" fields like the arts, which Agarwal discussed during his talk: can artificial intelligence (AI) augment our understanding of literature and change the practice of law?
At the start of his talk, Agarwal, an expert in artificial intelligence and natural language processing, set out two main goals: first, to encourage more people to understand how computers work, and second, to ease anxiety about the possibility of AIs taking over the world.
Agarwal began his presentation with automation and the increasingly mechanized world we live in. "Where can machines replace humans?" he asked. According to his research on the finance world, data collection and processing is a key field with high potential for automation, largely because the work is so systematic. In areas involving more abstract human interactions, such as stakeholder interaction and human resource management, the potential for automation is only 15 percent.
As co-founder and CEO of Text IQ, Agarwal has extensive experience in sentiment analysis, relation extraction, text summarization, and automated Q&A. With this background, he has explored how AI technology can be utilized to understand how people use language to create social and organizational relations.
"Now, how does Text IQ relate to Jane Austen?" Agarwal asked rhetorically. He explained that "literature is rich with complete information on various interactions between characters, making books a great testing ground for Text IQ's AI technology."
Agarwal and his team developed machine learning techniques that combine language analysis with models of human interaction for the legal and compliance sides of a business, such as identifying attorney-client privileged communications.
As Agarwal concluded his talk on just how advanced AI technology has become, the audience grew increasingly convinced that AIs would eventually outsmart us. Nevertheless, Agarwal stressed that it is actually a great time for humans and machines to be working together. Why? However much AIs continue to improve, they will not achieve self-consciousness.
"Self-consciousness would require complete 'evolution' of technology," Agarwal said, "and humans would need to simulate such evolution." But since this evolution would take place in a controlled environment, it is unclear how "self-conscious machines" would behave in an uncontrolled one.
Agarwal definitely achieved the two goals he had set out for himself an hour earlier. It appears that CS5 may be in even higher demand than it already is. But as the audience filed out of the Athenaeum, Agarwal's optimism seemed to be outweighed by lingering anxiety: the question is no longer whether human-like machines will evolve and replace us, but when.