Artificial & Augmented Intelligence: Where Are We Today? Where Are We Heading?


NEW YORK — A variety of new technologies on display at a recent legal tech conference, most built on forms of artificial intelligence (AI) and machine learning, had at least one thing in common: all centered on performing tasks much faster than humans can, giving those humans more time and information for making important decisions.

At the Thomson Reuters Emerging Tech Conference, held December 1, panelists and speakers discussed several types of AI-aided platforms and products presented by various start-ups, as well as blockchain technology. The panelists were enthusiastic about the problems blockchain could solve, especially its potential to provide an identity for the 1.5 billion people who, according to the United Nations, have no legal “identity”. That lack of status makes it hard for such people to enter transactions, because the person on the other side of the deal has no way to judge whether they pose a risk.
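
To make the identity use case concrete, here is a minimal Python sketch of a hash-chained identity record. Everything in it is invented for illustration (the record fields, the attestations, and the make_block/verify helpers); it shows only the general idea of how linked, tamper-evident records could let a counterparty verify a person’s history without a central registry, not any panelist’s actual system.

```python
import hashlib
import json
import time

def make_block(record: dict, prev_hash: str) -> dict:
    """Bundle an identity record with the hash of the previous block."""
    block = {"record": record, "prev_hash": prev_hash, "timestamp": time.time()}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify(block: dict) -> bool:
    """Recompute the hash from the block's contents and compare to its claim."""
    body = {k: v for k, v in block.items() if k != "hash"}
    payload = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() == block["hash"]

# A chain of attestations builds up a verifiable identity over time.
genesis = make_block({"attestation": "birth record", "subject": "person-001"}, "0" * 64)
second = make_block({"attestation": "school enrollment", "subject": "person-001"}, genesis["hash"])

print(verify(genesis), verify(second))  # True True
```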

Data scientists from Microsoft and IBM Watson, along with Julian Togelius, an Associate Professor in Computer Science & Engineering at New York University, tried to define artificial intelligence as an attempt to improve computers’ capabilities so they can extend humans’ own abilities to work with unstructured data. Machine learning, deep learning and cognitive computing are all parts of that effort, but the overall goal is the rapid analysis of vast data sets, producing extractions or recommendations so that humans can reach conclusions and decisions more quickly than previous methods of analysis allowed.
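
As a rough illustration of that “extraction” step, the Python sketch below pulls the most frequent substantive terms out of a toy set of unstructured documents. The documents, stopword list, and extract_key_terms helper are all invented for illustration; real systems use far more sophisticated models, but the shape of the task (text in, ranked extractions out) is the same.

```python
import re
from collections import Counter

# Toy corpus standing in for a vast set of unstructured documents.
documents = [
    "The merger agreement includes an indemnification clause and an escrow provision.",
    "Counsel flagged the indemnification clause during due diligence on the merger.",
    "The escrow provision releases funds after closing conditions are met.",
]

STOPWORDS = {"the", "an", "and", "on", "during", "after", "are", "includes", "met"}

def extract_key_terms(docs: list[str], top_n: int = 5) -> list[tuple[str, int]]:
    """Count non-stopword tokens across documents and surface the most frequent."""
    counts = Counter()
    for doc in docs:
        tokens = re.findall(r"[a-z]+", doc.lower())
        counts.update(t for t in tokens if t not in STOPWORDS)
    return counts.most_common(top_n)

print(extract_key_terms(documents))
# e.g. [('merger', 2), ('indemnification', 2), ('clause', 2), ('escrow', 2), ('provision', 2)]
```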


A key theme for the non-data scientists at the conference was how quickly AI will develop to take over many more tasks from its human masters. Indeed, the moderator asked several times when AI will improve to the point where he could say into his smartphone, “Please buy me a flight from New York to LA tomorrow afternoon” and the task would simply be done.

Steve Abrams, Vice President of Developer Advocacy for IBM Watson, noted that if you look at AI from an enterprise perspective, you can see that in the not-too-distant future, voice-enabled systems could allow employees using simple voice commands to instantly set up video conferences and document sharing, check the status of a pending deal, or confirm who a colleague’s supervisor is. All of these currently require employees to open applications or systems one by one. Abrams suspects such technology could be commonly available in the workplace in less than five years.
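
One way to picture the architecture Abrams describes is a single voice front end dispatching recognized intents to separate back-end systems. The Python sketch below is a hypothetical stand-in: the intent names, handlers, and responses are all invented, and the speech-recognition layer that would produce the intent is assumed rather than shown.

```python
# Hypothetical handlers standing in for separate enterprise systems.
def set_up_video_conference(params: dict) -> str:
    return f"Conference set up with {params.get('attendees', 'the team')}."

def check_deal_status(params: dict) -> str:
    return f"Deal {params.get('deal_id', '?')} is pending legal review."

def look_up_supervisor(params: dict) -> str:
    return f"{params.get('name', '?')} reports to their department head."

# A dispatch table routes each recognized intent to the right system,
# so one voice interface replaces opening several applications.
HANDLERS = {
    "video_conference": set_up_video_conference,
    "deal_status": check_deal_status,
    "supervisor": look_up_supervisor,
}

def handle(intent: str, params: dict) -> str:
    handler = HANDLERS.get(intent)
    return handler(params) if handler else "Sorry, I didn't understand that."

print(handle("deal_status", {"deal_id": "D-482"}))
```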

Already we are seeing various industries using AI in many types of applications. In the medical field, Abrams noted, a computer can immediately provide an oncology physician with a newly presenting patient’s entire medical history, symptoms and suggested diagnoses; the AI program can also scan the thousands of pages of medical journal articles released each day and flag any new information that may be relevant. In another example, AI can build a predictive model of water usage in drought-stricken areas, based on visual recognition of trees, pools and roof leaks, allowing city planners to consider recommendations or regulations directing consumer water consumption.


AI can also greatly assist in the monitoring and maintenance of oil rigs, which can carry thousands of sensors even though engineers typically examine less than 1% of the available data. AI can ingest a rig’s design documentation along with years of maintenance logs and trouble tickets, so an engineer investigating a problem has better information with which to assess it. “When humans are getting better answers, we have found they then can ask more and better questions,” Abrams said.
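
The sensor point lends itself to a small worked example: rather than reading every stream, a system can surface only the readings worth an engineer’s attention. The Python sketch below uses invented readings and a simple deviation-from-the-mean test, far cruder than a production system, but it shows how the other 99% of data can be screened automatically.

```python
import statistics

# Hypothetical hourly pressure readings from one of thousands of rig sensors.
readings = [101.2, 100.8, 101.0, 101.3, 100.9, 134.7, 101.1, 100.7, 101.2]

def flag_anomalies(values: list[float], k: float = 2.0) -> list[tuple[int, float]]:
    """Return (index, value) pairs more than k standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [(i, v) for i, v in enumerate(values) if abs(v - mean) > k * stdev]

# The engineer reviews only the flagged readings instead of the full stream.
print(flag_anomalies(readings))  # [(5, 134.7)]
```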

However, several audience members questioned the risks and downsides inherent in AI: bias, and worries that machines could eventually act independently of human control or design. Indeed, bias was part of the main discussion, as the panel recalled how Microsoft’s first chatbot on Twitter, Tay, had to be shut down after the account started spouting racist statements it had learned from the way other Twitter users engaged with it.

Indeed, Pavandeep Kalra, Microsoft Director of Data Science, noted that the company learned a valuable lesson from the Tay experience: “We must carefully monitor artificial intelligence, as we cannot always anticipate what could go wrong.” Since AI systems are machines learning from their experiences, they are necessarily dependent on the quality of the data they receive, Kalra explained.
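
Kalra’s point about data quality can be made concrete in a few lines. The Python sketch below is a deliberately naive, invented word-counting “model” (nothing like Tay’s actual design): trained only on skewed examples, it can only echo the skew back, which is the core of the lesson.

```python
from collections import Counter

def train(examples: list[tuple[str, str]]) -> dict[str, Counter]:
    """Learn word -> label counts from (text, label) pairs; nothing else."""
    model: dict[str, Counter] = {}
    for text, label in examples:
        for word in text.lower().split():
            model.setdefault(word, Counter())[label] += 1
    return model

def predict(model: dict[str, Counter], text: str) -> str:
    votes = Counter()
    for word in text.lower().split():
        votes.update(model.get(word, Counter()))
    return votes.most_common(1)[0][0] if votes else "unknown"

# Skewed training data: every example mentioning "users" is labeled hostile.
skewed = [("users are terrible", "hostile"), ("users are awful", "hostile")]
model = train(skewed)
print(predict(model, "users are wonderful"))  # "hostile" -- the model echoes its data
```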

In terms of AI eventually taking actions independent of human decisions, however, the data scientists gathered didn’t appear too concerned. “I’ve never encountered a system yet that didn’t have a power source that I could disconnect,” Abrams said. “So, I’m not really concerned that my AI is going to kill me.”