Prof. Kimberly Houser
Assistant Professor at Oklahoma University
General Counsel, Railyard, Inc. and HCP Advisors, Inc.
ABOUT LEGAL TECH
What is legal tech? What is the impact of technology on the classical way of practicing law? Is it more about user experience?
Tech law is the application of current law to new technologies. In the U.S., most of the laws regarding privacy, for instance, were created well before household use of the internet and social media. As such, lawyers must be able to advise organizations on the limits and concerns with respect to their use of technology. Legal Tech, on the other hand, is the use of technology in the field of law. For example, during the discovery process in litigation, a significant number of documents might be turned over to the opposing attorney to review for relevance. Artificial intelligence can sort through the documents looking for keywords much more quickly than human eyes. This frees up a lawyer to focus on the analytical aspects of their practice.
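As a minimal illustration of that first-pass document triage, the following sketch flags documents containing case-relevant keywords; the documents and keywords here are invented for the example, and real e-discovery tools go well beyond simple keyword matching:

```python
# Minimal sketch of keyword-based document triage in e-discovery.
# The documents and keywords below are invented placeholders.

def triage_documents(documents, keywords):
    """Return the subset of documents containing any of the keywords.

    documents: dict mapping a document id to its full text.
    keywords:  iterable of case-relevant terms to search for.
    """
    keywords = [k.lower() for k in keywords]
    flagged = {}
    for doc_id, text in documents.items():
        lowered = text.lower()
        if any(k in lowered for k in keywords):
            flagged[doc_id] = text
    return flagged

docs = {
    "doc-001": "Meeting notes on the merger agreement and due diligence.",
    "doc-002": "Cafeteria menu for the week of March 3rd.",
    "doc-003": "Email thread discussing the indemnification clause.",
}

relevant = triage_documents(docs, ["merger", "indemnification"])
# Only doc-001 and doc-003 are flagged for attorney review.
```

The machine narrows the review set; the attorney still makes the relevance call on what remains.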
You speak at conferences in different countries. Have you seen differences in how countries adopt new legal technologies?
In terms of consumers, all regions, especially emerging markets, are quick to embrace new technologies (such as smartphones, Apple Watches, AirPods, etc.). The difference that I see is in the organizational adoption of technologies. In the U.S., because there are so few restrictions on the use of data by private industry, tech companies are able to push the envelope in a way that those in the EU cannot. I would argue that even China has more restrictions on the private use of data than the U.S., in the form of guidelines, regulations, decisions, and standards. The difference is in restrictions on the government’s use of data. In the EU, public entities are for the most part held to the same standards as private organizations; in the U.S., there are federal regulations limiting what the government can do with data, but these regulations do not seem to be enforced; and in China, there do not appear to be the same limits on the government’s use of data as on private industry.
As an American lawyer, you have spoken about the new data protection law in Europe, the GDPR. What impact does the GDPR have on the AI technology framework, the digitalization of processes, and new systems?
That is a great question. I am currently working on a paper examining the differences in law and policy regarding the future of AI in the U.S., EU, and China, and how the many differences will either help or harm the growth of this field in these blocs.
Data is the fuel on which AI runs. Because the GDPR restricts the use of data and regulates profiling and behavioral marketing, it will limit how AI develops in the EU. Data regulations overall will have a significant impact. One area that is overlooked is the interdependence between these blocs with respect to ancillary technologies. While China leads the way in the development of 5G and the installation of the small cells needed for 5G, it relies heavily on the U.S. for cloud storage and chips.
Because these blocs are trying to create distance between each other, they are potentially slowing down the rollout of some AI technologies, as these will need to be developed independently. For example, self-driving cars will require technologies such as 5G to reduce latency to acceptable levels. It looks like China will be first to complete the installation, with the U.S. second and the EU third. In terms of the development of quantum computing technology, the tech industry in the U.S. currently leads the way. While the governments of China and the EU are investing money in AI, AI development in the U.S. is funded primarily by private industry.
ABOUT DECISION MAKING WITH AI vs ETHICS
You are writing a paper on how AI can solve the diversity problem in the tech industry by using algorithms instead of human decision-making. How do you think AI can support the creation of more diverse teams and identify the level of knowledge companies need?
Human decision-makers are flawed. They have unconscious biases and regularly make inconsistent decisions. The problem is the high level of confidence that humans have in their own decisions and their lack of awareness of how subconscious factors impact them. In my paper, I detail the social science developed by Daniel Kahneman and Amos Tversky, and later expanded upon by Kahneman, to explain how and why we should not trust human decision-makers. Essentially, in the tech industry, when you have large, homogeneous groups of white males responsible for decisions impacting employment, these decisions are suspect because of these biases.
Although the tech industry likes to call itself a meritocracy, it is anything but. Study after study demonstrates that when you remove information regarding gender and race, the hiring of women and underrepresented minorities increases. In addition, “a study reflecting data from the largest open source community (GitHub) with 12 million collaborators across 31 million software repositories showed that while women’s codes were rated more harshly than men’s, when gender was hidden, the women’s codes were found to be rated consistently better.” Kimberly A. Houser, Can AI solve the diversity problem in the tech industry? Mitigating noise and bias in employment decision-making, 22 STAN. TECH. L. REV. (forthcoming 2019) at page 12.
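The blind-screening idea behind those studies can be sketched in a few lines: strip identity-revealing fields from a candidate record before a reviewer or scoring model sees it. The field names below are hypothetical, not drawn from any particular system:

```python
# Sketch of blind screening: remove identity fields from candidate records
# before evaluation. All field names here are hypothetical examples.

IDENTITY_FIELDS = {"name", "gender", "race", "age", "photo_url"}

def redact(candidate):
    """Return a copy of the candidate record without identity fields."""
    return {k: v for k, v in candidate.items() if k not in IDENTITY_FIELDS}

candidate = {
    "name": "Jane Doe",
    "gender": "F",
    "years_experience": 7,
    "languages": ["Python", "Go"],
    "open_source_commits": 412,
}

anonymized = redact(candidate)
# Only merit-related fields remain for the evaluator to see.
```

In practice, free-text fields (resumes, code comments) can still leak identity signals, which is one reason auditing the outcomes also matters.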
The diversity problem in tech is not improving because the industry is focusing on solutions that simply do not work, such as training and mentoring programs. I suggest that unconscious biases are behind this failure but can be mitigated through the use of objective criteria to make employment decisions. AI is that fix.
There have been very notable successes in using AI to improve the hiring, promotion, and retention of highly qualified candidates. What is most encouraging, though, and the point of my paper, is that these highly qualified candidates and employees come from a much more diverse pool than those currently employed in the tech industry.
How do you see digitalized decision-making with AI versus the decision-making process of a human being? Are there ethical aspects to consider when it is only a machine making the decision?
In my paper, I make clear that I am not recommending that machine decision-making replace human decision-making, but humans should not be able to override the algorithmic outcomes without detailed written explanations. I also suggest that the tech industry is in the best position to further develop and implement these technologies because of the data scientists it currently possesses. The key is to increase the diversity of these data scientists in terms of gender, race, and age AND to include legal scholars, social scientists, psychologists, and ethicists on these teams.
How do you detect bias in the decision-making process? What are the psychological versus legal aspects? Did you take differences in culture into account as well?
With respect to human bias, studies demonstrate over and over a preference for those like oneself. Additionally, social scientists have confirmed that humans are unaware of their own prejudices and are seemingly incapable of making unbiased, merit-based decisions. These unconscious biases include affinity bias, confirmation bias, and availability bias, to name a few. The concept of cognitive biases was first introduced by Daniel Kahneman and Amos Tversky, who explained that mental shortcuts result in errors in thinking.
Affinity bias occurs when we show a preference for people who are similar to us. This means that when decisions are made by a homogenous group, such as a committee of white male professors, there is a natural preference to hire those like themselves.
Confirmation bias occurs when a decision-maker only values information that supports their gut instinct. For example, an interviewer will only note characteristics which confirm their initial evaluation of their preferred candidate.
Availability bias comes into play when people find it easier to bring to mind information they have viewed recently. Take this famous riddle as an example: “A father and son are in a horrible car crash that kills the dad. The son is rushed to the hospital. Just as he’s about to go under the knife, the surgeon says, ‘I can’t operate—that boy is my son!’ Explain. A study at Boston University found that most participants could not answer because they were not able to easily envision the surgeon as the boy’s mother.” Kimberly A. Houser, What are the key challenges facing women in academia? SAGE OCEAN BLOG (Mar. 7, 2019).
Detecting bias in machine decision-making focuses on a number of points in the process, but for ease of explanation, I can divide it into input and output. Much of the bias in outcomes results from biased input, that is, the data used. Because the internet, social media, and data from data brokers can reflect prejudice in society as well as errors, such data is not appropriate for identifying traits when hiring a new employee. Using known, limited, clean data sets can prevent some of this type of bias from creeping in. Another risk with data is the skewed data set. For example, Amazon had to stop using AI to choose the best candidates because the data set it used was made up of all of the resumes submitted to it since 2014, the vast majority of which came from males. The results effectively weeded out resumes from women.
However, this exact type of risk can be remedied by balancing the data and/or increasing the diversity of data points. With respect to potential discriminatory outcomes, there are open source technologies in use today that can audit for bias and test for fairness. Kimberly A. Houser, Can AI solve the diversity problem in the tech industry? Mitigating noise and bias in employment decision-making, 22 STAN. TECH. L. REV. (forthcoming 2019) at pages 35-38.
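One common output-side check of the kind such audits perform is comparing selection rates across groups against the EEOC "four-fifths" guideline. A minimal sketch, with invented numbers rather than real hiring data:

```python
# Minimal disparate-impact audit: compare selection rates across groups.
# Under the EEOC "four-fifths" guideline, a ratio below 0.8 flags possible
# adverse impact. All counts below are invented for illustration.

def selection_rate(selected, applicants):
    """Fraction of applicants who were selected."""
    return selected / applicants

def disparate_impact_ratio(rate_group, rate_reference):
    """Ratio of a group's selection rate to the reference group's rate."""
    return rate_group / rate_reference

men_rate = selection_rate(selected=60, applicants=100)    # 0.60
women_rate = selection_rate(selected=30, applicants=100)  # 0.30

ratio = disparate_impact_ratio(women_rate, men_rate)      # 0.50
if ratio < 0.8:
    print(f"Possible adverse impact: ratio = {ratio:.2f}")
```

This is only one fairness metric among several; the open source audit toolkits mentioned above test multiple definitions, which can conflict with one another.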
What is your story? How did you come to work on technology and AI being a lawyer?
I have been an attorney working in technology for many years (negotiating cell phone tower leases, rights of entry, software licensing agreements, web hosting agreements). But it was not until I started working for an Austin start-up that I became heavily involved in tech law. I became interested in AI when I attended a conference at Facebook known as Soc Sci Foo in 2018 and was exposed to discussions on how machines making decisions would impact society.
While initially everything I read after that conference, and everything I heard attending other conferences, was about the dangers of automated decisions, I started researching on my own the potential benefits to society that could result from data-based algorithmic decision-making. Algorithms use objective criteria; humans are unaware of how subjective their decisions are. While I am still very concerned about the U.S. government’s collection, use, and processing of our data, I have come to see how machine-driven decision-making in private industry has an enormous advantage over human decision-making because of its ability to mitigate noise and unconscious biases.
What do your mornings look like?
I am firmly anti-alarm clock, which is why I never agree to teach in the mornings. One thing I do that apparently is not common is that I write in my sleep. I keep a notebook by my bed and will half-wake in the middle of the night and jot down ideas. Sometimes it is just a few words, and sometimes I will have written down so many things that I actually write over earlier scribblings. I roll out of bed, make a café con leche with my Nespresso machine, and journal for a while to process any dreams and review any nighttime notes. I have also found that my brain answers a lot of questions in my sleep. After my café con leche, I read and write every morning before doing my HIIT, showering, and going into the office (unless I am on a research roll, in which case I stay at home).
What do you do in your spare time and on weekends?
I love to watch football and play pool. I hang out with some professors, staff and business people from the community a few times a week and can usually get someone to play pool with me (although they all profess to prefer darts).
Books recommended by Prof. Kimberly Houser:
If you want to know more about her, please read her Golden Rules for Living.