[House Hearing, 116 Congress]
[From the U.S. Government Publishing Office]


                        ARTIFICIAL INTELLIGENCE:
                   SOCIETAL AND ETHICAL IMPLICATIONS

=======================================================================

                                HEARING

                               BEFORE THE

              COMMITTEE ON SCIENCE, SPACE, AND TECHNOLOGY
                        HOUSE OF REPRESENTATIVES

                     ONE HUNDRED SIXTEENTH CONGRESS

                             FIRST SESSION

                               __________

                             JUNE 26, 2019

                               __________

                           Serial No. 116-32

                               __________

 Printed for the use of the Committee on Science, Space, and Technology


[GRAPHIC NOT AVAILABLE IN TIFF FORMAT]


       Available via the World Wide Web: http://science.house.gov      
     
                             __________
                               

                    U.S. GOVERNMENT PUBLISHING OFFICE                    
36-796PDF                  WASHINGTON : 2019                     
          
--------------------------------------------------------------------------------------      
       
       

              COMMITTEE ON SCIENCE, SPACE, AND TECHNOLOGY

             HON. EDDIE BERNICE JOHNSON, Texas, Chairwoman
ZOE LOFGREN, California              FRANK D. LUCAS, Oklahoma, 
DANIEL LIPINSKI, Illinois                Ranking Member
SUZANNE BONAMICI, Oregon             MO BROOKS, Alabama
AMI BERA, California,                BILL POSEY, Florida
    Vice Chair                       RANDY WEBER, Texas
CONOR LAMB, Pennsylvania             BRIAN BABIN, Texas
LIZZIE FLETCHER, Texas               ANDY BIGGS, Arizona
HALEY STEVENS, Michigan              ROGER MARSHALL, Kansas
KENDRA HORN, Oklahoma                RALPH NORMAN, South Carolina
MIKIE SHERRILL, New Jersey           MICHAEL CLOUD, Texas
BRAD SHERMAN, California             TROY BALDERSON, Ohio
STEVE COHEN, Tennessee               PETE OLSON, Texas
JERRY McNERNEY, California           ANTHONY GONZALEZ, Ohio
ED PERLMUTTER, Colorado              MICHAEL WALTZ, Florida
PAUL TONKO, New York                 JIM BAIRD, Indiana
BILL FOSTER, Illinois                JAIME HERRERA BEUTLER, Washington
DON BEYER, Virginia                  JENNIFFER GONZALEZ-COLON, Puerto 
CHARLIE CRIST, Florida                   Rico
SEAN CASTEN, Illinois                VACANCY
KATIE HILL, California
BEN McADAMS, Utah
JENNIFER WEXTON, Virginia
                         
                         
                         C  O  N  T  E  N  T  S

                             June 26, 2019

                                                                   Page
Hearing Charter..................................................     2

                           Opening Statements

Statement by Representative Eddie Bernice Johnson, Chairwoman, 
  Committee on Science, Space, and Technology, U.S. House of 
  Representatives................................................     8
    Written statement............................................     9

Statement by Representative Jim Baird, Committee on Science, 
  Space, and Technology, U.S. House of Representatives...........     9
    Written statement............................................    11

Written statement by Representative Frank Lucas, Ranking Member, 
  Committee on Science, Space, and Technology, U.S. House of 
  Representatives................................................    11

                               Witnesses:

Ms. Meredith Whittaker, Co-Founder, AI Now Institute, New York 
  University
    Oral Statement...............................................    13
    Written Statement............................................    16

Mr. Jack Clark, Policy Director, OpenAI
    Oral Statement...............................................    32
    Written Statement............................................    34

Mx. Joy Buolamwini, Founder, Algorithmic Justice League
    Oral Statement...............................................    45
    Written Statement............................................    47

Dr. Georgia Tourassi, Director, Oak Ridge National Lab-Health 
  Data Sciences Institute
    Oral Statement...............................................    74
    Written Statement............................................    76

Discussion.......................................................    92

             Appendix I: Answers to Post-Hearing Questions

Ms. Meredith Whittaker, Co-Founder, AI Now Institute, New York 
  University.....................................................   120

Mr. Jack Clark, Policy Director, OpenAI..........................   123

Mx. Joy Buolamwini, Founder, Algorithmic Justice League..........   128

Dr. Georgia Tourassi, Director, Oak Ridge National Lab-Health 
  Data Sciences Institute........................................   135

            Appendix II: Additional Material for the Record

H. Res. 153 submitted by Representative Haley Stevens, 
  Chairwoman, Subcommittee on Research and Technology, Committee 
  on Science, Space, and Technology, U.S. House of 
  Representatives................................................   140

 
                        ARTIFICIAL INTELLIGENCE:
                   SOCIETAL AND ETHICAL IMPLICATIONS

                              ----------                              


                        WEDNESDAY, JUNE 26, 2019

                  House of Representatives,
               Committee on Science, Space, and Technology,
                                                   Washington, D.C.


    The Committee met, pursuant to notice, at 10 a.m., in room 
2318 of the Rayburn House Office Building, Hon. Eddie Bernice 
Johnson [Chairwoman of the Committee] presiding.
[GRAPHICS NOT AVAILABLE IN TIFF FORMAT]

    Chairwoman Johnson. The hearing will come to order. Without 
objection, the Chair is authorized to declare recess at any 
time.
    Good morning, and welcome to our distinguished panel of 
witnesses. We are here today to learn about the societal 
impacts and ethical implications of a technology that is 
rapidly changing our lives, namely, artificial intelligence. 
From friendly robot companions to hostile terminators, 
artificial intelligence (AI) has appeared in films and sparked 
our imagination for many decades.
    Today, it is no longer a futuristic idea, at least not 
artificial intelligence designed for a specific task. Recent 
advances in computing power and increases in data production 
and collection have enabled artificial-intelligence-driven 
technology to be used in a growing number of sectors and 
applications, including in ways we may not realize. It is 
routinely used to personalize advertisements when we browse the 
internet. It is also being used to determine who gets hired for 
a job or what kinds of student essays deserve a higher score.
    Artificial intelligence systems can be a powerful tool for 
good, but they also carry risks. These systems have been shown 
to exhibit gender discrimination when displaying job ads, 
racial discrimination in predictive policing, and socioeconomic 
discrimination when selecting zip codes to offer commercial 
products or services.
    The systems do not have an agenda, but the humans behind 
the algorithms can unwittingly introduce their personal biases 
and perspectives into the design and use of artificial 
intelligence. The algorithms are then trained with data that is 
biased in ways both known and unknown. In addition to resulting 
in discriminatory decisionmaking, biases in design and training 
of algorithms can also cause artificial intelligence to fail in 
other ways, for example, performing worse than clinicians in 
medical diagnostics. We know that these risks exist. What we do 
not fully understand is how to mitigate them.
    We are also struggling with how to protect society against 
intended misuse and abuse. There has been a proliferation of 
general artificial intelligence ethics principles by companies 
and nations alike. The United States recently endorsed an 
international set of principles for the responsible 
development of artificial intelligence. However, the hard work 
is in the translation of 
these principles into concrete, effective action. Ethics must 
be integrated into the earliest stages of artificial 
intelligence research and education, and continue to be 
prioritized at every stage of design and deployment.
    Federal agencies have been investing in this technology for 
years. The White House recently issued an executive order on 
Maintaining American Leadership in Artificial Intelligence and 
updated the 2016 National Artificial Intelligence R&D Strategic 
Plan. These are important steps. However, I also have concerns. 
First, to actually achieve leadership, we need to be willing to 
invest. Second, while a few individual agencies are making ethics 
a priority, the Administration's executive order and strategic 
plan fall short in that regard. When mentioning it at all, they 
approach ethics as an add-on rather than an integral component 
of all artificial intelligence R&D (research and development).
    From improving healthcare, transportation, and education, 
to helping to solve poverty and improving climate resilience, 
artificial intelligence has vast potential to advance the 
public good. However, this is a technology that will transcend 
national boundaries, and if the U.S. does not address AI 
ethics seriously and thoughtfully, we will lose the opportunity 
to become a leader in setting the international norms and 
standards in the coming decades. Leadership is not just about 
advancing the technology; it is about advancing it responsibly.
    I look forward to hearing the insights and recommendations 
from today's expert panel on how the United States can lead in 
the ethical development of artificial intelligence.
    [The prepared statement of Chairwoman Johnson follows:]

    Good morning, and welcome to our distinguished panel of 
witnesses.
    We are here today to learn about the societal impacts and 
ethical implications of a technology that is rapidly changing 
our lives, namely, artificial intelligence.
    From friendly robot companions to hostile terminators, AI 
has appeared in films and sparked our imagination for many 
decades. Today, AI is no longer a futuristic idea, at least not 
AI designed for specific tasks. Recent advances in computing 
power and increases in data production and collection have 
enabled AI-driven technology to be used in a growing number of 
sectors and applications, including in ways we may not realize. 
AI is routinely used to personalize advertisements when we 
browse the internet. It is also being used to determine who 
gets hired for a job or what kinds of student essays deserve a 
higher score.
    AI systems can be a powerful tool for good, but they also 
carry risks. AI systems have been shown to exhibit gender 
discrimination when displaying job ads, racial discrimination 
in predictive policing, and socioeconomic discrimination when 
selecting which zip codes to offer commercial products or 
services.
    The AI systems do not have an agenda, but the humans behind 
the algorithms can unwittingly introduce their personal biases 
and perspectives into the design and use of AI. The algorithms 
are then trained with data that is biased in ways both known 
and unknown. In addition to resulting in discriminatory 
decision-making, biases in the design and training of 
algorithms can also cause AI to fail in other ways, for example 
performing worse than clinicians in medical diagnostics.
    We know that these risks exist. What we do not fully 
understand is how to mitigate them. We are also struggling with 
how to protect society against intended misuse and abuse of AI. 
There has been a proliferation of general AI ethics principles 
by companies and nations alike. The United States recently 
endorsed an international set of principles for the responsible 
development of AI. However, the hard work is in the translation 
of these principles into concrete, effective action. Ethics 
must be integrated at the earliest stages of AI research and 
education, and continue to be prioritized at every stage of 
design and deployment.
    Federal agencies have been investing in AI technology for 
years. The White House recently issued an executive order on 
Maintaining American Leadership in AI and updated the 2016 
National Artificial Intelligence R&D Strategic Plan. These are 
important steps. However, I also have concerns. First, to 
actually achieve leadership, we need to be willing to invest. 
Second, while a few individual agencies are making ethics a 
priority, the Administration's executive order and strategic 
plan fall short in that regard. When mentioning it at all, they 
approach ethics as an add-on rather than an integral component 
of all AI R&D.
    From improving healthcare, transportation, and education, 
to helping to solve poverty and improving climate resilience, 
AI has vast potential to advance the public good. However, this 
is a technology that will transcend national boundaries, and if 
the U.S. does not address AI ethics seriously and thoughtfully, 
we will lose the opportunity to become a leader in setting the 
international norms and standards for AI in the coming decades. 
Leadership is not just about advancing the technology; it's 
about advancing it responsibly.
    I look forward to hearing the insights and recommendations 
from today's expert panel on how the United States can lead in 
the ethical development of AI.

    Chairwoman Johnson. I now recognize Mr. Baird for his 
opening statement.
    Mr. Baird. Thank you, Chairwoman Johnson, for holding this 
hearing today on the societal and ethical implications of 
artificial intelligence (AI).
    In the first half of the 20th century, the concept of 
artificial intelligence was the stuff of science fiction. 
Today, it's a reality. Since the term AI was first coined in 
the 1950s, we have made huge advances in the field of 
artificial narrow intelligence. Narrow AI systems can perform a 
single task like providing directions through Siri or giving 
you weather forecasts. This technology now touches every part 
of our lives and every sector of the economy.
    Driving the growth of AI is the availability of big data. 
Private companies and government have collected large datasets, 
which, combined with advanced computing power, provide the raw 
material for dramatically improved machine-learning approaches 
and algorithms. How this data is collected, used, stored, and 
secured is at the heart of the ethical and policy debate over 
the use of AI.
    AI has already delivered significant benefits for U.S. 
economic prosperity and national security, but it has also 
demonstrated a number of vulnerabilities, including the 
potential to reinforce existing social issues and economic 
imbalances.
    As we continue to lead the world in advanced computing 
research, a thorough examination of potential bias, ethics, and 
reliability challenges of AI is critical to maintaining our 
leadership in technology. The United States must remain the 
leader in AI, or we risk letting other countries who don't 
share our values drive the standards for this technology. To 
remain the leader in AI, I also believe Americans must 
understand and trust how AI technologies will use their data.
    The Trump Administration announced earlier this year an 
executive order on ``Maintaining American Leadership in 
Artificial Intelligence.'' Last week, the Administration's 
Select Committee on AI released a report that identifies its 
priorities for federally funded AI research. I'm glad that the 
Administration is making AI research a priority. This is an 
effort that is going to require cooperation between industry, 
academia, and Federal agencies. In government, these efforts 
will be led by agencies under the jurisdiction of this 
Committee, including NIST (National Institute of Standards and 
Technology), NSF (National Science Foundation), and DOE 
(Department of Energy).
    We will learn more about one of those research efforts from 
one of our witnesses today, Dr. Georgia Tourassi, the Founding 
Director of the Health Data Sciences Institute at Oak Ridge 
National Laboratory. Dr. Tourassi's research focuses on 
deploying AI to provide diagnoses and treatment for cancer. Her 
project is a good example of how cross-agency collaboration and 
government data can responsibly drive innovation for public 
good. I look forward to hearing more about her research.
    Over the next few months, this Committee will be working 
toward bipartisan legislation to support a national strategy on 
artificial intelligence. The challenges we must address are how 
industry, academia, and the government can best work together 
on AI challenges, including ethical and societal questions, and 
what role the Federal Government should play in supporting 
industry as it drives innovation.
    I want to thank our accomplished panel of witnesses for 
their testimony today, and I look forward to hearing what role 
Congress should play in facilitating this conversation.
    [The prepared statement of Mr. Baird follows:]

    Chairwoman Johnson, thank you for holding today's hearing 
on the societal and ethical implications of artificial 
intelligence (AI).
    In the first half of the 20th century, the concept of 
artificial intelligence was the stuff of science fiction. Today 
it is reality.
    Since the term AI was first coined in the 1950s, we have 
made huge advances in the field of artificial narrow 
intelligence.
    Narrow AI systems can perform a single task like providing 
directions through Siri or giving you weather forecasts. This 
technology now touches every part of our lives and every sector 
of the economy.
    Driving the growth of AI is the availability of big data. 
Private companies and government have collected large data 
sets, which, combined with advanced computing power, provide 
the raw material for dramatically improved machine learning 
approaches and algorithms.
    How this data is collected, used, stored, and secured is at 
the heart of the ethical and policy debate over the use of AI.
    AI has already delivered significant benefits for U.S. 
economic prosperity and national security.
    But it has also demonstrated a number of vulnerabilities, 
including the potential to reinforce existing social issues and 
economic imbalances.
    As we continue to lead the world in advanced computing 
research, a thorough examination of potential bias, ethics, and 
reliability challenges of AI is critical to maintaining our 
leadership in this technology.
    The United States must remain the leader in AI, or we risk 
letting other countries who don't share our values drive the 
standards for this technology.
    To remain the leader in AI, I believe Americans must also 
understand and trust how AI technologies will use their data.
    The Trump Administration announced earlier this year an 
Executive Order on "Maintaining American Leadership in 
Artificial Intelligence."
    Last week the Administration's Select Committee on AI 
released a report that identifies its priorities for federally 
funded AI research.
    I am glad that the Administration is making AI research a 
priority.
    This is an effort that is going to require cooperation 
between industry, academia and federal agencies.
    In government, these efforts will be led by agencies under 
the jurisdiction of this Committee, including NIST, NSF and 
DOE.
    We will learn more about one of those research efforts from 
one of our witnesses today, Dr. Georgia Tourassi, the founding 
Director of the Health Data Sciences Institute (HDSI) at Oak 
Ridge National Laboratory. Dr. Tourassi's research focuses on 
deploying AI to provide diagnoses and treatment of cancer.
    Her project is a good example of how cross-agency 
collaboration and government data can responsibly drive 
innovation for public good. I look forward to hearing more 
about her research.
    Over the next few months, this Committee will be working 
towards bipartisan legislation to support a national strategy 
on Artificial Intelligence.
    The challenges we must address are how industry, academia, 
and the government can best work together on AI challenges, 
including ethical and societal questions, and what role the 
federal government should play in supporting industry as it 
drives innovation.
    I want to thank our accomplished panel of witnesses for 
their testimony today and I look forward to hearing what role 
Congress should play in facilitating this conversation.

    Chairwoman Johnson. Thank you very much.
    If there are Members who wish to submit additional opening 
statements, your statements will be added to the record at this 
point.
    [The prepared statement of Mr. Lucas follows:]

    Today, we will explore the various applications and 
societal implications of Artificial Intelligence (AI), a 
complex field of study where researchers train computers to 
learn directly from information without being explicitly 
programmed - like humans do.
    Last Congress, this Committee held two hearings on this 
topic - examining the concept of Artificial General 
Intelligence (AGI) and discussing potential applications for AI 
development through scientific machine learning, as well as the 
cutting-edge basic research it can enable.
    This morning we will review the types of AI technologies 
being implemented all across the country and consider the most 
appropriate way to develop fair and responsible guidelines for 
their use.
    From filtering your inbox for spam to protecting your 
credit card from fraudulent activity, AI technologies are 
already a part of our everyday lives. AI is integrated into 
every major U.S. economic sector, including transportation, 
health care, agriculture, finance, national defense, and space 
exploration.
    This influence will only expand. In 2016, the global AI 
market was valued at over $4 billion and is expected to grow to 
$169 billion by 2025. Additionally, there are estimates that AI 
could add $15.7 trillion to global GDP by 2030.
    Earlier this year, the Trump Administration announced a 
plan for "Maintaining American Leadership in Artificial 
Intelligence."
    Last week, the Administration's Select Committee on 
Artificial Intelligence released a report that identifies its 
priorities for federally funded AI research. These include 
developing effective methods for human-AI collaboration, 
understanding and addressing the ethical, legal, and societal 
implications of AI, ensuring the safety and security of AI 
systems, and evaluating AI technologies through standards and 
benchmarks.
    Incorporating these priorities while driving innovation in 
AI will require cooperation between industry, academia, and the 
Federal government. These efforts will be led by agencies under 
the jurisdiction of this Committee: the National Institute of 
Standards and Technology (NIST), the National Science 
Foundation (NSF), and the Department of Energy (DOE).
    The AI Initiative specifically directs NIST to develop a 
federal plan for the development of technical standards in 
support of reliable, robust, and trustworthy AI technologies. 
NIST plans to support the development of these standards by 
building research infrastructure for AI data and standards 
development and expanding ongoing research and measurement 
science efforts to promote adoption of AI in the marketplace.
    At the NSF, federal investments in AI span fundamental 
research in machine learning, along with the security, 
robustness, and explainability of AI systems. NSF also plays an 
essential role in supporting diverse STEM education, which will 
provide a foundation for the next generation AI workforce. NSF 
also partners with U.S. industry coalitions to emphasize 
fairness in AI, including a program on AI and Society which is 
jointly supported by the Partnership on AI to Benefit People 
and Society (PAI).
    Finally, with its world-leading user facilities and 
expertise in big data science, advanced algorithms, and high-
performance computing, DOE is uniquely equipped to fund robust 
fundamental research in AI.
    Dr. Georgia Tourassi, the founding Director of the Health 
Data Sciences Institute (HDSI), joins us today from Oak Ridge 
National Laboratory (ORNL) - a DOE Office of Science 
Laboratory. Dr. Tourassi's research focuses on deploying AI to 
provide diagnoses and treatment for cancer.
    The future of scientific discovery includes the 
incorporation of advanced data analysis techniques like AI. 
With the next generation of supercomputers, including the 
exascale computing systems that DOE is expected to field by 
2021, American researchers will be able to explore even bigger 
challenges using AI. They will have greater power, and even 
more responsibility.
    Technology experts and policymakers alike have argued that 
without a broad national strategy for advancing AI, the U.S. 
will lose its narrow global advantage. With increasing 
international competition in AI and the immense potential for 
these technologies to drive future technological development, 
it's clear the time is right for the federal government to lead 
these conversations about AI standards and guidelines.
    I look forward to working with Chairwoman Johnson and the 
members of the Committee over the next few months to develop 
legislation that supports this national effort.
    I want to thank our accomplished panel of witnesses for 
their testimony today and I look forward to receiving their 
input.

    Chairwoman Johnson. At this time, I will introduce our 
witnesses. Our first witness is Ms. Meredith Whittaker. Ms. 
Whittaker is a distinguished research scientist at New York 
University and Co-Founder and Co-Director of the AI Now 
Institute, which is dedicated to researching the social 
implications of artificial intelligence and related 
technologies. She has over a decade of experience working in 
the industry, leading product and engineering teams.
    Our next witness is Mr. Jack Clark. Mr. Clark is the Policy 
Director of OpenAI where his work focuses on AI policy and 
strategy. He's also a Research Fellow at the Center for 
Security and Emerging Technology at Georgetown University and a 
member of the Center for a New American Security task force on 
AI and national security. Mr. Clark also helps run the AI 
Index, an 
initiative from the Stanford One Hundred Year Study on AI to 
track AI progress.
    After Mr. Clark is Mx. Joy Buolamwini, who is Founder of 
the Algorithmic Justice League and serves on the Global Tech 
Panel convened by the Vice President of the European Commission to 
advise leaders and technology executives on ways to reduce the 
potential harms of AI. She is also a graduate researcher at MIT 
where her research focuses on algorithmic bias and computer 
vision systems.
    Our last witness is Dr. Georgia Tourassi. Dr. Tourassi is 
the Founding Director of the Health Data Sciences Institute and 
Group Leader of Biomedical Sciences, Engineering, and Computing 
at the Oak Ridge National Laboratory. Her research focuses on 
artificial intelligence for biomedical applications and data-
driven biomedical discovery. Dr. Tourassi also serves on the 
FDA (Food and Drug Administration) Advisory Committee and 
Review Panel on Computer-aided Diagnosis Devices.
    Our witnesses should know that you will have 5 minutes for 
your spoken testimony. Your written testimony will be included 
in the record for the hearing. When you all have completed your 
spoken testimony, we will begin with a round of questions. Each 
Member will have 5 minutes to question the panel.
    We now will start with Ms. Whittaker.

                TESTIMONY OF MEREDITH WHITTAKER,

                  CO-FOUNDER, AI NOW INSTITUTE,

                      NEW YORK UNIVERSITY

    Ms. Whittaker. Chairwoman Johnson, Ranking Member Baird, 
and Members of the Committee, thank you for inviting me to 
speak today. My name is Meredith Whittaker, and I'm the Co-
Founder of the AI Now Institute at New York University. We're 
the first university research institute dedicated to studying 
the social implications of artificial intelligence and 
algorithmic technologies.
    The role of AI in our core social institutions is 
expanding. AI is shaping access to resources and opportunity 
both in government and in the private sector with profound 
implications for hundreds of millions of Americans. These 
systems are being used to judge who should be released on bail; 
to automate disease diagnosis; to hire, monitor, and manage 
workers; and to persistently track and surveil using facial 
recognition. These are a few examples among hundreds. In short, 
AI is quietly gaining power over our lives and institutions, 
and at the same time AI systems are slipping farther away from 
core democratic protections like due process and a right of 
refusal.
    In light of this, it is urgent that Congress act to ensure 
AI is accountable, fair, and just because this is not what is 
happening right now. We at AI Now, along with many other 
researchers, have documented the ways in which AI systems 
encode bias, produce harm, and differ dramatically from many of 
the marketing claims made by AI companies.
    Voice-recognition hears masculine sounding voices better 
than feminine voices. Facial recognition fails to see black 
faces and transgender faces. Automated hiring systems 
discriminate against women candidates. Medical diagnostic 
systems don't work for dark-skinned patients. And the list goes 
on, revealing a persistent pattern of discrimination based on 
gender, race, and other aspects of identity.
    But even when these systems do work as intended, they can 
still cause harm. The application of 100 percent accurate AI to 
monitor, track, and control vulnerable populations raises 
fundamental issues of power, surveillance, and basic freedoms 
in our democratic society. This reminds us that questions of 
justice will not be solved simply by adjusting a technical 
system.
    Now, when regulators, researchers, and the public seek to 
understand and remedy potential harms, they're faced with 
structural barriers. This is because the AI industry is 
profoundly concentrated, controlled by just a handful of 
private tech companies who rely on corporate secrecy laws that 
make independent testing and auditing nearly impossible.
    This also means that much of what we do know about AI is 
written by the marketing departments of these same companies. 
They highlight hypothetical benevolent uses and remain silent 
about the application of AI to fossil fuel extraction, weapons 
development, mass surveillance, and the problems of bias and 
error. Information about the darker side of AI comes largely 
thanks to researchers, investigative journalists, and 
whistleblowers.
    These companies are also notoriously non-diverse. AI Now 
conducted a year-long study of diversity in the AI industry, 
and the results are bleak. To give an example of how bad it is, 
in 2018 the share of women in computer science professions 
dropped below 1960 levels. And this means that women, people of 
color, gender minorities, and others are excluded from shaping 
how AI systems function, and this contributes to bias.
    Now, while the costs of such bias are borne by historically 
marginalized people, the benefits of such systems, from profits 
to efficiency, accrue primarily to those already in positions 
of power. This points to problems that go well beyond the 
technical. We must ask who benefits from AI, who is harmed, and 
who gets to decide? This is a fundamental question of 
democracy.
    Now, in the face of mounting criticism, tech companies are 
adopting ethical principles. These are a positive start, but 
they don't substitute for meaningful public accountability. 
Indeed, we've seen a lot of P.R., but we have no examples where 
such ethical promises are backed by public enforcement.
    Congress has a window to act, and the time is now. Powerful 
AI systems are reshaping our social institutions in ways we're 
unable to measure and contest. These 
systems are developed by a handful of private companies whose 
market interests don't always align with the public good and 
who shield themselves from accountability behind claims of 
corporate secrecy. When we are able to examine these systems, 
too often we find that they are biased in ways that replicate 
historical patterns of discrimination. It is imperative that 
lawmakers regulate to ensure that these systems are 
accountable, accurate, contestable, and that those most at risk 
of harm have a say in how and whether they are used.
    So in pursuit of this goal, AI Now recommends that 
lawmakers, first, require algorithmic impact assessments in 
both public and private sectors before AI systems are acquired 
and used; second, require technology companies to waive trade 
secrecy and other legal claims that hinder oversight and 
accountability mechanisms; third, require public disclosure of 
AI systems involved in any decisions about consumers; and 
fourth, enhance whistleblower protections and protections for 
conscientious objectors within technology companies.
    Thank you, and I welcome your questions.
    [The prepared statement of Ms. Whittaker follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Chairwoman Johnson. Thank you. Mr. Jack Clark.

                    TESTIMONY OF JACK CLARK,

                    POLICY DIRECTOR, OPENAI

    Mr. Clark. Chairwoman Johnson, Ranking Member Baird, and 
Committee Members, thank you for inviting me today. I'm the 
Policy Director for OpenAI, a technical research lab based in 
San Francisco.
    I think the reason why we're here is that AI systems have 
become--and I'm using air quotes--good enough to be deployed 
widely in society, but lots of the problems that we're going to 
be talking about are because of ``good enough'' AI. We should 
ask, ``good enough for who?'', and we should also ask ``good 
enough at what?''
    So to give you some context, recent advances in AI have let 
us write software that can interpret the contents of an image, 
understand waveforms in audio, or classify movements in video, 
and more. At the same time, we're seeing the resources applied 
to AI development grow significantly. One analysis performed by 
OpenAI found that the amount of computing power used to train 
certain AI systems had increased by more than 300,000 times in 
the last 6 years, correlating to significant economic 
investments on the part of primarily industry in developing 
these systems.
    But though these systems have become better at doing the 
tasks we set for them, they display problems in deployment. And 
these problems are typically a consequence of people failing to 
give the systems the right objectives or the right 
training data. Some of these problems include popular image 
recognition systems that have been shown to accurately classify 
products from rich countries and fail to classify products from 
poor countries, voice recognition systems that perform 
extremely badly when dealing with people who are speaking in 
English that is heavily accented, or commercially available 
facial recognition systems that consistently misclassify or 
fail to classify people with darker skin tones.
    These issues arise because many modern machine-learning 
systems automate tasks that require people to make value 
judgments. And when people make value judgments, they encode 
their values into the system, whether that's deciding who gets 
to be in the dataset or what task the system is solving. And 
because, as my co-panelists have mentioned, these people are 
not from a particularly diverse background, you can also expect 
problems when a small group selects the values that are then 
applied to many people.
    These systems can also fail as a consequence of technical 
issues, so image classification systems can be tricked using 
things known as adversarial examples to consistently 
misclassify things they see in an image. More confusingly and 
worryingly, we found that you can break these systems simply by 
putting something in an image that they don't expect to see. 
And one memorable study did this by placing an image of an 
elephant into a picture of a room, which would cause the image 
recognition system to misclassify other things in that room 
even though it wasn't being asked to look at the elephant. So 
that gives you a sense of how brittle these systems can be if 
they're applied in a context which they don't expect.
    I think, though, that these technical issues are in a sense 
going to be easier to deal with than the social issues. The 
questions of how these systems are deployed, who is deploying 
them, and who they're being deployed to help or surveil are the 
questions that I think we should focus on here. And to that end 
I have a few suggestions for things that I think government, 
industry, and academia can do to increase the safety of these 
systems.
    First, I think we need additional transparency. And what I 
mean by transparency is government should convene academia and 
industry to create better tools and tests and assessment 
schemes, such as algorithmic impact assessments or work like 
adding labels to datasets which are widely used so 
that people know what they're using and have tools to evaluate 
their performance.
    Second, government should invest in its own measurement, 
assessment, and benchmarking schemes, potentially through agencies 
such as NIST. The reason we should do this is that, as we 
develop these systems for assessing things like bias, we would 
probably want to roll them into the civil sector and have a 
government agency perform regular testing in partnership with 
academia to give the American people a sense of what these 
systems are good at, what they're bad at, and, most crucially, 
who they're failing.
    Finally, I think government should increase funding for 
interdisciplinary research. A common problem is that these 
systems are developed by a small number of people from 
homogeneous backgrounds, and they also tend to be studied that 
way because 
grants are not particularly friendly to large-scale 
interdisciplinary research projects. So we should think about 
ways we can study AI that brings together computer scientists, 
lawyers, social scientists, philosophers, security experts, and 
more, not just 20 computer science professionals and a single 
lawyer, which is some people's definition of interdisciplinary 
research.
    So, in conclusion, I think we have a huge amount of work to 
do, but I think that there's real work that can be done today 
that can let us develop better systems for oversight and 
awareness of this technology. Thank you very much.
    [The prepared statement of Mr. Clark follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Chairwoman Johnson. Thank you very much. Mx. Joy 
Buolamwini.

                  TESTIMONY OF JOY BUOLAMWINI,

              FOUNDER, ALGORITHMIC JUSTICE LEAGUE

    Mx. Buolamwini. Thank you, Chairwoman Johnson, Ranking 
Member Baird, and fellow Committee Members, for the opportunity 
to testify. I'm an algorithmic bias researcher based at MIT. 
I've conducted studies showing some of the largest recorded 
racial, skin type, and gender biases in systems sold by IBM, 
Microsoft, and Amazon. This research exposes limitations of AI 
systems that are infiltrating our lives, determining who gets 
hired or fired, and even who's targeted by the police.
    Research continues to remind us that sexism, racism, 
ableism, and other intersecting forms of discrimination can be 
amplified by AI. Harms can arise unintended. The interest in 
self-driving cars is in part motivated by the promise they will 
reduce the more than 35,000 annual vehicle fatalities. A June 
2019 study showed that for the task of pedestrian tracking, 
children were less likely to be detected than adults. This 
finding motivates concerns that children could be at higher 
risk for being hit by self-driving cars. When AI-enabled 
technologies are presented as lifesavers, we must ask which 
lives will matter.
    In healthcare, researchers are exploring how to apply AI-
enabled facial analysis systems to detect pain and monitor 
disease. An investigation of algorithmic bias for clinical 
populations showed these AI systems demonstrated poor 
performance on older adults with dementia. Age and ability 
should not impede quality of medical treatment, but without 
care, AI and health can worsen patient outcomes.
    Behavior-based discrimination can also occur, as we see 
with the use of AI to analyze social media content. The U.S. 
Government is monitoring social media activities to inform 
immigration decisions despite a Brennan Center report and a 
USCIS (U.S. Citizenship and Immigration Services) study 
detailing how such methods are largely ineffective for 
determining threats to public safety or national security. 
Immigrants and people in low-income families are especially at 
risk for having to expose their most sensitive information, as 
is in the case when AI systems are used to determine access to 
government services.
    Broadly speaking, AI harms can be traced first to 
privileged ignorance. The majority of researchers, 
practitioners, and educators in the field are shielded from the 
harms of AI, leading to undervaluation, de-prioritization, and 
ignorance of problems, along with decontextualized solutions.
    Second, negligent industry and academic norms: there's an 
ongoing lack of transparency and nuanced evaluations of the 
limitations of AI.
    And third, an overreliance on biased data that reflects 
structural inequalities coupled with a belief in techno-
solutionism. For example, studies of automated risk assessment 
tools used in the criminal justice system show continued racial 
bias in the penal system, which cannot be remedied with 
technical fixes.
    We must do better. At the very least, government-funded 
research on human-centered AI should require the documentation 
of both included and excluded demographic groups.
    Finally, I urge Congress to ensure funding without conflict 
of interest is available for AI research in the public 
interest. After co-authoring a peer-reviewed paper testing 
gender and skin type bias in an Amazon product which revealed 
error rates of 0 percent for white men and 31 percent for women 
of color, I faced corporate hostility as a company Vice 
President made false statements attempting to discredit my MIT 
research. AI research that exposes harms which challenge 
business interests need to be supported and protected.
    In addition to addressing the Computer Fraud and Abuse Act, 
which criminalizes certain forms of algorithmic bias 
research, Congress can enact an AI accountability tax. A 
revenue tax of just 0.5 percent on Google, Microsoft, Amazon, 
Facebook, IBM, and Apple would provide more than $4 billion of 
funding for AI research in the public interest and support 
people who are impacted by AI harms.
    Public opposition is already mounting against harmful use 
of AI, as we see with the recent face recognition ban in San 
Francisco and a proposal for a Massachusetts statewide 
moratorium. Moving forward, we must make sure that the future 
of AI development, research, and education in the United States 
is truly of the people, by the people, and for all the people, 
not just the powerful and privileged. Thank you.
    I look forward to answering your questions.
    [The prepared statement of Mx. Buolamwini follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Chairwoman Johnson. Thank you very much.
    Dr. Georgia Tourassi.

               TESTIMONY OF DR. GEORGIA TOURASSI,

           DIRECTOR, HEALTH DATA SCIENCES INSTITUTE,

                 OAK RIDGE NATIONAL LABORATORY

    Dr. Tourassi. Chairwoman Johnson, Ranking Member Baird, and 
distinguished Members of the Committee, thank you for the 
opportunity to appear before you today. My name is Georgia 
Tourassi. I'm a Distinguished Scientist in the Computing and 
Computational Sciences Directorate and the Director of the 
Health Data Sciences Institute of the U.S. Department's Oak 
Ridge National Laboratory in Oak Ridge, Tennessee. It is an 
honor to provide this testimony on the role of the Department 
of Energy and its national laboratories in spearheading 
responsible use of Federal data assets for AI innovation in 
healthcare.
    The dramatic growth of AI is driven by big data, massive 
compute power, and novel algorithms. The Oak Ridge National Lab 
is equipped with exceptional resources in all three areas. 
Through the Department of Energy's Strategic Partnership 
Projects program, we are applying these resources to challenges 
in healthcare.
    Data scientists at Oak Ridge have developed AI solutions to 
modernize the National Cancer Institute's surveillance program. 
These solutions are being implemented across several cancer 
registries where they are demonstrating high accuracy and 
improved efficiency, making near real-time cancer incidents 
reporting a reality.
    In partnership with the Veterans Administration, the Oak 
Ridge National Lab has brought its global leadership in 
computing and big data to the task of hosting and analyzing the 
VA's vast array of healthcare and genomic data. This 
partnership brings together VA's data assets with DOE's world-
class high-performance computing assets and scientific 
workforce to enable AI innovation and improve the health of our 
veterans. These are examples that demonstrate what can be 
achieved through a federally coordinated AI strategy.
    But with the great promise of AI comes an even greater 
responsibility. There are many ethical questions when applying 
AI in medicine. I will focus on questions related to the ethics 
of data and the ethics of AI development and deployment.
    With respect to the ethics of data, the massive volumes of 
health data must be carefully protected to preserve privacy 
even as we extract valuable insights. We need secure digital 
infrastructure that is sustainable and energy-efficient to 
accommodate the ever-growing datasets and computational AI 
needs. We also need to address the sensitive issues about data 
ownership and data use as the line between research use and 
commercial use is blurry.
    With respect to the ethics of AI development and 
deployment, we know that AI algorithms are not immune to low-
quality data or biased data. The DOE national laboratories, 
working with other Federal agencies, could provide the secure 
and capable computing environment for objective benchmarking 
and quality control of sensitive datasets and AI algorithms 
against community consensus metrics.
    Because one size will not fit all, we need a federally 
coordinated conversation involving not only the STEM (science, 
technology, engineering, and mathematics) sciences but also 
social sciences, economics, law, and public policy stakeholders to 
address the emerging domain-specific complexities of AI use.
    Last, we must build an inclusive and diverse AI workforce 
to deliver solutions that are beneficial to all. The Human 
Genome Project included a program on the ethical, legal, and 
social implications of genomic research that had a lasting 
impact on how the entire community from basic researchers to 
drug companies to medical workers used and handled genomic 
data. The program could be a model for a similar effort to 
realize the hope of AI in transforming health care.
    The DOE national laboratories are uniquely equipped to 
support a national strategy in AI research, development, 
education, and stakeholder coordination that addresses the 
security, societal, and ethical challenges of AI in health 
care, particularly with respect to the Federal data assets.
    Thank you again for the opportunity to testify. I welcome 
your questions on this important topic.
    [The prepared statement of Dr. Tourassi follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Chairwoman Johnson. Thank you very much. At this point, we 
will begin our first round of questions, and the Chair 
recognizes herself for 5 minutes.
    My questions will be to all witnesses. This Committee has 
led congressional discussions and action on quantum science, 
engineering, biology, and many other emerging technologies over 
the years. In thinking about societal implications and 
governance, how is AI similar to, or different from, other 
transformational technologies, and how should we be thinking 
about it differently? We'll start with you, Ms. Whittaker.
    Ms. Whittaker. Thank you, Chairwoman. I think there are 
many similarities and differences. In the case of AI, as I 
mentioned in my opening statement and in my written testimony, 
what you see is a profoundly corporate set of technologies. 
These are technologies that, because of the requirement to have 
massive amounts of computational infrastructure and massive 
amounts of data, aren't available for anyone with an interest 
to develop or deploy.
    When we talk about AI, we're generally talking about 
systems that are deployed by the private sector in ways that 
are calibrated ultimately to maximize revenue and profit. So we 
need to look carefully at the interests that are driving the 
production and deployment of AI, and put in place regulations 
and checks to ensure that those interests don't override the 
public good.
    Chairwoman Johnson. Mr. Clark.
    Mr. Clark. It's similar in the sense that it's a big deal 
in the way that 5G or quantum computers are going to 
revolutionize chunks of the economy. Maybe the difference is 
that it's progressing much more rapidly than this technology 
and it's also being deployed at scale much more rapidly. And I 
think that the different nature of the pace and scale of 
deployment means that we need additional attention here 
relative to the other technologies that you've been discussing.
    Mx. Buolamwini. I definitely would want to follow up on 
scale particularly because even though very few companies tend 
to dominate the field, the technologies that they deploy can be 
used by many people around the world. So one example is a 
company called Megvii that we audited that provides facial 
analysis capabilities. And more than 100,000 developers use 
that technology. So you have a case where a technology that is 
developed by a small group of people can proliferate quite 
widely and that biases can also compound very quickly.
    Chairwoman Johnson. Yes.
    Dr. Tourassi. So in the context of the panel I would like 
to focus on the differences between AI and the technologies 
that you outlined: quantum computing and others. AI is not 
simply about computers or about algorithms. It's about its 
direct application and use by humans. So it's fundamentally 
a human endeavor compared to the other technological advances 
that you outlined.
    Chairwoman Johnson. Is it ever too early to start 
integrating ethical thinking and considerations into all AI 
research, education, or training, or how can the Federal 
science agencies incentivize early integration of ethical 
considerations in research and education at universities or 
even at K through 12 level?
    Ms. Whittaker. This is a wonderful question. As I mentioned 
in my written testimony, I think it is never too early to 
integrate these concerns, and I think we need to broaden the 
field of AI research and AI development, as many of my co-
panelists have said, to include disciplines beyond the 
technical. So we need to account for, as we say at AI Now, the 
full stack supply chain accounting for the context in which AI 
is going to be used, accounting for the experience of the 
communities who are going to be classified and whose lives are 
going to be shaped by the systems, and we need to develop 
mechanisms to include these at every step of decisionmaking so 
that we ensure in complex social contexts where these tools are 
being used that they're safe and that the people most at risk 
of harm are protected.
    Chairwoman Johnson. Thank you.
    Mr. Clark. Very briefly, I think NSF can best integrate 
ethics through its grantmaking, for example by gating certain 
grant applications on ethics considerations. And 
additionally, we should put a huge emphasis on K through 12. I 
think if you look at the pipeline of people in AI, they drop 
out earlier than college, and so we should reach them before 
then.
    Mx. Buolamwini. We're already seeing initiatives where even 
kids as young as 5 and 6 are being taught AI, and there's an 
opportunity to also teach issues with bias and the need for 
responsibility. And we're also starting to see competitions 
that incentivize the creation of responsible AI curriculum. 
Mozilla Foundation is conducting one of these competitions 
right now at the undergraduate level.
    We also need to look at ways of learning AI that are 
outside of formal education and look at the different types of 
online courses that are available for people who might not 
enter the field in traditional ways and make sure that we're 
also including ethical and responsible considerations in those 
areas.
    Chairwoman Johnson. OK. I'm over my time, but go ahead 
briefly.
    Dr. Tourassi. As I mentioned in my oral and written 
testimony, the Human Genome Project represents an excellent 
example of why and how the ethical, social, and legal 
implications of AI need to be considered from the beginning, 
not as an afterthought. Therefore, we should follow both 
paths: the scientific realm and a dedicated workforce in that 
particular space, with stakeholders from several different 
entities, to protect and remain vigilant about both the 
scientific advances and the deployment implications of the 
technology.
    Chairwoman Johnson. Thank you very much. Mr. Baird.
    Mr. Baird. Thank you, Madam Chair.
    Dr. Tourassi, in this Congress the House Science Committee 
has introduced H.R. 617, the Department of Energy Veterans 
Health Initiative Act, a bill of which I am also a cosponsor. 
I'm 
also a Vietnam veteran. And that bill directs the DOE to 
establish a research program in AI and high-performance 
computing that's focused on supporting the VA by helping solve 
big data challenges associated with veterans' health care. In 
your prepared testimony you highlighted Oak Ridge National 
Laboratory's work with the joint DOE-VA Million Veterans 
Program or MVP-CHAMPION (Million Veterans Program Computational 
Health Analytics for Medical Precision to Improve Outcomes 
Now).
    So my question is from your perspective what was the 
collaboration process like with the VA?
    Dr. Tourassi. From the scientific perspective, it has been 
a very interesting and fruitful collaboration. Speaking as a 
scientist who spent a couple of decades in clinical academia 
before I moved to the Department of Energy, I would say that 
there is a cultural difference between the two communities. 
The clinical community will always be focused on translational 
value and short-term gains, while the basic science community 
will be focused not on short-term solutions but on disruptive 
solutions with sustainable value.
    In that respect, these are two complementary forces, and I 
applaud the synergy between basic sciences and applied 
sciences. It is a relay. Without an important application, we 
cannot meaningfully drive basic science, and vice versa.
    Mr. Baird. Thank you. And continuing on, what do you feel 
we can accomplish by managing that large database, and what do 
you think will help in the----
    Dr. Tourassi. This answer applies not only to the 
collaboration with the Veterans Administration but in general 
in the healthcare space. Health care is one of the areas that 
will be most impacted by artificial intelligence in the 21st 
century. We have a lot of challenges that do have digital 
solutions that are compute- and data-intensive and, by 
extension, energy security and energy consumption are an 
issue.
    In that respect, collaboration with the DOE national laboratories, with the exceptional resources and expertise they have in big data management, secure data management, advanced analytics, and high-performance computing, can certainly spearhead the transformation and enable the development and deployment of tools that will have lasting value for the population.
    Mr. Baird. So thank you. And continuing on, in your opinion 
who should be responsible for developing interagency 
collaboration practices when it comes to data sharing and AI?
    Dr. Tourassi. Again, speaking as a scientist, there is expertise distributed across several different agencies, and all these agencies need to come together to discuss how we need to move forward. I can speak for the national laboratories: as federally funded research and development entities, they are an outstanding place to serve as stewards of data assets and algorithms and to facilitate the benchmarking of datasets and algorithms through the algorithms' lifecycle, serving as neutral entities while using, of course, metrics that are appropriate for the particular application domain and driven by the appropriate other Federal agencies.
    Mr. Baird. So one last question then that deals with your 
prepared testimony. You described the problems that stem from 
siloed data in health care. So that relates to what you just 
mentioned, and you also mentioned the importance of integrating 
nontraditional datasets, including social and economic data. 
Briefly--I'm running close on time--do you have any thoughts on that----
    Dr. Tourassi. You asked two different questions. As I 
mentioned in my testimony, data is the currency not only for 
AI, not only in the biomedical space but across all spaces. And 
in the biomedical space we need to be very respectful about the 
patient's privacy. And that has created silos in terms of where 
the data reside and how we share the data. That in some ways 
delays scientific innovation.
    Mr. Baird. Thank you. And I wish I had time to ask the 
other witnesses questions, but I'm out of time. I yield back, 
Madam Chair.
    Chairwoman Johnson. Thank you very much. Mr. Lipinski.
    Mr. Lipinski. Thank you, Madam Chair. Thank you very much 
for holding this hearing. I think this is something that we 
should be spending a whole lot more time on. The impact that AI 
is having and will have in the future is something we need to 
examine very closely.
    I really want to see AI develop. I understand all the great 
benefits that can come from it, but there are ethical questions--a tremendous number of things--that we have not even had to deal with before.
    I have introduced the Growing Artificial Intelligence Through Research, or GrAITR, Act here in the House because I'm concerned about the current state of AI R&D here in the U.S. There's a Senate companion, which was introduced by my colleagues Senators Heinrich, Portman, and Schatz. Now, I want to make sure that we do the technical research, but we also have to do the research to see what we may need to do here in Congress to ensure that AI devices are developed consistent with our American values.
    I have focused on this a lot on this Committee because I'm a political scientist. I focus a lot on the importance of social science, and I think it's critically important that social science is not left behind when it comes to funding, because social science has applications to so much technology, and certainly to AI.
    So I want to ask, when it comes to social science 
research--and I'll start with Ms. Whittaker--what gaps do you 
see in terms of the social science research that has been done 
on AI, and what do you think can and should be done and what 
should we be doing here in Washington about this?
    Ms. Whittaker. Thank you. I love this question because I 
firmly agree that we need a much broader disciplinary approach to studying AI. To date, most of the research done
concerning AI is technical research. Social science or other 
disciplinary perspectives might be tacked on at the end, but 
ultimately the study of AI has not traditionally been done 
through a multi- or interdisciplinary lens.
    And it's really important that we do this because the 
technical component of AI is actually a fairly narrow piece. 
When you begin to deploy AI in contexts like criminal justice 
or hiring or education, you are integrating technology in 
domains with their own histories, legal regimes, and 
disciplinary expertise. So the fields with domain expertise 
need to be incorporated at the center of the study of AI, to 
help us understand the contexts and histories within which AI 
systems are being applied.
    At every step, from earliest development to deployment in a 
given social context, we need to incorporate a much broader 
range of perspectives, including the perspectives of the 
communities whose lives and opportunities will be shaped by AI 
decision making.
    Mr. Lipinski. Mr. Clark?
    Mr. Clark. At OpenAI, we recently hired our first social scientist, so that's one. We need obviously many more. And we
wrote an essay called, ``Why AI Safety Needs Social 
Scientists.'' And the observation there is that, along with 
everything Ms. Whittaker said, we should embed social 
scientists with technical teams on projects because a lot of AI 
projects are going to become about values, and technologists 
are not great at understanding human values but social 
scientists are and have tools to use and understand them. So my 
specific pitch is to have federally funded Centers of 
Excellence where you bring social scientists together with 
technologists to work on applied things.
    Mr. Lipinski. Thank you. Anyone else?
    Mx. Buolamwini. So I would say in my own experience reading 
from the social sciences actually enabled me to bring new 
innovations to computer vision. So in particular my research 
talks about intersectionality, which was introduced by Kimberle 
Crenshaw, a legal scholar who was looking at antidiscrimination law and showed that if you only did single-axis evaluation--let's say you looked at discrimination by race or discrimination by gender--people who were at the intersection were being missed.
    And I found that this was the same case for the evaluation 
of the effectiveness of computer vision AI systems. So, for 
example, when I did the test of Amazon, when you look at just 
men or women, if you have a binary, if you look at darker skin 
or lighter skin, you'll see some discrepancies. But when you do 
an intersectional analysis, that's where we saw 0 percent error 
rates for white men versus 31 percent error rates for women of 
color. And it was that insight from the social sciences to 
start thinking about looking at intersectionality. And so I 
would posit that we not only look at social sciences being 
something that is a help but as something that is integral.
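    To make the intersectional evaluation described above concrete, the following is a minimal sketch, with hypothetical data and column names, of how error rates can be disaggregated by skin type and gender together rather than along a single axis:

```python
# Minimal sketch of intersectional (disaggregated) error-rate evaluation.
# The data and column names are hypothetical illustrations, not the
# benchmark results discussed in the testimony.
import pandas as pd

# Each row: one prediction from a classifier plus demographic labels.
results = pd.DataFrame({
    "skin_type": ["darker", "darker", "lighter", "lighter", "darker", "lighter"],
    "gender":    ["female", "male",   "female",  "male",    "female", "male"],
    "correct":   [False,    True,     True,      True,      False,    True],
})

# Single-axis evaluation: error rate by gender alone (can mask disparities).
print(1 - results.groupby("gender")["correct"].mean())

# Intersectional evaluation: error rate by skin type AND gender together,
# which is where the largest gaps tend to appear.
print(1 - results.groupby(["skin_type", "gender"])["correct"].mean())
```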
    Dr. Tourassi. As a STEM scientist, I do not speak to the 
gaps in social sciences, but I know from my own work that for 
AI technology to be truly impactful, the STEM scientists need 
to be deeply embedded in the application space to work very 
closely with the users so that we make sure that we answer the 
right questions, not the questions that we want to answer as 
engineers.
    And in the biomedical space, we need to be thinking not 
only about social sciences. We need to be thinking about 
patient advocacy groups as well.
    Chairwoman Johnson. Thank you very much. Dr. Babin.
    Mr. Babin. Thank you, Madam Chair. Thank you, witnesses, 
for being here today.
    Mr. Clark and Dr. Tourassi, I have the privilege of 
representing southeast Texas, which includes the Johnson Space 
Center. And as the Ranking Member of the Subcommittee on Space 
and Aeronautics, I've witnessed the diverse ways that NASA has 
been able to use and develop AI, optimizing research and 
exploration, and making our systems and technology much more 
efficient.
    Many of the new research missions at NASA have been 
enhanced by AI in ways that were not previously even possible. 
As a matter of fact, AI is a key piece to NASA's next rover 
mission to Mars, and we could see the first mining of asteroids 
in the Kuiper belt with the help of AI.
    I say all of this to feature the ways that AI is used in the area of data collection and space exploration, but also to highlight the private-public partnerships that have led to several successful uses of AI in this field. Where do you see other private-public partnership opportunities with Federal agencies to increase efficiency and security using AI? Dr.
Tourassi, if you'll answer first, and then Mr. Clark.
    Dr. Tourassi. So absolutely. At the DOE national labs, as federally funded research and development entities, we work
very closely with industry in terms of licensing and deploying 
technology in a responsible way. So this is something that is 
already hardwired in how we do science and how we translate 
science.
    Mr. Babin. Thank you very much. Mr. Clark.
    Mr. Clark. My specific suggestion is joint work on 
robustness, predictability, and broadly, safety, which 
basically decodes to: I have a big image classifier; a person from industry and a person from government both want to know whether it's going to be safe and will serve people effectively; and we should pursue joint projects in this area.
    Mr. Babin. Excellent. Thank you very much. And again, same 
two, what would it mean for the United States if another 
country were to gain dominance in AI, and how do we maintain 
global leadership in this very important study and issue? Yes, 
ma'am.
    Dr. Tourassi. Absolutely, it is imperative for our national security and economic competitiveness that we remain at the leading edge of the technology and that we make responsible R&D investments. One area where I believe we can lead the world is that we can lead not only with the technological advances but with what we talked about, socially responsible AI. We can lead that dialog, that conversation, for the whole world.
    Mr. Babin. Excellent.
    Dr. Tourassi. And that differentiates us from other 
entities investing in this space.
    Mr. Babin. Yes, thank you. Thank you very much. Mr. Clark.
    Mr. Clark. So I agree, but just to sort of reiterate this, 
AI lets us encode values into systems that are then scaled 
against sometimes entire populations, and so along with us 
needing to work here in the United States on what appropriate 
values are for these systems, which is its own piece of work, 
as we've talked about, if we fail here, then the values that 
our society lives under are partially determined by whichever 
society wins in AI. And so the values that that society encodes become the values that we experience. So I think the stakes here are societal in nature, and we should not think of this as just a technological challenge but as a question of how we as a society want to become better. And success here will be the ability to articulate values that the rest of the world thinks are the right ones to be embedded, so it's a big challenge.
    Mr. Babin. It is a big challenge. If we do not maintain our primacy in this, then other countries that might be very repressive, with less, you know, lofty values--I assume that's what you're talking about--could put these into effect in a very detrimental way. So thank you very much. I appreciate it, and I yield back, Madam Chair.
    Chairwoman Johnson. Thank you very much. Ms. Bonamici.
    Ms. Bonamici. Thank you to the Chair and the Ranking 
Member, but really thank you to our panelists here.
    I first want to note that the panel we have today is not representative of the people who work in the tech field, and I think that is something we need to be aware of, because I think the field is still probably only about 20 percent women, so I just want to point that out.
    This is an important conversation, and I'm glad we're 
having it now. I think you've sent the message that it's not 
too late, but we really need to raise awareness and figure out whether there are policies we need, if we're talking about the societal part.
We have here in this country some of the best scientists, 
researchers, programmers, engineers, and we've seen some pretty 
tremendous progress.
    But over the years in this Committee--and I represent a district in Oregon where we've had lots of conversations about the challenges of integrating AI into our society and what's happening with the workforce in that area--we have talked about these issues, but we really do need to understand better the socioeconomic effects and especially the biases that AI can create. And I appreciate that you have brought those to our attention, particularly for people of color.
    And as my colleagues on this Committee know, I serve as the 
Founder and Co-Chair of the congressional STEAM Caucus to 
advocate for the integration of arts and design into STEM 
fields. In The Innovators, author Walter Isaacson talked about how the intersection of arts and science is where digital-age creativity is going to occur.
    STEAM education recognizes the benefits of both the arts 
and sciences, and it can also create more inclusive classrooms, 
especially in the K-12 system. And I wanted to ask Mx. 
Buolamwini--I hope I said your name----
    Mx. Buolamwini. Buolamwini.
    Ms. Bonamici. I appreciate that in your testimony you 
mentioned the creative science initiatives that are 
incorporating the arts in outreach to more diverse audiences 
that may never otherwise encounter information about the 
challenges of AI. And I wonder if you could talk a little bit 
about how we in Congress can support partnerships between 
industry, academia, stakeholders to better increase awareness 
about the biases that exist because until we have more 
diversity--you know, it's all about what goes in, that sort of 
algorithmic accountability I think if you will. And if we don't 
have diversity going into the process, it's going to affect 
what's coming out, so----
    Mx. Buolamwini. Absolutely. So in addition to being a 
computer scientist, I'm also a poet. And one of the ways I've 
been getting the word out is through spoken word poetry. So I 
just opened an art exhibition in the U.K. in the Barbican 
that's a part of a 5-year traveling art show which is meant to 
connect with people who might otherwise not encounter some of 
the issues that are going on with AI.
    Something I would love for Congress to do is to institute a 
public-wide education campaign. Something I've been thinking 
about is a project called Game of Tones, product testing for 
inclusion. So what you could do----
    Ms. Bonamici. Clever name already.
    Mx. Buolamwini. So what you could do is use existing 
consumer products so maybe it's voice recognition, tone of 
voice, maybe it's what we're doing with analyzing social media 
feeds, tone of text, maybe it's something that's to do with 
computer vision, and use that as a way of showing how the 
technologies people encounter every day can encode certain 
sorts of problems, and most importantly, what can be done about 
it. So it's not just we have these issues, but here are steps 
forward, here are resources----
    Ms. Bonamici. That's great.
    Mx. Buolamwini [continuing]. You can reach out----
    Ms. Bonamici. I serve on the Education Committee as well. I 
really appreciate that.
    Ms. Whittaker, your testimony talks about when these 
systems fail, they fail in ways that harm those who are already 
marginalized. And you mentioned that we have yet to encounter an AI system that was biased against white men as a standalone identity. So increasing diversity in the workforce is of course
an important first step, but what checks can we put in place to 
make sure that historically marginalized communities are part 
of the decisionmaking process that is leading up to the 
deployment of AI?
    Ms. Whittaker. Absolutely. Well, as we--as I discussed in 
my written testimony and as AI Now's Rashida Richardson has 
shown in her research, one thing we need to do is look at how the data we use to inform AI systems is created, because of
course all data is a reflection of the world as it is now, and 
as it was in the past.
    Ms. Bonamici. Right. Right.
    Ms. Whittaker [continuing]. And the world of the past has a 
sadly discriminatory history. So that data runs the risk of 
imprinting biased histories of the past into the present and 
the future, and scaling these discriminatory logics across our 
core social institutions.
    Ms. Bonamici. What efforts are being done at this point in 
time to do that?
    Ms. Whittaker. There are some efforts. A paper called 
Datasheets for Datasets created a framework to provide AI 
researchers and practitioners with information about the data 
they were using to create AI systems, including information 
about the collection and creation processes that shaped a given 
dataset.
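    As an illustration of the datasheet idea described above, the following is a minimal sketch of a dataset provenance record; the field names and example values are assumptions for illustration, not the exact schema proposed in the Datasheets for Datasets paper:

```python
# Minimal sketch of the "datasheet" idea: a structured record of how a dataset
# was created, so AI practitioners can assess its provenance before use.
# The field names and example values here are illustrative, not the paper's schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DatasetDatasheet:
    name: str
    motivation: str                  # why the dataset was created
    collection_process: str          # how the data was gathered
    known_populations: List[str] = field(default_factory=list)  # who is represented
    known_gaps: List[str] = field(default_factory=list)         # who is missing or undercounted
    intended_uses: List[str] = field(default_factory=list)
    cautioned_uses: List[str] = field(default_factory=list)

# Hypothetical example of a filled-in datasheet.
sheet = DatasetDatasheet(
    name="example-faces-v1",
    motivation="Benchmark for face detection research.",
    collection_process="Scraped from public web pages in 2016.",
    known_populations=["adults photographed in good lighting"],
    known_gaps=["children", "darker skin types underrepresented"],
    intended_uses=["research benchmarking"],
    cautioned_uses=["deployment in law enforcement contexts"],
)
print(sheet)
```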
    In a law review article titled ``Dirty Data, Bad 
Predictions: How Civil Rights Violations Impact Police Data, 
Predictive Policing Systems, and Justice,'' AI Now's Director 
of Policy Research, Rashida Richardson, found that in at least 
9 jurisdictions, police departments that were under government 
oversight or investigation for racially biased or corrupt 
policing practices were also deploying predictive policing 
technology.
    Ms. Bonamici. That's very concerning.
    Ms. Whittaker [continuing]. What this means is that corrupt and racist policing practices are creating the data that is training these predictive systems, with no checks and no national standards on how that data is collected, validated, and applied.
    Ms. Bonamici. Thank you. And I see I've--my time is 
expired. I yield back. Thank you, Madam Chair.
    Chairwoman Johnson. Thank you very much. Mr. Marshall.
    Mr. Marshall. Thank you, Madam Chair.
    My first question is for Dr. Tourassi. In your prepared testimony you highlighted the DOE's partnership with the National Cancer Institute's Surveillance, Epidemiology, and End Results program. Can you explain the data collection process for this program and how the data is kept secure? In what ways have you noted the DOE accounts for artificial intelligence ethics, bias, or reliability in this program? And you also mentioned things like cancer biomarkers that AI is currently unable to predict or produce information on.
    Dr. Tourassi. The particular partnership with the National 
Cancer Surveillance program is organized as follows. Cancer is 
a reportable disease in the U.S. and in other developed 
countries. Therefore, every single cancer case that is detected 
in the U.S. is recorded in the local registry. When the 
partnership was established, the partnership included voluntary 
participation of cancer registries that wanted to contribute 
their data to advance R&D.
    The data resides in the secure data enclave at the Oak 
Ridge National Lab where we have the highest regulations and 
accreditations for holding the data. Access to the data is 
given responsibly to researchers from the DOE complex that have 
the proper training to access the data, and that's--that is our 
test bed for developing AI technology.
    The first target of the science was how we can develop tools that help cancer registries become far more efficient in what they do. It's not about replacing the individual; it's actually helping them do something better and faster. So the first set of tools that are deployed do exactly that: they extract information from the pathology reports that the cancer registrars have to report on an annual basis to NCI, and we free up time for them to devote to other tasks that are far more challenging for artificial intelligence, such as the biomarker extraction that you talked about.
    Mr. Marshall. OK. Thank you so much. I'll address my next 
question to Mr. Clark but then probably open it up to the rest 
of the panel after that. How do you incentivize developers to 
build appropriate safety and security into products when the 
benefits may not be immediately evident to users?
    Mr. Clark. I think technologists always love competing with 
each other, and so I'm pretty bullish on the idea of creating 
benchmarks and challenges which can encourage people to enter 
systems into this. You can imagine competitions for who's got 
the least biased system, which actually is something you can 
imagine commercial companies wanting to participate in. You do 
need to change the norms of the development community so that 
individual developers see this as important, and that probably 
requires earlier education and adding an ethics component to 
developer education as well.
    Mr. Marshall. OK. Ms. Whittaker, would you like to respond 
as well?
    Ms. Whittaker. Absolutely. I would add to Mr. Clark's points that it's also important to ensure that the companies who
build and profit from these systems are held liable for any 
harms. Companies are developing systems that are having a 
profound impact on the lives and livelihoods of many members of 
the public. These companies should be responsible for those 
impacts, and those with the most power inside these companies 
should be held most responsible. This is an important point, 
since most AI developers are not working alone, but are 
employed within one of these organizations, and the incentives 
and drivers governing their work are shaped by the incentives 
of large tech corporations.
    Mr. Marshall. OK, thanks. Yes, Mx. Buolamwini, sorry I 
missed the introductions there.
    Mx. Buolamwini. Buolamwini. You're fine. Something else we might consider is something akin to public interest law clinics, but meant for public interest technology, so that it's part of your computer science or AI education that you're working with a clinic connected to communities that are actually harmed by some of these processes. So it's part of how you come to learn.
    Mr. Marshall. OK. Thanks. And, Dr. Tourassi, you get to bat 
cleanup. Anything you want to add?
    Dr. Tourassi. I don't really have anything to add to this 
question. I think the other panelists captured it very well.
    Mr. Marshall. Yes, thank you so much, and I yield back.
    Chairwoman Johnson. Thank you very much. Ms. Sherrill.
    Ms. Sherrill. Thank you. And thank you to all the panelists 
for coming today.
    This hearing is on the societal and ethical implications of 
AI, and I'm really interested in the societal dimension when it 
comes to the impact AI is having on the workforce and how it's 
increasingly going to shape the future of work. So my first 
question to the panel is what will the shift in AI mean for 
jobs across the country? Will the shift to an economy 
increasingly transformed by AI be evenly distributed across 
regions, across ethnic groups, across men and women? Will it be 
evenly distributed throughout our job sectors? And how do you 
see the percentages of how AI is impacting the workforce 
changing over the years? Which portion of our workforce will be 
impacted directly by AI and how will that look for society?
    Ms. Whittaker. Thank you. Well, I think we're already 
seeing AI impact the workforce and impact what it means to have 
a job. We're seeing AI integrated into hiring and recruiting. A 
company called HireVue now offers video interview services that 
claim to be able to tell whether somebody is a good candidate 
based on the way they move their face, their micro-expressions, 
their tone of voice. Now, how this works across different 
populations and different skin tones and different genders is 
unclear because this technology is proprietary, and thus not 
subject to auditing and public scrutiny.
    We are seeing AI integrated into management and worker 
control. A company called Cogito offers a service to call 
centers that will monitor the tone of voice and the affect of 
people on the phone and give them instructions to be more 
empathetic, or to close the call. It also sends their managers 
a ranking of how they're doing, and performance assessments can then be based on whether the machine determines this person is doing well or doing poorly.
    We're seeing similar mechanisms in Amazon warehouses where 
workers' productivity rates are being set by algorithms that 
are calibrated to continually extract more and more labor. 
We've actually seen workers in Michigan walk out of warehouses 
protesting what they consider inhumane algorithmic management.
    Overall, we are already seeing the nature of work reshaped 
by AI and algorithmic systems, which rely on worker tracking 
and surveillance and leave no room for workers to contest or 
even consent to the use of such systems. Ultimately, this 
increases the power of employers, and significantly weakens the 
power of workers.
    Ms. Sherrill. And what about--and I'll get to you, too, and you can go back to the question if you want, Mx. Buolamwini--to what extent is it going to affect the ability of people to get jobs and get into the workforce?
    Mx. Buolamwini. So one thing I wanted to touch upon is how 
AI is being used to terminate jobs, and something I call the exclusion overhead, where people the system was not designed for have to expend more energy to actually be a part of the system. One example comes from several reports of transgender drivers being kicked off of their Uber accounts: Uber uses a fraud detection system, which uses facial recognition to see if you are who you say you are, and given that these drivers present differently, there were more checks required. So one driver
reported that over an 18-month period she actually had to 
undergo 100 different checks, and then eventually her account 
was deactivated.
    On May 20, another Uber driver actually sued Uber for more 
than $200,000 after having his account deactivated because he 
had to lighten his photos so that his face could be seen by 
these systems, and then there was no kind of recourse, no due process, and he couldn't even reach out to say: the reason I lightened my photo was because the system wasn't detecting me.
    Ms. Sherrill. It was failing?
    Mx. Buolamwini. Yes.
    Ms. Sherrill. And so also in my district I--and this is to 
the panel again. I've seen our community colleges and 
polytechnical schools engaging in conversations with businesses 
about how they can best train workers to meet the new 
challenges of the AI workforce and provide them with the 
skills. Structurally, how does secondary education need to 
adjust to be able to adapt to the changing needs and the 
changing challenges that you're outlining? How can we better 
prepare students to enter into this workforce?
    Mr. Clark. I'll just do a very quick point. We do not have 
the data to say how AI will affect the economy. We have strong 
intuitions from everyone who works in AI that it will affect 
the economy dramatically. And so I'd say before we think about 
what we need to teach children, we need a real study of how 
it's impacting things. None of us are able to give you a number 
on employment----
    Ms. Sherrill. And just because I have 6 seconds, what would 
you suggest to us to focus on in that study?
    Mr. Clark. I think it would be useful to look at the tractability of in-development technologies being applied at large scale throughout the economy, and to look at the economic impacts of existing things, like how we've automated visual analysis and what economic impacts that has had, because it's been dramatic, but we don't have the data from which to talk about it.
    Ms. Sherrill. Thank you. I yield back. Thank you, Madam 
Chair.
    Chairwoman Johnson. Thank you very much. Mr. Gonzalez.
    Mr. Gonzalez. Thank you, Madam Chair, and thank you to our 
panel for being here today on this very important topic.
    Mr. Clark, I want to start my line of questioning with you. 
It's my belief that the United States needs to lead on machine 
learning and AI if for no other reason for the sake of 
standards development, especially when you think about the 
economic race between ourselves and China. One, I guess, first 
question, do you share that concern; and then two, if yes, what 
concerns would you have in a world where China is the one that 
is sort of leading the AI evolution if you will and dictating 
standards globally?
    Mr. Clark. Yes, I agree. And to answer your second 
question, I think if you don't define the standard, then you 
have less ability to figure out how the standard is going to 
change your economy and how you can change industry around it, 
so it just puts you behind the curve. It means that your 
economic advantage is going to be less, you're going to be less 
well-oriented in the space, and if you don't invest in the 
people to go and make those standards, then you're going to 
have lots of highly qualified reasonable people from China 
making those. And they'll develop skills, and then we won't get 
to make them.
    Mr. Gonzalez. Yes, thank you. And then, Dr. Tourassi, 
another question that I have is around data ownership and data 
privacy. You know, we talk about the promise of AI a lot, and 
it is certainly there. I don't know that we talk enough about 
how to empower individuals with control over their data who are 
ultimately the ones providing the value by--without even 
choosing to provide all this data. So in your opinion how 
should we at the Committee level and as a Congress think about 
balancing that tradeoff between data privacy and ownership for 
the individual and the innovation that we know is coming?
    Dr. Tourassi. This is actually an excellent question and 
fundamental in the healthcare space because, in the end, all 
the AI algorithmic advances that are happening wouldn't be 
possible if the patients did not contribute their data and if 
the healthcare providers did not provide the services that 
collect the data. So in the end who owns the product?
    This is a conversation that requires us, as a society, to have these pointed conversations about these issues and to bring all the different stakeholders to the table. Privacy and ownership mean different things to different people. One size will not fit all. We need to build a framework so that we can address these questions per application domain, per situation that arises.
    Mr. Gonzalez. Thank you. And sort of--this one's maybe for 
everybody, sort of a take on that. Deep fakes is something that 
we've been hearing a little bit more of lately, and I think the 
risk here is profound where we get into a world where you 
literally cannot tell the difference between me calling you on 
the phone physically or a machine producing my voice. So as we 
think about that, I guess my question would be, how can the NSF 
or other Federal agencies ensure that we have the tools 
available to detect these deep fakes as they come into our 
society? We'll start with Ms. Whittaker.
    Ms. Whittaker. Well, I think this is an area where we need 
much more research funding and much more development. I would 
also expand the--this answer to include looking at the 
environments in which such sensational content might thrive. 
And so you're looking at engagement-driven algorithmic systems 
like Facebook, Twitter, and YouTube. And I think addressing the 
way in which those algorithms surface sensational content is 
something that needs to go hand-in-hand with detection efforts 
because, fundamentally, there is an ecology that rests below the surface that is promoting this kind of content.
    Mr. Gonzalez. I completely agree. Thank you.
    Mr. Clark. I agree with these points, and I'd just make one 
point in addition----
    Mr. Gonzalez. Yes.
    Mr. Clark [continuing]. Which is that we need to know where 
these technologies are going. We could have had a conversation 
about deep fakes 2 years ago if you look at the research 
literature----
    Mr. Gonzalez. Yes.
    Mr. Clark [continuing]. And government should invest to 
look at the literature today because there will be other 
challenges similar to deep fakes in our future.
    Mx. Buolamwini. We also need to invest in AI literacy, because you know that there will be people deploying AI in ways that are meant to be intentionally harmful. So I think we should make sure people have an awareness that deep fakes exist and that other forms of deception can arise from AI systems as well.
    Mr. Gonzalez. Thank you.
    Dr. Tourassi. So adversarial use of AI technology is a 
reality.
    Mr. Gonzalez. Yes.
    Dr. Tourassi. It's here. Therefore, the investments in R&D 
and having an entity that will serve as the neutral entity to 
steward--to be the steward of the technology and the datasets 
is a very important piece that we need to consider very 
carefully, and we need to make calculated investments. This is not a one-time solution where something is clean and ready to go. The vulnerabilities will always exist, so we need to have the processes and the entities in place to mitigate the risks.
    And I go back to my philosophy. I believe in what Marie 
Curie said, ``There is nothing to be feared, only to be 
understood.'' So let's make the R&D investments to understand. 
Make the most of the potential and mitigate the risks.
    Mr. Gonzalez. Thank you. I yield back.
    Chairwoman Johnson. Thank you very much. Mr. McNerney.
    Mr. McNerney. Well, thank you. I thank the Chairwoman, and 
I thank the panelists. The testimony is excellent. I think you 
all have some recommendations that are good and are going to be 
helpful in guiding us to move forward, but I want to look at 
some of those recommendations.
    One of your recommendations, Ms. Whittaker, is to require 
tech companies to waive their secrecy. Now, that sounds great, 
but in practice it's going to be pretty difficult, especially 
in light of our competition on the international scene with 
China and other countries. How do you envision that happening? 
How do you envision tech companies opening up their trade 
secrets without losing the--you know--the competition?
    Ms. Whittaker. Yes, absolutely. And, as I expand on in my 
written testimony, this isn't--the vision of this 
recommendation is not simply that tech companies throw open the 
door and everything is open to everyone. This is specifically 
looking at claims of trade secrecy that are preventing 
accountability. Ultimately, we need public oversight, and 
overly broad claims to trade secrecy are making that extremely 
difficult. A nudge from regulators would help here.
    We need provisions that waive trade secrecy for independent 
auditors, for researchers examining issues of bias and fairness 
and inaccuracy, and for those examining the contexts within 
which AI systems are being licensed and applied. That last 
point is important. A lot of the AI that's being deployed in 
core social domains is created by large tech companies, who 
license this AI to third parties. Call it an "AI as a service" 
business model. These third parties apply tech company AI in a 
variety of contexts. But the public rarely knows where and how 
it's being used, because the contracts between the tech 
companies and the third parties are usually secret.
    Even the fact that there is a contract between, say, Amazon 
and another entity to license, say, facial recognition is not 
something that the public who would be profiled by such systems 
would know. And that makes tracing issues of bias, issues of 
basic freedoms, issues of misuse extremely hard.
    Mr. McNerney. Thank you for that answer. Mr. Clark, I love 
the way you said that AI encodes the value system of its 
coders. You cited three recommendations. Do you think those 
three recommendations you cited will ensure a broader set of 
values would be incorporated in AI systems?
    Mr. Clark. I described them as necessary but not sufficient. I think they need to be done along with a larger series of things to instill those values. Values is a personal question. It's about how we as a society evaluate what fairness means in a commercial marketplace. And I think that AI is going to highlight all of the ways in which our current systems for sort of determining that need additional work. So I don't have additional suggestions beyond those I make, but I suspect they're out there.
    Mr. McNerney. And the idea to have NIST create standards, I mean, that sounds like a good idea.
    Mr. Clark. Yes, my general observation is we have a large 
number of great research efforts being done on bias and issues 
like it, and if we have a part of government convene those 
efforts and create testing suites, we can create the sort of loose standards that other people can start to test against, and that generates more data for the research community to make recommendations from.
    Mr. McNerney. Thank you. Mx. Buolamwini, you recommended a 
5 percent AI accountability tax. How did you arrive at that 
figure, and how do you see that being implemented?
    Mx. Buolamwini. So this one was a 0.5 percent tax, and 
you----
    Mr. McNerney. Point 5 percent, thank you.
    Mx. Buolamwini [continuing]. And you have the Algorithmic 
Accountability Act of 2019 that was sponsored by Representative 
Yvette Clarke. And I think it could be something that is added
to that particular piece of legislation. The requirement that bill specifically has is that it would apply to companies making over $50 million in average gross revenue, and it would also apply to companies that have or possess over one million consumer devices or that reach more than one million consumers. So I could see it being integrated into
a larger framework that's already about algorithmic 
accountability.
    Mr. McNerney. Thank you. Ms. Whittaker and Mx. Buolamwini, 
you both advocated--in fact, all of you did--for a more diverse 
workforce. I've written legislation to do that. It really 
doesn't go anywhere around here. What's a realistic way to get 
that done? How do we diversify the workforce here?
    Ms. Whittaker. I would hope that lawmakers continue to push 
legislation that would address diversity in tech because, put frankly, we have a diversity crisis on our hands. It has not
gotten better; it has gotten worse in spite of years and years 
of diversity rhetoric and P.R. We're looking at an industry 
where----
    Mr. McNerney. So you think government is the right tool to 
make that happen?
    Ms. Whittaker. I think we need to use as many tools as we have. I think we need to mandate pay equity and transparency. We need to mandate much more thorough protections for people who are the victims of sexual harassment in the workplace. This is a problem that tech has.
    I would add that we also need to look at the practice of hiring increasing numbers of contract workers. These workers are extremely vulnerable to harassment and discrimination, and they don't have the protections of full-time employees. At Google, for example, at this point more than half of the workforce is made up of contract workers, and this is true across all job types, not just janitorial staff and service workers. You have engineers, designers, and team leads working alongside their full-time colleagues without the privileges of full employment and thus without the safety to push back against inequity.
    Mr. McNerney. I've run out of time, so I can't pursue that. 
I yield back.
    Chairwoman Johnson. Thank you very much. Miss Gonzalez-
Colon.
    Miss Gonzalez-Colon. Thank you, Madam Chair. And yes, I 
have two questions. Sorry, I was running from another markup. 
Dr. Tourassi, the University of Puerto Rico, Mayaguez Campus, which is in my district, is home to an artificial intelligence education and research institute. The facility exposes young students to the field of artificial intelligence. Its core mission is to advance knowledge and provide education in artificial intelligence theory, methods, systems, and applications for human society and economic prosperity.
    My question will be, in your view, how can we engage with 
institutes of higher education to promote similar initiatives 
or efforts, keeping in mind generating interest in artificial 
intelligence in young students from all areas, and how can we be sure that what is produced later on is responsible, ethical, and financially profitable?
    Dr. Tourassi. So, as you mentioned, it is extremely important that we start recruiting early and that our trainees reflect the actual workforce, with the education and the diversity that is needed. When the AI developers reflect the actual user community, then we know that we have arrived. That cannot be achieved only by academic institutions. This is a societal responsibility for all of us.
    I can tell you how the national laboratories are working in this space. We are enhancing academic pathways and opportunities by offering internships to students who would not otherwise have them--students who do not come from research institutions and for whom this is the first time they can work in a thriving research environment. So we need to be thinking more outside the box about how we can all work synergistically and continuously on this.
    Miss Gonzalez-Colon. Thank you. I want to share with you as well that my office recently had a meeting with a representative of this panel organization, and they were commenting on the challenges they have in approaching American manufacturers, specifically car manufacturers, about accessible autonomous vehicles. Several constituents with disabilities rely on them or on similar equipment for maintaining some degree of independence and rehabilitation. My question would be, in your view, how can we engage the private sector--you were just talking about this a few seconds ago--and the manufacturers so that we not only ensure that artificial intelligence products are ethical and inclusive but also provide opportunities for all sectors of the community--in other words, make this work for everyone? How can we arrange that?
    Dr. Tourassi. If I understood your question, you're asking 
how we can build more effective bridges?
    Miss Gonzalez-Colon. In your view, yes, it's kind of the 
same thing.
    Dr. Tourassi. And again, I can speak to how we are building these bridges as national laboratories, working with both academic and research institutions as well as with private industry, creating thriving hubs for researchers to engage in societally impactful science and develop end-to-end solutions, from R&D all the way to the translation of these products. I see federally funded R&D entities such as the national labs being one form of these bridges.
    Miss Gonzalez-Colon. How can people with disabilities be accounted for when we talk about artificial intelligence?
    Dr. Tourassi. Well, as I said, one size will not fit all. It will come down to the particular application domain, so it is our responsibility as scientists to be mindful of that. And working deeply embedded in the application space, with the other sciences that will educate us on where the gaps are--that's how we can save ourselves from the blind spots.
    Miss Gonzalez-Colon. You said in your testimony--you 
highlighted the importance of an inclusive and diverse 
artificial intelligence workforce. For you, what is the 
greatest challenge in the United States in developing this kind of workforce?
    Dr. Tourassi. As a female STEM scientist, and often the token woman in the field for the past three decades, I would say the biggest challenge we have is not only recruiting a diverse set of trainees but also sustaining them in the workforce. And I passionately believe that we need to change our notion of what leadership is. There are different models of leadership, and we need to become more comfortable with different styles of leadership. In my own group, in my own team, I make sure that I have a very diverse group of researchers, including people with disabilities, doing phenomenal AI research work. So it comes down not only to developing policies but also to our individual responsibility as citizens.
    Miss Gonzalez-Colon. Thank you. And I yield back.
    Chairwoman Johnson. Thank you very much. Mr. Tonko.
    Mr. Tonko. Thank you, Chairwoman Johnson, for holding the 
hearing, and thank you to our witnesses for joining us.
    Artificial intelligence is sparking revolutionary change 
across industries and fields of study. Its benefits will drive 
progress in health care, climate change, energy, and more. AI 
can help us diagnose diseases early by tracking patterns of 
personal medical history. It can help identify developing 
weather systems, providing early warning to help communities 
escape harm.
    Across my home State of New York, companies, labs, and 
universities are conducting innovative research and education 
in AI, including the AI Now Institute at New York University 
represented here with us today by Co-Founder Meredith 
Whittaker. Students at Rensselaer Polytechnic Institute in Troy are studying machine logic at the Rensselaer AI and Reasoning Lab--work that could transform our understanding of human-machine communication.
    IBM and SUNY Polytechnic Institute have formed a 
groundbreaking partnership to develop an AI hardware lab in 
Albany focused on developing computer chips and other AI 
hardware. That partnership is part of a broader $2 billion 
commitment by IBM in my home State. This work is more than 
technical robotics. University at Albany researchers are working on ways to detect AI-generated deep fake video alterations to prevent the spread of fake news, an issue that
has already impacted some of our colleagues in Congress. These 
researchers are using metrics such as human blinking rates to 
weed out deep fake videos from authentic ones.
    AI presents great benefits, but it is a double-edged
sword. In some studies, AI was able to identify individuals at 
risk for mental health conditions just by scanning their social 
media accounts. This can help medical professionals identify 
and treat those most at risk, but it also raises privacy issues 
for individuals.
    We have also seen evidence of data and technical bias that 
underrepresents or misrepresents people of color in everything 
from facial recognition to Instagram filters. I am confident that, as a Committee, we will continue to explore both the benefits
and risks associated with AI, and I look forward to learning 
more from our witnesses today.
    And my question for all panelists is this: What is an 
example that illustrates the potential of AI? And what is an 
example that illustrates the risks? Anyone? Ms. Whittaker.
    Ms. Whittaker. Yes, I will use the same example for both 
because I think this gives a sense of the double-edged sword of 
this technology. Google's DeepMind research lab applied AI 
technology to reduce the energy consumption of Google's data 
centers. And by doing this, they claim to have reduced Google's 
data center energy bill by 40 percent. They did this by training AI on data collected from these data centers and using it to optimize things like when a cooling fan was turned on, and to otherwise much more precisely calibrate energy use to ensure maximum efficiency. So here we have an example of AI being used
in ways that can reduce energy consumption, and potentially 
address climate issues.
    But we've also seen recent research that exposes the 
massive energy cost of creating AI systems, specifically the 
vast computational infrastructure needed to train AI models. A 
recent study showed that the amount of carbon produced in the 
process of training one natural language processing AI model 
was the same as the amount produced by five cars over their 
lifetimes. So even if AI, when it's applied, can help with 
energy consumption, we're not currently accounting for the vast 
consumption required to produce and maintain AI technologies.
    Mr. Tonko. Thank you. Anyone else?
    Mr. Clark. Very, very quickly----
    Mr. Tonko. Mr. Clark.
    Mr. Clark. One of the big potentials of AI is in health 
care and specifically sharing datasets across not just, you 
know, States and local boundaries but eventually across 
countries. I think we can create global-class diagnostic 
systems to save people's lives.
    Now, a risk is that all of these things need to be 
evaluated empirically after we've created them for things like 
bias, and I think that we lack the tools, funding, and 
institutions to do that empirical evaluation of developed 
systems safely.
    Mr. Tonko. OK. Mx. Buolamwini?
    Mx. Buolamwini. Yes, so I look at computer vision systems 
where I see both cost for inclusion and cost for exclusion. So 
when you're using a vision system to, say, detect a pedestrian, 
you would likely want that to be as accurate as possible as to 
not hit individuals, but that's also the same kind of 
technology you could put on a drone with a gun to target an 
individual as well. So we need to make sure that we're balancing the cost of inclusion and the cost of exclusion and putting in contextual limitations, where you say there are certain categorical uses we are not considering.
    Mr. Tonko. Thank you. And Dr. Tourassi, please?
    Dr. Tourassi. Yes. I agree with Mr. Clark that in the 
healthcare space the promise of AI is evident with clinical 
decision support systems, for example, for reducing the risk of medical error in diagnostic interpretation.
However, that same field that shows many great examples is full 
of studies that overhype expectations of universal benefits 
because these studies are limited to one medical center, to a 
small population.
    So we need to become, as I said, educated consumers of the technology and of the hype and the news that are out there. We need to be asking these questions: How extensively has this tool been used? Across how many populations, how many States? When we dive into the details and do the benchmarking that Mr. Clark alluded to, then we know whether the promise is real. And there are studies that have done that with the rigor required.
    Mr. Tonko. Thank you so much. And with that, I yield back, 
Madam Chairwoman.
    Chairwoman Johnson. Thank you very much. Mr. Beyer.
    Mr. Beyer. Thank you, Madam Chair. And thank you for 
holding this hearing. I really want to thank our four panelists 
for really responsible, credible testimony. I'm going to save 
all of these printed texts and share them with many friends.
    You know, the last 4 years on the Science Committee, AI has 
come up again and again and again. And we've only had glancing 
blows at the ethics or the societal implications. We've mostly 
been talking the math and the machine learning and the promise. 
Even yesterday, we had Secretary Rick Perry--I can't remember 
which department he represents, but he was here yesterday--just 
kidding--raving about artificial intelligence and machine 
learning.
    And thanks, too, for the concrete recommendations; we don't 
always get that in the Science Committee. But I counted.
There were 24 concrete recommendations that you guys offered, 
everything from waiving trade secrecy to benchmarking machine 
learning for its societally harmful failures to even an AI tax, 
which my friends on Ways and Means will love.
    But the one ethical societal failure that we haven't talked 
about is sort of driven by everything you did. One of your papers talked about the 300,000-fold increase in machine-learning power in the last 5 or 6 years, compared to Moore's law, which would have been a 12-fold increase in the same time. In
Virginia, we have something like 35,000 AI jobs we're looking 
to fill right now. And one of the other papers talked about 
awareness. And we have certainly had computer scientists here 
in the last couple of years who talked about ambition 
awareness.
    So let me ask the Skynet question. What do you do about the 
big picture when--well, as my daughter already says, Wall 
Street is almost completely run right now by machine learning. 
I mean, it's all algorithms. I visited the floor of the New 
York Stock Exchange a couple weeks ago with the Ways and Means 
Committee, and there were very few people there. The people all 
disappeared. It's all done algorithmically.
    So let's talk about the big picture. Any thoughts on the 
big-picture societal implication of when AI is running all the 
rest of our lives?
    Mr. Clark. I think it's pretty clear that AI systems are 
scaling and they're going to become more capable and at some 
point we'll allow them to have larger amounts of autonomy. I 
think the responsible thing to do is to build institutions 
today that will be robust to really powerful AI systems. And 
that's why I'm calling for large-scale measurement assessments 
and benchmarking of existing systems deployed today. And that's 
because if we do that work today, then as the systems change, 
we'll have the institutions that we can draw on to assess the 
growing opportunities and threats of these systems. I really 
think it's as simple as being able to do weather forecasting 
for this technical progress, and we lack that infrastructure 
today.
    Mr. Beyer. Mx. Buolamwini, I'm going to mispronounce your 
name, but you're at MIT, you're right next to Steve Pinker at 
Harvard. They're doing all this amazing work on the evolution 
of consciousness and consciousness as an emergent property, one 
you don't necessarily intend, but there it is. Shouldn't we 
worry about emergent consciousness in AI, especially as we 
build capacity?
    Mx. Buolamwini. I mean, the worry about conscious AI I 
think sometimes misses the real-world issues of dumb AI, AIs 
that are not well-trained, right? So when I go back to an 
example I did in my opening statement, I talk about a recent 
study that came out showing pedestrian tracking technologies 
had a higher miss rate for children, right, as compared to 
adults. So here we were worried about the AIs becoming 
sentient, and the ones that are leading to the fatalities are 
the ones that weren't even well-trained.
    Mr. Beyer. Well, I would be grateful if, among the 24 thoughtful, excellent suggestions you made--and hopefully we will follow up on many of them, or at least the ones that are congressionally appropriate--there were one more that deals not with the kids who get killed--which is totally important--or with the issues of ageism, sexism, and racism that show up--those are all very, very meaningful--but with looking long term, which is what good leaders do, at the sentience issue and how we protect ourselves, not necessarily how we make sure it doesn't happen, but how we protect. And thank you very much for being part of this.
    Madam Chair, I yield back.
    Chairwoman Johnson. Thank you very much. Mr. Lamb.
    Mr. Lamb. Thank you, Madam Chairwoman. A couple of you have 
hit on some issues about AI as it relates to working people 
both in the hiring process, you know, discriminating against 
who they're going to hire and bias embedded in what they're 
doing, as well as the concerns about AI just displacing 
people's jobs. But I was wondering if any of you could go into 
a little more detail on AI in the existing workplace and how it 
might be used to control working people, to worsen their 
working conditions. I can envision artificial intelligence 
applications that could sort of interrupt nascent efforts to 
organize a workplace, or, in an organized workplace, where a union wants to bargain over the future of AI in that workplace but is not able to access the sort of data it needs to understand what it even is they're bargaining over.
    So I don't know if you can give any examples from present-
day where these types of things are already happening or just 
address what we can do to take on those problems as they evolve 
because I think they're going to come. Thank you.
    Ms. Whittaker. Thank you. Yes, I can provide a couple of 
examples, and I'll start by saying that, as my Co-Founder at AI 
Now Kate Crawford and the legal scholar Jason Schultz have 
pointed out, there are basically no protections for worker 
privacy. AI relies on data, and there are many companies and 
services currently offering to surveil and collect data on 
workers. And there are many companies that are now offering the 
capacity to analyze that data and make determinations based on 
that analysis. And a lot of the claims based on such analysis 
have no grounding in science. Things like, ``is a worker typing 
in a way that matches the data-profile of someone likely to 
quit?'' Whether or not typing style can predict attrition has 
not been tested or confirmed by any evidence, but nonetheless 
services are being sold to employers that claim to be able to 
make the connection. And that means that even though they're 
pseudoscientific, these claims are powerful. Managers and 
bosses are acting on such determinations, in ways that are 
shaping people's lives and livelihoods. And workers have no way 
to push back and contest such claims. We urgently need stronger 
worker privacy protections, standards that allow workers to 
contest the determinations made by such systems, and 
enforceable standards of scientific validation.
    I can provide a couple of examples of where we're seeing 
worker pushback against this kind of AI. Again, I mentioned the 
Amazon warehouse workers. We learned recently that Amazon uses 
a management algorithm in their warehouses. This algorithm 
tracks worker performance based on data from a sensor that 
workers are required to wear on their wrist, looking at how 
well workers are performing in relation to an algorithmically-
set performance rate. If a worker misses their rate, the 
algorithm can issue automatic performance warnings. And if a 
worker misses their rate too many times--say, they have to go 
to the bathroom, or deal with a family emergency--the algorithm 
can automatically terminate them. What becomes clear in examining Amazon's management algorithm is that these are systems created by those in power, by employers, and designed to extract as much labor as possible out of workers without giving them any recourse.
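    [The following is a minimal illustrative sketch, in Python, of the kind of rate-based management logic described above. It is a hypothetical reconstruction, not Amazon's actual system; the record fields, target rate, and warning threshold are assumed values for illustration only.]

    # Hypothetical sketch of automated rate-based discipline; not any vendor's real code.
    from dataclasses import dataclass

    @dataclass
    class WorkerRecord:
        worker_id: str
        warnings: int = 0
        terminated: bool = False

    def evaluate_shift(record: WorkerRecord, items_scanned: int, hours_worked: float,
                       target_rate: float, max_warnings: int = 3) -> WorkerRecord:
        # Compare the measured rate against an externally set target. A shortfall
        # triggers an automatic warning; repeated shortfalls flag termination,
        # with no human review and no channel for the worker to contest it.
        measured_rate = items_scanned / hours_worked if hours_worked > 0 else 0.0
        if measured_rate < target_rate:
            record.warnings += 1
            if record.warnings >= max_warnings:
                record.terminated = True
        return record

    # A worker who pauses for a family emergency misses the target three
    # shifts in a row and is flagged automatically.
    record = WorkerRecord("worker-001")
    for items in (180, 175, 160):            # items scanned per 4-hour shift
        record = evaluate_shift(record, items, hours_worked=4.0, target_rate=50.0)
    print(record)                            # warnings=3, terminated=True

    [The point of the sketch is the design choice it encodes: the target rate and the consequences sit entirely on the employer's side, and nothing in the loop gives the worker visibility into, or recourse against, the determination.]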
    We have also seen Uber drivers striking, around the time of 
the Uber IPO. In this case, they were protesting a similar 
technologically-enabled power imbalance, which manifested in 
Uber arbitrarily cutting their wages without any warning or 
explanation. Again, we see such tech being used by employers to 
increase power asymmetry between workers, and those at the top.
    A couple of years ago we saw the massive West Virginia teachers strike. What wasn't widely reported was one of the reasons for this strike: the insistence that teachers wear health tracking devices as a condition of receiving health insurance. These devices collect extremely personal data, which is often processed and analyzed using AI.
    You've also seen students protesting AI-enabled education, 
from Brooklyn, to Kansas, and beyond. Many of these programs 
were marketed as breakthroughs that would enable personalized 
learning. What they actually did was force children to sit in 
front of screens all day, with little social interaction or 
personal attention from teachers.
    In short, we've seen many, many examples of people pushing 
back against the automation of management, and the unchecked 
centralized power that AI systems are providing employers, at 
the expense of workers.
    Finally, we've also seen tech workers at these companies 
organizing around many of these issues. I've been a part of a 
number of these organizing efforts, which are questioning the 
process by which such systems are created. Tech workers 
recognize the dangers of these technologies, and many are 
saying that they don't want to take part in building unethical 
systems that will be used to surveil and control. Tech workers 
know that we have almost no checks or oversight of these 
technologies, and are extremely concerned that they will be 
used for exploitation, extraction, and harm. There is mounting 
evidence that they are right to be concerned.
    Mr. Lamb. Thank you very much. I'm just going to ask one 
more question. I'm almost out of time. Ms. Tourassi--or Dr. Tourassi, I'm sorry--I know that Oak Ridge has been a partner with the Veterans Health Administration, on MVP-CHAMPION I think it's called, and if you could just talk a little bit about--is
that project an example of the way that the VA can be a leader 
in AI as it relates to medicine, precision medicine? You know, 
we've got this seven-million veteran patient population, and in 
a number of IT areas we think of it as a leader that can help 
advance the field. Are you seeing that or are there more things 
we could be doing?
    Dr. Tourassi. The particular program you described is part of the Strategic Partnerships Program, which brings together the AI and high-performance computing expertise that exists within the DOE national lab system with the application domain and, effectively, the data owners as well. So that partnership is what's pushing the field forward in terms of developing technologies that we can deploy in the VA environment to improve veterans' health care.
    I wouldn't consider the Veterans Administration as spearheading artificial intelligence, but, as I said in my written testimony, talent alone is not enough. You need to have the data, you need to have the compute resources, and you need to have talent. The two entities coming together create that perfect synergy to move the field forward.
    Mr. Lamb. Well, thank you for that. And I do believe that 
labs like yours and the efforts that we make in the VHA system 
are a way that we can help push back against the bias and 
discrimination in this field because the government really at 
its best has tried to be a leader in building a diverse 
workforce of all kinds and allowing workers at least in the 
Veterans Administration to organize and be part of this whole 
discussion, so hopefully we can keep moving that forward.
    Madam Chair, I yield back. Thank you.
    Chairwoman Johnson. Thank you very much. Mr. Sherman.
    Mr. Sherman. Thank you. I've been in Congress for about 23 
years, and in every Committee we focus on diversity, economic 
disruption, wages, and privacy. And we've dealt with that here 
today as well.
    I want to focus on something else that is more than a 
decade away, and that is that the most explosive power in the 
universe is intelligence. Two hundred thousand years ago or so 
our ancestors said hello to Neanderthal. It did not work out 
well for Neanderthal. That was the last time a new level of 
intelligence came to this planet, and it looks like we're going 
to see something similar again, only we are the Neanderthal.
    We have, in effect, two competing teams. We have the 
computer engineers represented here before us developing new 
levels of intelligence, and we have the genetic engineers quite 
capable in the decades to come of inventing a mammal with a 
brain--hundreds of pounds.
    So the issue before us today is whether our successor 
species will be carbon-based or silicon-based, whether the 
planet will be inherited by those with artificial intelligence 
or biologically engineered intelligence.
    There are those who say that we don't have to fear any 
computer because it doesn't have hands. It's in a box; it can't 
affect our world. Let me assure you that there are many in our 
species that would give hands to the devil in return for a good 
stock tip.
    The chief difference between the artificial intelligence 
and the genetically engineered intelligence is survival 
instinct. With DNA, it's programmed in. You try to kill a bug, 
it seems to want to survive. It has a survival instinct. And 
you can call it survival instinct; you could call it ambition. 
You go to turn off your washing machine or even the biggest 
computer that you've worked with, you go to unplug it, it 
doesn't seem to care.
    What amount of--what percentage of all the research being 
done on artificial intelligence is being used to detect and 
prevent self-awareness and ambition? Does anybody have an 
answer to that? Otherwise, I'll ask you to answer for the 
record. Yes, sir.
    Mr. Clark. We have an AI safety team at OpenAI, and a lot of that work is about the fact that if I set an objective for a computer, it will probably solve that objective, but it will sometimes solve it in a way that is incredibly harmful to people because, as other panelists have said, these algorithms are kind of dumb.
    Mr. Sherman. Right.
    Mr. Clark. What you can do is you can try and have these 
systems learn values from people.
    Mr. Sherman. Learning values is nice. What are you doing to 
prevent self-awareness and ambition?
    Mr. Clark. The idea is that if we encode the values that 
people have into these systems and so----
    Mr. Sherman. I don't want to be replaced by a really nice 
new form of intelligence. I'm looking for a tool that doesn't 
seek to affect the world.
    I want to move on to another issue, related though. I think you're familiar with the Turing test, which in the 1950s was proposed as the way we would know that computers had reached or exceeded human intelligence--that is, could you have a conversation with a computer and not know you're having a conversation with a computer? In this room in 2003, top experts of the day predicted that the Turing test would be met by 2028. Does anybody here have a different view? Is that as good an estimate as any? They said it would be 25 years, and that was back in 2003.
    I'm not seeing anybody jump up with a different estimate, 
so I guess we have that one. You're not quite jumping up, but 
go ahead.
    Ms. Whittaker. I don't have an estimate on that. I do 
question the validity of the Turing test insofar as it relies 
on us to define what a human is, which is of course a 
philosophical question that we could debate for hours.
    Mr. Sherman. Well, I don't know about philosophers, but the 
law pretty well defines who's a human and who isn't and, of 
course, if we invent new kinds of sentient beings, the law will 
have to grow.
    I just want to add that Mr. Beyer brought this up and was kind of dismissed with the idea that we shouldn't worry about a new level of intelligence since we, as of yet, don't have a computer that can drive a car without hitting a child. I think it's important that if we're going to have computers drive cars, they not hit children, but that's not a reason to dismiss the fact that between biological engineering and computer engineering, we are the Neanderthal creating our own Cro-Magnon.
    I yield back.
    Chairwoman Johnson. Thank you very much. Ms. Horn.
    Ms. Horn. Thank you, Madam Chairwoman. And thank you to the 
panel for an important and interesting conversation today.
    I think it's clear that each time we, as a society or as humans, experience a massive technological shift or advancement, it brings with it both opportunities--ways to make our lives better, easier, or smoother--and also challenges and dangers that are unknown to us as it develops. And what I've heard from several of you today goes to the heart of this conversation: the need to balance the ethical, social, and legal implications with the technological advancement, and the need to incorporate that from the beginning. So I want to address a couple of issues that Mx. Buolamwini--did I say that right?
    Mx. Buolamwini. Yes.
    Ms. Horn. OK. And Ms. Whittaker especially have addressed 
in turn. The first is the incorporation of bias into AI systems 
that we are looking at more and more in our workplaces. This 
isn't just a fun technological exercise. So, Mx. Buolamwini, in 
your testimony you talked about inequity when it's put into the 
algorithms and also the need to incorporate social sciences.
    So my question to you is, how do we create a system that really addresses the groups that are most affected by bias that could be built into the code, and that identifies it in the process? And then what would you suggest in terms of the ability to redress it--how to identify it and address it?
    Mx. Buolamwini. Absolutely. One thing I think we really need to focus on is how we define expertise, because who we consider the experts are generally not the people who are being impacted by these systems. So we should look at ways we can actually work with marginalized communities during the design, development, and deployment but also the governance of these systems--for example, community review panels that are part of the process and that are in the stakeholder meetings when you're doing things like algorithmic impact assessments and so forth. How do we actually bring people in?
    This is also why I suggested the public interest technology 
clinics, right, because you're asking about how do we get to 
redress? Well, you don't necessarily know how to redress the 
issue you never saw, right? If you are denied the job, you 
don't know. And so there needs to be a way where we actually 
give people ways of reporting or connecting.
    At the Algorithmic Justice League something we do is we 
have ``bias in the wild'' stories. This is how I began to learn 
about HireVue, which uses facial analysis and verbal and nonverbal cues to infer emotional engagement or problem-solving style. We got this notification from somebody who had
interviewed at a large tech company and only after the fact 
found out that AI was used in the system in the first place. 
This is something I've also asked the FTC (Federal Trade 
Commission) about in terms of who do you go to when something 
like this happens?
    Ms. Horn. Thank you very much. And, Ms. Whittaker, I want 
to turn to you. Several of the things that you have raised are 
concerning in a number of ways. And it strikes me that we're 
going to have to address this in a technological and social 
sciences setting but also as a legislative body and a Congress, 
setting some parameters around this that allow the development 
but also do our best to anticipate and guard for the problems, 
as you've mentioned.
    So my question to you is, what would you suggest as the role of Congress, or some potential solutions that Congress could consider, to take into account the challenges in workplace use of AI?
    Ms. Whittaker. I want to emphasize my agreement with Mx. 
Buolamwini's answer. I will also point to the AI Now 
Institute's Algorithmic Impact Assessment Framework, which 
provides a multi-step process for governance. The first step 
involves reviewing the components that go into creating a given 
AI system: examining what data informs the system, how the system is designed, and what incentives are driving the creation and deployment of the system. The second involves examining the
context where the system is slated to be deployed, for instance 
examining a workplace algorithm to understand whether it's 
being used to extract more profit, whether it's being designed 
in ways that protect labor rights, and asking how we measure 
and assess such things. And the third and critical step is 
engaging with the communities on the ground, who will bear the 
consequences of exploitative and biased systems. These are the 
people who will ultimately know how a given system is working 
in practice. Engineers in a Silicon Valley office aren't going 
to have this information. They don't build these systems to 
collect such data. So it's imperative that oversight involve 
both technical and policy expertise, and on-the-ground 
expertise. And recognize that the experience of those on the 
ground is often more important than the theories and 
assumptions of those who design and deploy these systems.
    Ms. Horn. Thank you. My time is expired. I yield back.
    Chairwoman Johnson. Thank you very much. Ms. Stevens.
    Ms. Stevens. Thank you, Madam Chair. Artificial intelligence, societal and ethical implications--likely the most important hearing taking place in this body today, with profound implications for our future and obviously our present-day reality. Likely, the time we've allotted for this hearing is not enough. In fact, it might just be the beginning.
    We've referenced it before, the proverb behind us: ``Where there is no vision, the people will perish.'' And this is certainly an area where we need profound vision, a push toward the implications. And something in Mx. Buolamwini's testimony jumped out at me, which is that we have arrived overconfident and underprepared for artificial intelligence. And so I was wondering if each one of our panelists could talk about not just how we as legislators are overconfident--in fact, I just think we're behind--but how we are underprepared. Thank you.
    Ms. Whittaker. Well, I think one of the reasons we're 
overconfident is, as I said in my opening statement, that a lot 
of what we learn about AI is marketing from companies who want 
to sell it to us. This kind of marketing creates a lot of hype, 
which manifests in claims that AI can solve complex social 
problems, that its use can produce almost magical efficiencies, 
that it can diagnose and even cure disease. And on and on.
    But we're unprepared to examine and validate these systems 
against these claims. We have no established, public mechanism 
for ensuring that this tech actually does what the companies 
selling it say it does. For the past two decades the tech 
industry has been allowed to basically regulate itself. We've 
allowed those in the business of selling technology to own the 
future, assuming that what's good for the tech industry is good 
for the future. And it's clear that this needs to end.
    In our 2018 annual report, AI Now recommended that truth in 
advertising laws be applied to AI technologies. All claims 
about AI's capabilities need to be validated and proven, and if 
you make a claim that can't be backed up, there will be 
penalties. The fact that such regulation would fundamentally 
change the way in which AI is designed and deployed should tell 
us something about how urgently it's needed.
    Mr. Clark. We're overconfident when it comes to believing 
these systems are repeatable and reliable. And as the 
testimonies have shown, that's repeatable for some, reliable 
for some. That's an area where people typically get stuff 
wrong.
    As a society, we're underprepared because we're under-
oriented. We don't know where this technology is going. We 
don't have granular data on how it's being developed. And the 
data that we do have is born out of industry, which has its own 
biases, so we need to build systems in government to let us 
measure, assess, and forecast for this technology.
    Mx. Buolamwini. First, I want to credit Cathy O'Neil for the observation that we've arrived in the age of automation overconfident. I added ``underprepared'' because of all of the issues that I was seeing, and I do think part of the overconfidence is the assumption that good intentions will lead to a better outcome. And so oftentimes I hear people saying, well, we want to use AI for good. And I ask, do we even have good AI to begin with, or are we sending parachutes with holes?
    When it comes to being underprepared, so much reliance on data is part of why I use the term ``data is destiny,'' right? And if our data is reflecting current power shadows, current inequalities, we're destined to fail those who have already been marginalized.
    Dr. Tourassi. So what we covered today was, very nicely, the hope, the hype, and the hard truth of AI. We covered every aspect. And actually this is not new. The AI technologies that existed in the 1990s went through the same wave. What's different now is that we're moving a lot faster because of access to data and access to compute resources. And there is no doubt that we will produce code much faster than we can produce regulations and policies. This is the reality.
    Therefore, I believe that strategic investments in R&D need to be part of our approach to the problem: investments so that we can consistently and continuously benchmark the datasets that are available for developing AI technology, to capture biases to the extent that we can foresee them, and so that we can continuously benchmark AI technology not only at the point of deployment but as a quality control throughout its lifetime.
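    [The continuous benchmarking Dr. Tourassi describes can be made concrete with a minimal sketch, in Python, of one possible quality-control check: recomputing error rates per annotated subgroup on fresh evaluation data at each release and on a schedule after deployment. The data layout, model interface, and disparity threshold are assumptions for illustration, not a specific program's.]

    # Hypothetical sketch of lifetime subgroup benchmarking; assumptions noted above.
    from collections import defaultdict
    from typing import Callable, Dict, Iterable, List, Tuple

    def subgroup_error_rates(model: Callable[[object], int],
                             examples: Iterable[Tuple[object, int, str]]) -> Dict[str, float]:
        # `examples` yields (input, true_label, subgroup) triples.
        errors: Dict[str, int] = defaultdict(int)
        totals: Dict[str, int] = defaultdict(int)
        for x, y_true, group in examples:
            totals[group] += 1
            if model(x) != y_true:
                errors[group] += 1
        return {g: errors[g] / totals[g] for g in totals}

    def flag_disparities(rates: Dict[str, float], max_gap: float = 0.05) -> List[str]:
        # Flag subgroups whose error rate exceeds the best-performing subgroup's
        # by more than max_gap; rerunning this over time catches drift.
        best = min(rates.values())
        return [g for g, r in rates.items() if r - best > max_gap]

    # Toy usage with a stand-in model that always predicts class 1.
    toy_model = lambda x: 1
    data = [(0, 1, "adults"), (1, 1, "adults"), (2, 0, "children"), (3, 1, "children")]
    rates = subgroup_error_rates(toy_model, data)      # {'adults': 0.0, 'children': 0.5}
    print(flag_disparities(rates))                     # ['children']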
    Ms. Stevens. Well, thank you so much. And for the record, I 
just wanted to make note that earlier this year in this 116th 
Congress, I had the privilege of joining my colleague from 
Michigan, Congresswoman Brenda Lawrence, and our other 
colleague, Congressman Ro Khanna, to introduce H. Res. 153, which 
supports the development of guidelines for the ethical 
development of artificial intelligence. So it's a resolution, 
but it's a step in that direction.
    And certainly as this Committee continues to work with the 
National Institute of Standards and Technology and all of your 
fabulous expertise, we'll hopefully get to a good place. Thank 
you.
    I yield back, Madam Chair.
    Chairwoman Johnson. Thank you very much.
    That concludes our questioning period. And I want to remind 
our witnesses that the record will remain open for 2 weeks for 
any additional statements from you or Members or any additional 
questions of the Committee.
    The witnesses are now excused. I thank you profoundly for 
being here today. And the hearing is adjourned.
    [Whereupon, at 12:03 p.m., the Committee was adjourned.]

                               Appendix I

                              ----------                              

[GRAPHICS NOT AVAILABLE IN TIFF FORMAT]

                              Appendix II

[GRAPHICS NOT AVAILABLE IN TIFF FORMAT]                 

                                  [all]