[House Report 116-268]
[From the U.S. Government Publishing Office]
116th Congress  }                                     {        Report
                }    HOUSE OF REPRESENTATIVES         {
  1st Session   }                                     {       116-268
======================================================================
IDENTIFYING OUTPUTS OF GENERATIVE ADVERSARIAL NETWORKS ACT
_______
November 5, 2019.--Committed to the Committee of the Whole House on the
State of the Union and ordered to be printed
_______
Ms. Johnson of Texas, from the Committee on Science, Space, and
Technology, submitted the following
R E P O R T
[To accompany H.R. 4355]
[Including cost estimate of the Congressional Budget Office]
The Committee on Science, Space, and Technology, to whom
was referred the bill (H.R. 4355) to direct the Director of the
National Science Foundation to support research on the outputs
that may be generated by generative adversarial networks,
otherwise known as deepfakes, and other comparable techniques
that may be developed in the future, and for other purposes,
having considered the same, report favorably thereon with an
amendment and recommend that the bill as amended do pass.
CONTENTS

    I. Amendment
   II. Purpose of the Bill
  III. Background and Need for the Legislation
   IV. Committee Hearings
    V. Committee Consideration and Votes
   VI. Summary of Major Provisions of the Bill
  VII. Section-by-Section Analysis (By Title and Section)
 VIII. Committee Views
   IX. Cost Estimate
    X. Congressional Budget Office Cost Estimate
   XI. Federal Mandates Statement
  XII. Committee Oversight Findings and Recommendations
 XIII. Statement on General Performance Goals and Objectives
  XIV. Federal Advisory Committee Statement
   XV. Duplication of Federal Programs
  XVI. Earmark Identification
 XVII. Applicability to the Legislative Branch
XVIII. Statement on Preemption of State, Local, or Tribal Law
  XIX. Changes in Existing Law Made by the Bill, as Reported
   XX. Proceedings of the Full Committee Markup
The amendment is as follows:
Strike all after the enacting clause and insert the
following:
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Identifying Outputs of Generative
Adversarial Networks Act'' or the ``IOGAN Act''.
SEC. 2. FINDINGS.
Congress finds the following:
(1) Research gaps currently exist on the underlying
technology needed to develop tools to identify authentic
videos, voice reproduction, or photos from manipulated or
synthesized content, including those generated by generative
adversarial networks.
(2) The National Science Foundation's focus to support
research in artificial intelligence through computer and
information science and engineering, cognitive science and
psychology, economics and game theory, control theory,
linguistics, mathematics, and philosophy, is building a better
understanding of how new technologies are shaping the society
and economy of the United States.
(3) The National Science Foundation has identified the ``10
Big Ideas for NSF Future Investment'' including ``Harnessing
the Data Revolution'' and the ``Future of Work at the Human-
Technology Frontier'', in which artificial intelligence is a
critical component.
(4) The outputs generated by generative adversarial networks
should be included under the umbrella of research described in
paragraph (3) given the grave national security and societal
impact potential of such networks.
(5) Generative adversarial networks are not likely to be
utilized as the sole technique of artificial intelligence or
machine learning capable of creating credible deepfakes, and
other comparable techniques may be developed in the future to
produce similar outputs.
SEC. 3. NSF SUPPORT OF RESEARCH ON MANIPULATED OR SYNTHESIZED CONTENT
AND INFORMATION SECURITY.
The Director of the National Science Foundation, in consultation with
other relevant Federal agencies, shall support merit-reviewed and
competitively awarded research on manipulated or synthesized content
and information authenticity, which may include--
(1) fundamental research on digital forensic tools or other
technologies for verifying the authenticity of information and
detection of manipulated or synthesized content, including
content generated by generative adversarial networks;
(2) fundamental research on technical tools for identifying
manipulated or synthesized content, such as watermarking
systems for generated media;
(3) social and behavioral research related to manipulated or
synthesized content, including the ethics of the technology and
human engagement with the content;
(4) research on public understanding and awareness of
manipulated and synthesized content, including research on best
practices for educating the public to discern authenticity of
digital content; and
(5) research awards coordinated with other federal agencies
and programs including the Networking and Information
Technology Research and Development Program, the Defense
Advanced Research Projects Agency, and the Intelligence Advanced
Research Projects Agency.
SEC. 4. NIST SUPPORT FOR RESEARCH AND STANDARDS ON GENERATIVE
ADVERSARIAL NETWORKS.
(a) In General.--The Director of the National Institute of Standards
and Technology shall support research for the development of
measurements and standards necessary to accelerate the development of
the technological tools to examine the function and outputs of
generative adversarial networks or other technologies that synthesize
or manipulate content.
(b) Outreach.--The Director of the National Institute of Standards
and Technology shall conduct outreach--
(1) to receive input from private, public, and academic
stakeholders on fundamental measurements and standards research
necessary to examine the function and outputs of generative
adversarial networks; and
(2) to consider the feasibility of an ongoing public and
private sector engagement to develop voluntary standards for
the function and outputs of generative adversarial networks or
other technologies that synthesize or manipulate content.
SEC. 5. REPORT ON FEASIBILITY OF PUBLIC-PRIVATE PARTNERSHIP TO DETECT
MANIPULATED OR SYNTHESIZED CONTENT.
Not later than one year after the date of the enactment of this Act,
the Director of the National Science Foundation and the Director of the
National Institute of Standards and Technology shall jointly submit to
the Committee on Science, Space, and Technology of the House of
Representatives and the Committee on Commerce, Science, and
Transportation of the Senate a report containing--
(1) the Directors' findings with respect to the feasibility
for research opportunities with the private sector, including
digital media companies, to detect the function and outputs of
generative adversarial networks or other technologies that
synthesize or manipulate content; and
(2) any policy recommendations of the Directors that could
facilitate and improve communication and coordination between
the private sector, the National Science Foundation, and
relevant Federal agencies through the implementation of
innovative approaches to detect digital content produced by
generative adversarial networks or other technologies that
synthesize or manipulate content.
SEC. 6. GENERATIVE ADVERSARIAL NETWORK DEFINED.
In this Act, the term ``generative adversarial network'' means, with
respect to artificial intelligence, the machine learning process of
attempting to cause a generator artificial neural network (referred to
in this paragraph as the ``generator'') and a discriminator artificial
neural network (referred to in this paragraph as a ``discriminator'')
to compete against each other to become more accurate in their function
and outputs, through which the generator and discriminator create a
feedback loop, causing the generator to produce increasingly higher-
quality artificial outputs and the discriminator to increasingly
improve in detecting such artificial outputs.
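The competitive feedback loop this definition describes can be made concrete with a short sketch. The following is a minimal, editor-supplied PyTorch example on two-dimensional toy data, not an implementation referenced by the Act or by NSF or NIST; the network sizes, learning rates, and the make_real_batch helper are hypothetical choices for illustration only.

    # Minimal GAN training loop (illustrative sketch; all choices are
    # assumptions, not anything specified by the Act). A generator G learns
    # to mimic 2-D points near a circle, while a discriminator D learns to
    # tell authentic points from generated ones.
    import torch
    import torch.nn as nn

    latent_dim = 8  # size of the generator's random input (assumption)

    # Generator: random noise -> synthetic 2-D sample.
    G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))
    # Discriminator: 2-D sample -> logit that the sample is authentic.
    D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    def make_real_batch(n=64):
        """Hypothetical 'authentic content': noisy points on the unit circle."""
        angles = torch.rand(n, 1) * 2 * torch.pi
        return torch.cat([angles.cos(), angles.sin()], dim=1) \
            + 0.05 * torch.randn(n, 2)

    for step in range(2000):
        real = make_real_batch()
        fake = G(torch.randn(real.size(0), latent_dim))

        # Discriminator step: improve at detecting artificial outputs.
        opt_d.zero_grad()
        d_loss = (loss_fn(D(real), torch.ones(real.size(0), 1)) +
                  loss_fn(D(fake.detach()), torch.zeros(real.size(0), 1)))
        d_loss.backward()
        opt_d.step()

        # Generator step: produce outputs the discriminator scores as
        # authentic, closing the feedback loop the definition describes.
        opt_g.zero_grad()
        g_loss = loss_fn(D(fake), torch.ones(real.size(0), 1))
        g_loss.backward()
        opt_g.step()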
II. Purpose of the Bill
The purpose of the bill is to provide for research on
manipulated or synthesized content and information
authenticity, including output of generative adversarial
networks, otherwise known as deepfakes, and to encourage public-
private partnerships to develop standards for detecting and
identifying such content.
III. Background and Need for the Legislation
Disinformation in its many forms has long been used by
governments, rogue organizations, and individuals as a weapon
against adversaries. The problem has become more pervasive in
the past decade with the explosive growth of social media,
which provides an opportunity for hostile actors to project
disinformation directly into the popular discourse at little
cost.
Advancements in computing power and the widespread use of
artificial intelligence over the past several years have made
it easier and cheaper than ever before to manipulate and
reproduce photographs and video and audio clips potentially
harmful or deceptive to the American public and to the
integrity of our democratic institutions and processes,
including fake videos featuring ``people'' who do not really
exist. AI programs can also write convincing articles and blog
posts that seem to be written by real humans. This technology,
often referred to as ``deepfake technology,'' has developed
rapidly over the past several years with no clear method of
identifying and stopping it from becoming a major threat to
national security, economic security, or public health. The
ability to identify and label this content is critical to
preventing bad actors from using manipulated images and videos
to shift U.S. public opinion. While deepfake technology
continues to mature, researchers are only beginning to develop
the knowledge and tools that will help the public and private
sectors distinguish authentic content from manipulated or
synthesized content.
IV. Committee Hearings
On September 26, 2019, the Investigations and Oversight
Subcommittee held a hearing entitled, ``Online Imposters and
Disinformation.'' The purpose of the hearing was to explore the
enabling technologies for disinformation online, including
deepfakes, explore trends and emerging technology in the field, and
consider research strategies that can help stem the tide of
malicious inauthentic behavior. The hearing featured a
demonstration of a deepfake video created using the words and
video of two Members of Congress.
Three witnesses testified: Dr. Hany Farid, Professor,
Electrical Engineering & Computer Science and the School of
Information, University of California, Berkeley; Dr. Siwei Lyu,
Professor, Department of Computer Science, Director, Computer
Vision and Machine Learning Lab, University at Albany, State
University of New York; and Ms. Camille Francois, Chief
Innovation Officer, Graphika.
V. Committee Consideration and Votes
On September 17, 2019, Rep. Anthony Gonzalez and Rep. Haley
Stevens, as well as Rep. Jim Baird and Rep. Katie Hill,
introduced H.R. 4355, the Identifying Outputs of Generative
Adversarial Networks Act. The bill was referred to the House
Science, Space, and Technology Committee.
On September 25, 2019, the Committee met to consider H.R.
4355. Mr. Gonzalez offered an amendment in the nature of a
substitute to make technical corrections and conforming
changes. The amendment was agreed to by voice vote. Mr. Beyer
introduced an amendment to the amendment to include fundamental
research on technical tools for identifying manipulated or
synthesized content, such as watermarking systems for generated
media. The amendment was agreed to by voice vote. Ms. Wexton
introduced an amendment to the amendment to include research on
public understanding and awareness of manipulated or
synthesized content, including research on best practices. The
amendment was agreed to by voice vote. Ms. Johnson moved that
the Committee favorably report the bill, H.R. 4355, to the
House with the recommendation that the bill be approved. The
motion was agreed to by voice vote.
VI. Summary of Major Provisions of the Bill
The Act directs the National Science Foundation (NSF) to
support research on manipulated or synthesized content and
information security, including fundamental research on digital
media forensic tools, social and behavioral research, and
research awards coordinated with other federal agencies and
programs, including the Networking and Information Technology
Research and Development (NITRD) program, the Defense Advanced
Research Projects Agency (DARPA), and the Intelligence Advanced
Research Projects Agency (IARPA).
The Act directs the National Institute of Standards and
Technology (NIST) to support research for the development of
measurements and standards necessary to accelerate the
development of technological tools to examine the function and
outputs of generative adversarial networks and other
technologies that synthesize or manipulate content.
Further, the Act directs NSF and NIST to jointly submit to
Congress a report on the feasibility of and policy
recommendations for a public-private partnership for research
to detect manipulated or synthesized content.
VII. Section-by-Section Analysis (By Title and Section)
Section 1. Short title
Identifying Outputs of Generative Adversarial Networks Act
or the IOGAN Act.
Section 2. Findings
Provides findings for the Act that there are research gaps
on the underlying technology needed to develop tools to
identify authentic videos, voice reproduction, or photos and
those generated by generative adversarial networks (otherwise
known as ``deepfakes''), and that there is a role for the NSF
in conducting research on these gaps including social and
behavioral research.
Section 3. NSF support of research on manipulated or synthesized
content and information security
Directs the National Science Foundation, in consultation
with other Federal agencies, to conduct research on manipulated
or synthesized content and information authenticity, including
fundamental research on digital forensic tools and social and
behavioral research on the ethics of the technology and human
engagement with the content.
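Section 3(2) of the bill names ``watermarking systems for generated media'' as one class of technical tool for identifying synthesized content. As a purely illustrative sketch (an editor's toy example, not a scheme contemplated by the bill or the agencies), the Python code below embeds and recovers a least-significant-bit watermark in an 8-bit image array. Real watermarks for generated media must survive compression, cropping, and re-encoding, which this naive approach does not; closing that gap is precisely the fundamental research the bill supports.

    # Toy least-significant-bit (LSB) watermark for an 8-bit grayscale image.
    # Illustrative only; all names and parameters here are hypothetical.
    import numpy as np

    def embed_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
        """Write one watermark bit into the LSB of each leading pixel."""
        marked = image.copy().ravel()
        marked[: bits.size] = (marked[: bits.size] & 0xFE) | bits
        return marked.reshape(image.shape)

    def read_watermark(image: np.ndarray, n_bits: int) -> np.ndarray:
        """Recover the first n_bits watermark bits from pixel LSBs."""
        return image.ravel()[:n_bits] & 1

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    payload = rng.integers(0, 2, size=32, dtype=np.uint8)  # provenance tag

    marked = embed_watermark(img, payload)
    assert np.array_equal(read_watermark(marked, payload.size), payload)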
Section 4. NIST support for research and standards on generative
adversarial networks
Directs the National Institute of Standards and Technology
to support research for the development of measurements and
standards necessary to accelerate the development of the
technological tools to examine the function and outputs of
generative adversarial networks and other technologies that
synthesize or manipulate content. Directs NIST to solicit input
from private, public, and academic stakeholders, and to
consider the feasibility of an ongoing public and private
sector engagement to develop voluntary standards for the
outputs of generative adversarial networks and other
technologies.
Section 5. Report on feasibility of public-private partnership to
detect manipulated or synthesized content
Directs NSF and NIST to jointly submit to Congress a report
on opportunities for research partnerships with the private
sector on generative adversarial networks or other technologies
that synthesize or manipulate content.
Section 6. Generative adversarial network defined
Provides a definition for ``generative adversarial
network''.
VIII. Committee Views
The intent of this legislation is to accelerate the
progress of research and the development of measurements,
standards, and tools to combat manipulated media content,
including the outputs of generative adversarial networks,
commonly called ``deepfakes.''
The Committee recognizes that NSF is already making
investments in the area of manipulated or synthesized content
through its Secure and Trustworthy Cyberspace (SaTC) and Robust
Intelligence (RI) programs. The Committee encourages NSF to
continue to fund cross-directorate research through these
programs and others to achieve the purposes of this Act,
including social and behavioral research on the ethics of these
technologies and human interaction with the content generated
by these technologies.
The Committee intends for NSF and NIST, in carrying out
this Act, to work with other agencies conducting work on
detecting manipulated and synthesized content, including DARPA,
IARPA, and the agencies that participate in the NITRD program,
to ensure coordination and avoid duplication of effort.
IX. Cost Estimate
Pursuant to clause 3(c)(2) of rule XIII of the Rules of the
House of Representatives, the Committee adopts as its own the
estimate of new budget authority, entitlement authority, or tax
expenditures or revenues contained in the cost estimate
prepared by the Director of the Congressional Budget Office
pursuant to section 402 of the Congressional Budget Act of
1974.
X. Congressional Budget Office Cost Estimate
U.S. Congress,
Congressional Budget Office,
Washington, DC, October 29, 2019.
Hon. Eddie Bernice Johnson,
Chairwoman, Committee on Science, Space, and Technology, House of
Representatives, Washington, DC.
Dear Madam Chairwoman: The Congressional Budget Office has
prepared the enclosed cost estimate for H.R. 4355, the
Identifying Outputs of Generative Adversarial Networks Act.
If you wish further details on this estimate, we will be
pleased to provide them. The CBO staff contact is Janani
Shankaran.
Sincerely,
Phillip L. Swagel,
Director.
Enclosure.
[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]
H.R. 4355 would require the National Science Foundation
(NSF) to support research on manipulated digital content and
information authenticity. The bill also would direct the
National Institute of Standards and Technology (NIST) to create
measurements and standards for the development of technological
tools that examine generative adversarial networks (GANs),
which are used to produce manipulated content.
Using information from the NSF, CBO estimates that
implementing the bill would have no significant cost for the
NSF because the agency is already carrying out the required
activities through its existing grant programs. Using
information from NIST, CBO estimates that the agency would
require 10 additional employees at an average annual cost of
$175,000 each over the 2020-2022 period to establish a research
program on GANs and similar technologies. The bill also would
direct NIST and the NSF to report to the Congress on related
policy recommendations. Based on the costs of similar tasks,
CBO estimates that developing the report would cost less than
$500,000. In total, CBO estimates that implementing H.R. 4355
would cost $6 million over the 2020-2024 period; such spending
would be subject to the availability of appropriated funds.
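The arithmetic behind these figures can be reconstructed roughly (an editor's check under stated assumptions, not part of the CBO estimate; the enclosed cost table with the year-by-year breakdown is not available in this format):

    # Rough reconstruction of the CBO arithmetic (editor's assumption:
    # three full staff years over the 2020-2022 period).
    employees = 10
    annual_cost_per_employee = 175_000
    staff_years = 3                      # 2020 through 2022
    staffing_cost = employees * annual_cost_per_employee * staff_years
    report_cost = 500_000                # stated upper bound for the report
    total = staffing_cost + report_cost  # $5,750,000, consistent with the
    print(f"${total:,}")                 # rounded $6 million over 2020-2024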
The CBO staff contacts for this estimate are Janani
Shankaran and David Hughes. The estimate was reviewed by H.
Samuel Papenfuss, Deputy Assistant Director for Budget
Analysis.
XI. Federal Mandates Statement
H.R. 4355 contains no unfunded mandates.
XII. Committee Oversight Findings and Recommendations
The Committee's oversight findings and recommendations are
reflected in the body of this report.
XIII. Statement on General Performance Goals and Objectives
The goal of this legislation is to support research and
development on technical and other tools to assist the public
and private sectors in identifying manipulated and synthesized
content online.
XIV. Federal Advisory Committee Statement
H.R. 4355 does not create any advisory committees.
XV. Duplication of Federal Programs
Pursuant to clause 3(c)(5) of rule XIII of the Rules of the
House of Representatives, the Committee finds that no provision
of H.R. 4355 establishes or reauthorizes a program of the
federal government known to be duplicative of another federal
program, including any program that was included in a report to
Congress pursuant to section 21 of Public Law 111-139 or the
most recent Catalog of Federal Domestic Assistance.
XVI. Earmark Identification
Pursuant to clauses 9(e), 9(f), and 9(g) of rule XXI, the
Committee finds that H.R. 4355 contains no earmarks, limited
tax benefits, or limited tariff benefits.
XVII. Applicability to the Legislative Branch
The Committee finds that H.R. 4355 does not relate to the
terms and conditions of employment or access to public services
or accommodations within the meaning of section 102(b)(3) of
the Congressional Accountability Act (Public Law 104-1).
XVIII. Statement on Preemption of State, Local, or Tribal Law
This bill is not intended to preempt any state, local, or
tribal law.
XIX. Changes in Existing Law Made by the Bill, as Reported
This legislation does not amend any existing Federal
statute.
XX. Proceedings of the Full Committee Markup
[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]