Artificial intelligence systems are increasingly replicating historical patterns of discrimination, and the government must act quickly to address it, technology experts told Congress Wednesday.
To combat the perpetuation, they said lawmakers must boost funding for interdisciplinary research and work to ensure social scientists are embedded with technical teams from the inception of America’s AI projects.
“We should think about ways that we can study AI that brings together computer scientists, lawyers, social scientists, philosophers, security experts and more—not just 20 computer science professionals and a single lawyer, which is some people’s definition of interdisciplinary research,” Jack Clark, policy director at the research organization OpenAI, said at a House Science, Space and Technology Committee hearing.
Clark and his fellow panelists told legislators that the majority of AI systems deployed today are developed by a small number of people from homogeneous backgrounds (mostly white and male), and that grants are not particularly friendly to large-scale interdisciplinary research projects in the space. Yet the systems those projects produce inevitably encode specific values.
“Technologists are not great at understanding human values, but social scientists are and have tools to help us understand them,” Clark said. “So my pitch is to have federally funded centers of excellence where you bring social scientists together with technologists to work on applied things.”
With over a decade of experience working in the AI industry, Meredith Whittaker leads Google’s Open Research Group and co-founded New York University’s AI Now Institute, a research center dedicated to understanding the social implications of artificial intelligence. She’s witnessed firsthand how the AI industry is “profoundly concentrated and controlled” by just a handful of companies that are “notoriously non-diverse.”
“It is urgent Congress work to ensure AI is accountable, fair and just because this is not what is happening right now,” Whittaker said.
She highlighted some of the severe consequences stemming from uniform groups of scientists—perhaps unintentionally—encoding their biases in AI technologies.