07:45 GMT - Saturday, 01 February, 2025

MLCommons and Hugging Face team up to release massive speech dataset for AI research

MLCommons, a nonprofit AI safety working group, has teamed up with AI dev platform Hugging Face to release one of the world’s largest collections of public domain voice recordings for AI research.

The dataset, called Unsupervised People’s Speech, contains more than a million hours of audio spanning at least 89 languages. MLCommons says it created the dataset to support R&D in “various areas of speech technology.”
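For researchers who want a first look at the data, it is distributed through Hugging Face. The snippet below is a minimal sketch of streaming a few samples with the `datasets` library; the repository ID and split name are assumptions, so check the official project page for the exact identifiers.

```python
# Minimal sketch: stream a few samples of Unsupervised People's Speech
# via Hugging Face's `datasets` library. The repository ID and split name
# are assumptions; confirm them on the official project page before use.
from datasets import load_dataset

# Streaming avoids downloading the full million-hour corpus up front.
ds = load_dataset(
    "MLCommons/unsupervised_peoples_speech",  # assumed repo ID
    split="train",                            # assumed split name
    streaming=True,
)

for i, sample in enumerate(ds):
    print(sample.keys())  # inspect available fields (audio, metadata, etc.)
    if i >= 2:
        break
```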

“Supporting broader natural language processing research for languages other than English helps bring communication technologies to more people globally,” the organization wrote in a blog post Thursday. “We anticipate several avenues for the research community to continue to build and develop, especially in the areas of improving low-resource language speech models, enhanced speech recognition across different accents and dialects, and novel applications in speech synthesis.”

It’s an admirable goal, to be sure. But AI datasets like Unsupervised People’s Speech can carry risks for the researchers who choose to use them.

Biased data is one of those risks. The recordings in Unsupervised People’s Speech came from Archive.org, the nonprofit perhaps best known for the Wayback Machine web archival tool. Because many of Archive.org’s contributors are English-speaking — and American — almost all of the recordings in Unsupervised People’s Speech are in American-accented English, per the readme on the official project page.

That means that, without careful filtering, AI systems like speech recognition and voice synthesizer models trained on Unsupervised People’s Speech could exhibit some of the same prejudices. They might, for example, struggle to transcribe English spoken by a non-native speaker, or have trouble generating synthetic voices in languages other than English.

Unsupervised People’s Speech might also contain recordings from people unaware that their voices are being used for AI research purposes — including commercial applications. While MLCommons says that all recordings in the dataset are public domain or available under Creative Commons licenses, there’s the possibility mistakes were made.

According to an MIT analysis, hundreds of publicly available AI training datasets lack licensing information and contain errors. Creator advocates, including Ed Newton-Rex, CEO of the AI ethics-focused nonprofit Fairly Trained, have argued that creators shouldn’t be required to “opt out” of AI datasets because of the onerous burden opting out places on them.

“Many creators (e.g. Squarespace users) have no meaningful way of opting out,” Newton-Rex wrote in a post on X last June. “For creators who can opt out, there are multiple overlapping opt-out methods, which are (1) incredibly confusing and (2) woefully incomplete in their coverage. Even if a perfect universal opt-out existed, it would be hugely unfair to put the opt-out burden on creators, given that generative AI uses their work to compete with them — many would simply not realize they could opt out.”

MLCommons says that it’s committed to updating, maintaining, and improving the quality of Unsupervised People’s Speech. But given the potential flaws, it’d behoove developers to exercise serious caution.
