
NIST launches a program to benchmark generative AI tech and identify AI-generated content

The program will try to figure out whether content is generated by AI or created by humans, and promote AI tech that is a net good for society.
Written by Don Reisinger, Contributing Writer

The US Commerce Department's National Institute of Standards and Technology (NIST) has launched a new program that aims to analyze generative AI and determine how the US can safely use the tech without being duped by AI-generated content.

The Commerce Department's division for analyzing and assessing technology opportunities and risks is calling its new program NIST GenAI. The agency plans to create GenAI benchmarks, identify ways to determine whether content is human- or AI-generated, and ultimately promote the development of AI technologies that will be good for society as a whole. NIST didn't say which AI technologies it'll be evaluating, but it did say that it plans to tackle AI models developed "around the world."

"The pilot study aims to measure and understand system behavior for discriminating between synthetic and human-generated content in the text-to-text (T2T) and text-to-image (T2I) modalities," the agency said in a statement. "This pilot addresses the research question of how human content differs from synthetic content, and how the evaluation findings can guide users in differentiating between the two."


The NIST announcement comes six months after US President Joe Biden issued an executive order requiring all US government entities to develop safe and secure AI strategies. NIST said that in that time, it has released four AI publications, including guidance on mitigating AI risks and on creating global AI standards, that can be used by governments and corporations worldwide.

At the outset, NIST GenAI is focused on synthetic content, meaning images and text created by AI models rather than by humans. NIST is allowing teams of researchers to apply to join NIST GenAI evaluations as either a "generator" or a "discriminator." The generator teams will be tasked with testing how well their AI systems can "generate synthetic content that is indistinguishable from human-produced content." The discriminator teams will be "tested on their system's ability to detect synthetic content created by large language models (LLMs) and generative AI models."

NIST's efforts to detect synthetic content are also focused on how such content could affect patents, human invention, and industries globally. The agency is concerned about whether AI-generated content can be, or even should be, patented and protected by US intellectual property law.

"As AI assumes a larger role in innovation, we must encourage the responsible and safe use of AI to solve local and world problems and to develop the jobs and industries of the future, while ensuring AI does not derail the critical role IP plays in incentivizing human ingenuity and investment," Under Secretary of Commerce for Intellectual Property Kathi Vidal said in a statement.

NIST plans to roll out NIST GenAI in stages. The agency said it will begin accepting team registrations into the program in May 2024 and conclude its first round of testing in August 2024. The agency didn't say what other AI-related issues it'll tackle in subsequent rounds.
