The rapidly evolving field of artificial intelligence is fueling fears that it’s developing more quickly than its effects can be understood.
The use of generative AI — systems that create new content such as text, photos, videos, music, code, speech and art — increased dramatically after the emergence of tools such as ChatGPT. Although these tools bring many benefits, they can also be misused to cause harm.
To manage this risk, the White House secured agreements from seven companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — to commit to safety practices in developing AI technology.
The White House announcement came with its own terminology that may be unfamiliar to the average person, including phrases such as “red teaming” and “watermarking.” Here, we define seven of those terms, starting with the building blocks of the technology and ending with some of the tools companies are using to make AI safer.