Prompt Engineer
Coaxes good answers out of large language models, professionally.
Stops the model from doing something the lawyers will hate.
Trust & safety folks in AI work on what the model SHOULDN'T do: generating harmful content, leaking private info, helping with abuse, hallucinating dangerous advice. They build red-teaming programs, design safety classifiers, and work with policy and legal.
A typical day: reviewing red-team findings, designing a new safety eval, working with legal on a policy update, and triaging a new abuse report.
You think about edge cases obsessively and you care a lot about doing this stuff right.
We'll send you to a fresh search of open AI Trust & Safety Lead roles.
browse open AI Trust & Safety Lead jobs →