Researcher, Alignment Science
OpenAI
San Francisco · Full-time · Posted 5 days ago
About the role
ABOUT THE TEAM
The Alignment Science team at OpenAI studies the science of intent alignment: how to train models to understand what users are actually asking for, act faithfully on that intent while respecting safety constraints, verify what they did, and report their limitations honestly. Our work sits alongside broader value alignment efforts, but this team focuses on scalable methods for ensuring instruction-following, honesty, and robustness as models become more capable.
We work on both sides of alignment research: producing externally publishable results and integrating promising techniques into the models OpenAI deploys. Recent team research on model confessions examines how models can be trained to honestly report shortcomings after giving their original answer, including failures involving hallucination, instruction following, scheming, and reward hacking. That work reflects a broader agenda: building scalable, general methods to ensure models follow human intent.