Researcher, Alignment Training
OpenAI
San Francisco · Full-time
About The Team
The Alignment Training team studies how frontier models acquire durable behavioral tendencies across the training stack. We work on identifying which behaviors can be shaped through pre-training, mid-training, and post-training; building the data, objectives, and evaluations needed to influence them; and determining whether the resulting behavior reflects a general learned tendency or a narrow artifact of the training distribution.
Our work spans synthetic data, pre-training, mid-training, post-training, model behavior, and evaluation. We study how models learn to interpret intent, follow instructions, reason through tasks, express uncertainty, act honestly, and remain reliable under new conditions. The goal is to make desirable tendencies emerge early, strengthen throughout training, and appear robustly in deployed systems.
About The Role
We’re looking for a senior researcher with exceptional technical depth in large-scale model training, synthetic data, or evaluation.