Researcher, Alignment Oversight
OpenAI
San Francisco · Full-time · 2d ago
ABOUT THE TEAM
The Alignment Oversight team at OpenAI develops techniques for improving control, accountability, and alignment as AI systems become more capable and agentic.
We combine longer-horizon research with hands-on deployment. We study long-term questions about how increasingly intelligent systems can be supervised, constrained, and corrected, while also building oversight systems that are used in practice today, both internally and externally (see our recent work on code review, https://alignment.openai.com/scaling-code-verification/, and action monitoring for Codex, https://alignment.openai.com/auto-review/).
We also study how to learn from real-world deployments: using oversight data and human interventions to train future models to be more aligned, while preserving the effectiveness and independence of the oversight systems themselves.
ABOUT THE ROLE
As a researcher on the Alignment Oversight team, you will design and run experiments that improve our ability to oversee increasing