About the company
OpenAI’s mission is to ensure that artificial general intelligence benefits all of humanity. Our Communications team is composed of PR/Media Relations, Events, Design, and other external-facing functions. The team’s ethos is to support OpenAI's mission and goals by clearly and authentically explaining our technology, values, and approach to safely building powerful AI. The Events team is a dynamic group dedicated to crafting extraordinary experiences that embody our company's values and mission. Our team is driven by a passion for bringing people together to connect in meaningful ways.
Job Summary
In this role, you'll:
📍 Identify emerging AI safety risks and new methodologies for exploring their impact
📍 Build, and then continuously refine, evaluations of frontier AI models that assess the extent of identified risks
📍 Design and build scalable systems and processes that can support these kinds of evaluations
📍 Contribute to the refinement of risk management and the overall development of "best practice" guidelines for AI safety evaluations
We expect you to be:
📍 Passionate and knowledgeable about short-term and long-term AI safety risks
📍 Able to think outside the box, with a robust “red-teaming mindset”
📍 Experienced in ML research engineering, ML observability and monitoring, building large language model-enabled applications, and/or another technical domain applicable to AI risk
📍 Able to operate effectively in a dynamic, extremely fast-paced research environment, and to scope and deliver projects end-to-end