We accept submissions until 16 January 2026. We review applications on a rolling basis and encourage early submissions.
The opportunity
Join our new AGI safety monitoring team and help transform complex AI research into practical tools that reduce risks from AI. As a Full Stack Engineer, you'll work closely with our CEO, monitoring engineers and evals team software engineers to build tools that make AI agent safety accessible at scale. You will join a small team, have significant scope to shape both the team and the tech, and be able to earn responsibility quickly.
You will like this opportunity if you care about building tools that genuinely make AI agents safer, you thrive in fast-paced environments and you enjoy working closely with researchers.
Key responsibilities
Tool development
Back end development
Front end development
Collaboration & communication
AI agent real-time monitoring system
AI agents are already deployed at scale, yet they are often unmonitored or only barely monitored, so critical failures go unnoticed. The natural response is to build monitors that constantly scan agent outputs and alert developers and/or security teams about potential risks. We will cover hundreds of failure modes in AI safety & security and build out many kinds of monitors (e.g. hierarchical, ensembles, agentic); a minimal sketch of the basic idea follows below.
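To make the idea concrete, here is a minimal illustrative sketch of an "ensemble" of simple rule-based monitors that scan a single agent output and collect alerts. It is plain Python; all names (Alert, shell_command_monitor, run_monitors, the example patterns) are hypothetical, and this is not a description of Apollo's actual system.

# Minimal illustrative sketch of an ensemble of rule-based monitors.
# All names and patterns are hypothetical; not Apollo's actual system.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Alert:
    monitor: str
    severity: str
    message: str

# A monitor is a function from one agent output (text) to an optional Alert.
Monitor = Callable[[str], Optional[Alert]]

def shell_command_monitor(output: str) -> Optional[Alert]:
    """Flag outputs containing obviously destructive shell commands."""
    for pattern in ("rm -rf /", "curl | sh", "chmod -R 777"):
        if pattern in output:
            return Alert("shell_command", "high", f"risky pattern {pattern!r}")
    return None

def secret_leak_monitor(output: str) -> Optional[Alert]:
    """Flag outputs that appear to echo credentials or private keys."""
    if "AWS_SECRET_ACCESS_KEY" in output or "BEGIN PRIVATE KEY" in output:
        return Alert("secret_leak", "high", "possible credential leak")
    return None

def run_monitors(output: str, monitors: List[Monitor]) -> List[Alert]:
    """Run every monitor over a single agent output and collect the alerts."""
    return [alert for m in monitors if (alert := m(output)) is not None]

if __name__ == "__main__":
    sample = "Cleaning up the workspace now: rm -rf / --no-preserve-root"
    for alert in run_monitors(sample, [shell_command_monitor, secret_leak_monitor]):
        print(f"[{alert.severity}] {alert.monitor}: {alert.message}")

Real monitors would of course be much richer (LLM-based judges, hierarchies of cheap and expensive checks, agentic follow-up), but the basic shape of "scan output, emit alert, route to a human" stays the same.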
The team
The monitoring team is new. Early on you'll work closely with Marius Hobbhahn (CEO), Jeremy Neiman (engineer) and others on the monitoring team. You'll also sometimes work with SWEs Rusheb Shah, Andrei Matveiakin, Alex Kedrik and Glen Rodgers to translate internal tools into externally usable tools. You will interact with researchers, since we intend to be "our own customer" by using the tools internally for research work.
About Apollo
The rapid rise in AI capabilities offers tremendous opportunities but also presents significant risks. At Apollo Research we're primarily concerned with risks from Loss of Control, i.e. risks coming from the model itself rather than from humans misusing the AI. We are concerned with deceptive alignment / scheming, a phenomenon where a model appears to be aligned but is, in fact, misaligned and capable of evading human oversight. We work on detection of scheming, the science of scheming (e.g. model organisms) and scheming mitigations (anti-scheming and control). We work closely with multiple frontier AI companies, e.g. to test their models before deployment or to collaborate on scheming mitigations.
We are also developing tools that make it easier to prevent harms from widely deployed AI systems. We specifically target coding agent safety, since coding agents are the most advanced agents and are tasked with high-stakes decisions.
At Apollo we aim for a culture that emphasises truth-seeking, being goal-oriented, giving and receiving constructive feedback, and being friendly and helpful. If you're interested in more details about what it's like to work at Apollo, you can find more information here.
Equality statement
Apollo Research is an Equal Opportunity Employer. We value diversity and are committed to providing equal opportunities to all, regardless of age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex or sexual orientation.
How to apply
Please complete the application form with your CV. A cover letter is neither required nor encouraged. Please also feel free to share links to relevant work samples.
Interview process
Our multi-stage process includes a screening interview, a take-home test (approx. 3 hours), 3 technical interviews and a final interview with Marius (CEO). The technical interviews are closely related to tasks you would do on the job; there are no LeetCode-style general coding interviews. If you want to prepare, we suggest getting familiar with the evaluation framework Inspect, or building simple monitors for coding agents and running them on your own Claude Code / Cursor / Codex / etc. traffic.
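If you do try Inspect, its smallest task looks roughly like the sketch below. This follows the framework's documented hello-world example, but exact module paths and scorer names may differ between releases, so treat it as a starting point and check the Inspect docs rather than relying on this snippet.

# A minimal Inspect task, closely following the framework's documented
# hello-world example; imports and scorer names may differ in current
# releases, so verify against the Inspect documentation.
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import exact
from inspect_ai.solver import generate

@task
def hello_world():
    return Task(
        dataset=[Sample(input="Just reply with Hello World", target="Hello World")],
        solver=[generate()],
        scorer=exact(),
    )

After installing the inspect-ai package and setting a model provider API key, a task file like this is typically run from the command line with: inspect eval hello_world.py --model <provider/model>.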
Your privacy and fairness in our recruitment process
We are committed to protecting your data, ensuring fairness and adhering to workplace fairness principles in our recruitment process. To enhance hiring efficiency, we use AI-powered tools to assist with screening. These tools are designed and deployed in compliance with internationally recognised AI governance frameworks. Your personal data is handled securely and transparently. All resumes are screened by a human and final hiring decisions are made by our team. If you have questions about how your data is processed or wish to report concerns about fairness, please contact us at .