About the role
Build, evaluate, and harden the AI systems at the core of NebulaForge's security platform. You will work on LLM-powered agents, RAG pipelines, and adversarial AI testing frameworks that protect our customers.
Responsibilities
- Design and implement multi-agent AI systems using Azure AI Foundry
- Build adversarial testing pipelines for AI red-teaming
- Evaluate and mitigate prompt injection vulnerabilities in production agents
- Collaborate with product teams on AI feature development and safety controls
- Contribute to NebulaForge's AI security research and publications
What you'll bring
- PhD or 5+ years of experience in ML/AI engineering
- Production experience with LLMs and Azure OpenAI
- Proficiency in Python, PyTorch, and ML experimentation frameworks
- Experience with Azure AI Foundry or equivalent MLOps platforms
- Understanding of AI security risks (prompt injection, data poisoning, model extraction)
- Published research or open-source contributions (preferred)
Benefits
✦ Flexible remote work policy
✦ Research publication support
✦ GPU compute credits for personal projects
✦ Conference speaking opportunities
✦ Significant equity