Offensive Security Analyst (Structured / Non-Exploit) — AI Training
Alignerr
About The Role
What if your offensive security expertise could directly shape how AI understands cyber threats, adversary behavior, and real-world attack chains? We're looking for experienced security professionals to analyze and model realistic attack scenarios — helping train and evaluate the AI systems that will define the next generation of cybersecurity reasoning.
This is a fully remote, flexible contract role. No exploit development required — just deep, practical knowledge of how real attacks unfold and a sharp ability to articulate what defenders miss.
- Organization: Alignerr
- Type: Hourly Contract
- Location: Remote
- Commitment: 10–40 hours/week
What You'll Do
- Analyze attack paths, kill chains, and adversary strategies across realistic, production-style environments
- Identify and classify weaknesses, misconfigurations, and defensive gaps across modern infrastructure
- Review red-team-style scenarios and intrusion narratives for accuracy, completeness, and real-world plausibility
- Generate, label, and validate adversarial reasoning data used to train and evaluate frontier AI systems
- Clearly explain how attacks propagate, where defenses fail, and how risk compounds across complex environments
- Work independently and asynchronously — fully on your own schedule
Who You Are
- 2+ years of hands-on experience in pentesting, red teaming, or a blue-team role with deep offensive knowledge
- Strong understanding of how real attacks unfold in production environments — from initial access to lateral movement and impact
- Able to think like an adversary and clearly communicate attack chains, tradeoffs, and impact to technical audiences
- Methodical and detail-oriented — you spot what others overlook and know how to document it clearly
- Comfortable working independently across a variety of threat scenarios and system types
Nice to Have
- Experience with threat modeling, adversary emulation, or structured red team engagements
- Familiarity with frameworks such as MITRE ATT&CK, the Cyber Kill Chain, or OWASP
- Background in security architecture, incident response, or threat intelligence
- Prior experience with AI tools, data labeling, or evaluation workflows
Why Join Us
- Work directly on frontier AI systems alongside leading AI research labs
- Fully remote and flexible — work when and where it suits you
- Freelance autonomy with the structure of meaningful, task-based work
- Apply your offensive security expertise to a genuinely novel and high-impact domain
- Potential for ongoing work and contract extension as new projects launch