Sr ML Engineer - REMOTE
Insight Global
Job Description

Day to Day:
- Designing and owning the architecture for enterprise AI platforms used across multiple LexisNexis products
- Building and scaling LLM‑powered systems (including RAG pipelines) that support legal research and decision‑making tools
- Designing agentic AI workflows where models reason, call tools/APIs, and execute multi‑step tasks
- Creating high‑availability, low‑latency inference systems for global, enterprise users
- Establishing platform standards for model deployment, monitoring, evaluation, and reliability
- Defining guardrails, permissions, and auditability for AI systems in a regulated legal environment
- Working closely with product, platform, and engineering teams to ensure AI systems are reusable and scalable
- Mentoring senior engineers and influencing technical direction across teams
- Ensuring Responsible AI principles are embedded into system design (safety, reliability, governance)
We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request. To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy:
Skills and Requirements
- 10+ years building production ML systems (not research‑only)
- Hands‑on LLM experience in production, including:
  - RAG architectures
  - Inference performance, reliability, and monitoring
- Experience designing agentic AI systems (models calling tools/APIs, multi‑step workflows)
- Strong distributed systems architecture experience in cloud environments (AWS, Azure, or GCP)
- Kubernetes + containerization experience in production environments
- Strong Python engineering background (platform‑level code, not just notebooks)
- Experience building or contributing to enterprise AI platforms used by multiple teams
- Proven ability to lead technically (set standards, mentor engineers, influence architecture)
- Comfortable working in regulated or high‑reliability environments
- Direct experience with Model Context Protocol (MCP) servers or structured tool‑calling frameworks
- Deep experience with vector databases and large‑scale search systems
- Experience designing LLMOps / MLOps standards at the platform level
- Prior work in legal, financial, healthcare, or other regulated industries
- Exposure to Responsible AI governance, auditing, or compliance frameworks
- Experience building internal AI platforms rather than just end‑user applications
- Background mentoring senior engineers or leading cross‑team technical initiatives