MACHINE LEARNING HARDWARE ENGINEER
Fantastic opportunity with a GROWING EV Auto Manufacturer looking for a Machine Learning Hardware Engineer. This is a 100% remote contract opportunity.
100% remote. Will work Central Time.
12+ months to start.
We are seeking a Staff LLM–RAG Engineer (Contract) to lead the development and optimization of enterprise-grade retrieval-augmented generation systems. You will architect scalable AI solutions, integrate large language models with advanced retrieval pipelines, and ensure production readiness. This role combines deep technical expertise with the ability to guide teams and deliver results on aggressive timelines.
Responsibilities:
- Lead RAG Architecture Design – Define and implement best practices for retrieval-augmented generation systems, ensuring reliability, scalability, and low-latency performance.
- Full-Stack AI Development – Build and optimize multi-stage pipelines using LLM orchestration frameworks (LangChain, LangGraph, LlamaIndex, or custom).
- Programming & Integration – Develop services and APIs in Python and Golang to support AI workflows, document ingestion, and retrieval processes.
- Search & Retrieval Optimization – Implement hybrid search, vector embeddings, and semantic ranking strategies to improve contextual accuracy (see the retrieval sketch after this list).
- Prompt Engineering – Design and iterate on few-shot, chain-of-thought, and tool-augmented prompts for domain-specific applications (see the prompt sketch after this list).
- Mentorship & Collaboration – Partner with cross-functional teams and guide engineers on RAG and LLM best practices.
- Performance Monitoring – Establish KPIs and evaluation metrics for RAG pipeline quality and model performance (see the hit-rate sketch after this list).
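For illustration only: the hybrid retrieval step referenced above might look roughly like the sketch below. It is a minimal, self-contained Python example that blends a toy lexical overlap score with cosine similarity over precomputed embeddings and then builds a grounded prompt; in a real pipeline an orchestration framework (LangChain, LangGraph, LlamaIndex) plus BM25 and a reranker would replace these stand-ins, and all names (`Doc`, `hybrid_retrieve`, `build_prompt`) are hypothetical.

```python
# Minimal sketch (assumptions labeled): hybrid retrieval + prompt assembly
# for a RAG pipeline. `Doc`, `hybrid_retrieve`, and `build_prompt` are
# illustrative names; embeddings are assumed to be precomputed elsewhere.
from dataclasses import dataclass
from math import sqrt


@dataclass
class Doc:
    doc_id: str
    text: str
    embedding: list[float]  # dense vector produced by an embedding model


def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0


def keyword_score(query: str, text: str) -> float:
    # Crude lexical overlap; a production system would use BM25 or similar.
    q_terms = set(query.lower().split())
    d_terms = set(text.lower().split())
    return len(q_terms & d_terms) / len(q_terms) if q_terms else 0.0


def hybrid_retrieve(query: str, query_vec: list[float], docs: list[Doc],
                    k: int = 3, alpha: float = 0.5) -> list[Doc]:
    # Blend dense (semantic) and sparse (lexical) signals into one ranking.
    scored = [(alpha * cosine(query_vec, d.embedding)
               + (1 - alpha) * keyword_score(query, d.text), d) for d in docs]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for _, d in scored[:k]]


def build_prompt(query: str, context_docs: list[Doc]) -> str:
    # Ground the generation step in the retrieved context.
    context = "\n\n".join(f"[{d.doc_id}] {d.text}" for d in context_docs)
    return ("Answer the question using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")
```

The `alpha` weight that trades off dense versus lexical scores would normally be tuned against an offline evaluation set.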
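The few-shot, chain-of-thought prompting mentioned above could be assembled along the lines of the next sketch; the example question, reasoning text, and field names are placeholders rather than anything taken from this posting.

```python
from typing import Dict, List

# Illustrative few-shot, chain-of-thought prompt assembly. The example
# content below is a placeholder invented for this sketch, not domain
# data from the role.
FEW_SHOT_EXAMPLES: List[Dict[str, str]] = [
    {
        "question": "What is the service interval for component X?",
        "reasoning": "The maintenance schedule table lists component X "
                     "with a 12-month interval.",
        "answer": "12 months",
    },
]


def build_cot_prompt(question: str, context: str) -> str:
    # Few-shot examples demonstrate the expected reasoning-then-answer shape.
    parts = [
        "You are an assistant for technical service documentation.",
        "Think step by step, then give a final answer.",
        "",
    ]
    for ex in FEW_SHOT_EXAMPLES:
        parts += [f"Question: {ex['question']}",
                  f"Reasoning: {ex['reasoning']}",
                  f"Answer: {ex['answer']}",
                  ""]
    parts += [f"Context: {context}", f"Question: {question}", "Reasoning:"]
    return "\n".join(parts)
```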
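On the evaluation side, a basic retrieval KPI such as hit rate at k can be computed as sketched below; the `retrieve` callable is a hypothetical stand-in for whatever ranked retriever the pipeline exposes.

```python
from typing import Callable, List, Tuple

# Toy retrieval KPI: hit rate at k over a labeled evaluation set of
# (query, relevant_doc_id) pairs. `retrieve` is a hypothetical callable
# returning a ranked list of document ids for a query.


def retrieval_hit_rate(eval_set: List[Tuple[str, str]],
                       retrieve: Callable[[str], List[str]],
                       k: int = 5) -> float:
    hits = sum(1 for query, relevant_id in eval_set
               if relevant_id in retrieve(query)[:k])
    return hits / len(eval_set) if eval_set else 0.0
```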
Ideal Background:
- 8+ years in software engineering or applied AI/ML, including 2+ years focused on LLMs and retrieval systems.
- Strong proficiency in Python, plus Golang or Rust, with experience building high-performance services and APIs.
- Expertise in RAG frameworks (LangChain, LangGraph, LlamaIndex) and embedding models.
- Hands-on experience with vector databases (Databricks Vector Search, Pinecone, Weaviate, Milvus, Chroma); a minimal ingestion-and-query sketch follows below.
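As a rough example of working with one of the vector databases listed above, the snippet below ingests and queries a tiny collection with Chroma's Python client. The collection name and documents are placeholders, and a production deployment would typically use a persistent or hosted client together with an explicitly configured embedding model.

```python
import chromadb

# Rough sketch of ingesting and querying documents with Chroma's Python
# client. Collection name and documents are placeholders; production use
# would rely on a persistent/hosted client and an explicit embedding model.
client = chromadb.Client()  # in-memory client for local experimentation
collection = client.get_or_create_collection(name="service_docs")

collection.add(
    ids=["doc-001", "doc-002"],
    documents=[
        "Placeholder document about battery maintenance procedures.",
        "Placeholder document about over-the-air software updates.",
    ],
    metadatas=[{"source": "maintenance"}, {"source": "software"}],
)

results = collection.query(
    query_texts=["How is the battery maintained?"],
    n_results=1,
)
print(results["documents"][0][0])  # best-matching document text
```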