Member of Technical Staff - Inference

PrimeIntellect

Posted on Sep 17, 2025

Location

San Francisco, Remote

Employment Type

Full time

Location Type

Hybrid

Department

Engineering

Building the Future of Open Source + Decentralized AI

Prime Intellect is building the open superintelligence stack - from frontier agentic models to the infrastructure that enables anyone to create, train, and deploy them. We aggregate and orchestrate global compute into a single control plane and pair it with the full RL post-training stack: environments, secure sandboxes, verifiable evals, and our async RL trainer. We enable researchers, startups, and enterprises to run end-to-end reinforcement learning at frontier scale, adapting models to real tools, workflows, and deployment contexts.

We recently raised $15M in funding ($20M raised in total) led by Founders Fund, with participation from Menlo Ventures and prominent angels including Andrej Karpathy (Eureka Labs, Tesla, OpenAI), Tri Dao (Chief Scientific Officer of Together AI), Dylan Patel (SemiAnalysis), Clem Delangue (Hugging Face), Emad Mostaque (Stability AI), and many others.

Role Impact

This is a hybrid role spanning cloud LLM serving, LLM inference optimization, and RL systems. You will advance our ability to evaluate and serve models trained with our Environment Hub at scale. The work falls into two key areas:

  1. Building the infrastructure to serve LLMs efficiently at scale.

  2. Optimizing and integrating inference systems into our RL training stack.

Core Technical Responsibilities

LLM Serving

  • Multi‑tenant LLM Serving: Build a multi-tenant LLM serving platform that operates across our cloud GPU fleets.

  • GPU‑Aware Scheduling: Design placement and scheduling algorithms for heterogeneous accelerators.

  • Resilience & Failover: Implement multi‑region/zone failover and traffic shifting for resilience and cost control.

  • Autoscaling & Routing: Build autoscaling, routing, and load balancing to meet throughput/latency SLOs.

  • Model Distribution: Optimize model distribution and cold-start times across clusters.

Inference Optimization & Performance

  • Framework Development: Integrate and contribute to LLM inference frameworks such as vLLM, SGLang, and TensorRT‑LLM.

  • Parallelism and Configuration Tuning: Optimize configurations for tensor/pipeline/expert parallelism, prefix caching, memory management and other axes for maximum performance.

  • End‑to‑End Performance: Profile kernels, memory bandwidth and transport; apply techniques such as quantization and speculative decoding.

  • Perf Suites: Develop reproducible performance suites (latency, throughput, context length, batch size, precision).

  • RL Integration: Embed and optimize distributed inference within our RL stack.

Platform & Tooling

  • CI/CD: Establish CI/CD with artifact promotion, performance gates, and reproducible builds.

  • Observability: Build metrics, logging, and tracing; establish structured incident response and SLO management.

  • Docs & Collaboration: Document architectures, playbooks, and API contracts; mentor and collaborate cross‑functionally.

Technical Requirements

Required Experience

  • Building ML Systems at Scale: 3+ years building and running large‑scale ML/LLM services with clear latency/availability SLOs.

  • Inference Backends: Hands‑on with at least one of vLLM, SGLang, TensorRT‑LLM.

  • Distributed Serving Infra: Familiarity with distributed and disaggregated serving infrastructure such as NVIDIA Dynamo.

  • Inference Internals: Deep understanding of prefill vs. decode, KV‑cache behavior, batching, sampling, speculative decoding, parallelism strategies.

  • Full‑Stack Debugging: Comfortable debugging CUDA/NCCL, drivers/kernels, containers, service mesh/networking, and storage, owning incidents end‑to‑end.

Infrastructure Skills

  • Python: Systems tooling and backend services.

  • PyTorch: LLM inference engine development, integration, and deployment readiness.

  • Cloud & Automation: AWS/GCP service experience, cloud deployment patterns.

  • Kubernetes: Running infrastructure at scale with containers on Kubernetes.

  • GPU & Networking: Architecture, CUDA runtime, NCCL, InfiniBand; GPU‑aware bin‑packing and scheduling across heterogeneous fleets.

Nice to Have

  • Kernel‑Level Optimization: Familiarity with CUDA/Triton kernel development; Nsight Systems/Compute profiling.

  • Systems Performance Languages: Rust, C++.

  • Data & Observability: Kafka/PubSub, Redis, gRPC/Protobuf; Prometheus/Grafana, OpenTelemetry; reliability patterns.

  • Infra & Config Automation: Terraform/Ansible, infrastructure-as-code, reproducible environments

  • Open Source: Contributions to serving, inference, or RL infrastructure projects.

What We Offer

  • Competitive compensation with significant equity incentives

  • Flexible work arrangement (remote or San Francisco office)

  • Full visa sponsorship and relocation support

  • Professional development budget

  • Regular team off-sites and conference attendance

  • Opportunity to shape decentralized AI and RL at Prime Intellect

Growth Opportunity

You'll join a team of experienced engineers and researchers working on cutting-edge problems in AI infrastructure. We believe in open development and encourage team members to contribute to the broader AI community through research and open-source contributions.

We value potential over perfection. If you're passionate about democratizing AI development, we want to talk to you.

Ready to help shape the future of AI? Apply now and join us in our mission to make powerful AI models accessible to everyone.