Member of Technical Staff (AI Inference Engineer)
Perplexity
San Francisco · Full-time · 17d ago
About the role
We build and run the inference engine behind every Perplexity query and deploy dozens of model architectures at scale under tight latency and cost budgets. Our stack is Rust, Python, CUDA, and CuTe DSL, and we need another engineer to join us.
WHAT YOU WILL WORK ON
Examples of real work the team does:
- New model support. Bring transformer-based retrieval, text-generation, and multimodal models into our inference infrastructure, covering everything from weight loading, request scheduling, and KV-cache management to integration with the API Gateway.
- GPU kernel migration to CuTe DSL. Port our in-house CUDA kernels to NVIDIA's CuTe DSL so they run on GB200 today and are portable to Vera Rubin racks tomorrow.
- Rust-native serving runtime. Develop our internal Rust-based inference server to eliminate Python's performance and concurrency pain points and keep up with rapidly growing traffic.
- Performance optimisation. Profile and fix bottlenecks from network ingress through continuous batching and GPU kernel interleaving.
- Reliability and
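To give a flavor of the request-scheduling and KV-cache work described above, here is a minimal illustrative sketch (not Perplexity's actual implementation; all names, block sizes, and counts are hypothetical) of a continuous-batching scheduler that admits requests only while KV-cache blocks are free and recycles blocks when a request finishes:

```python
from collections import deque
from dataclasses import dataclass, field

BLOCK_SIZE = 16  # tokens per KV-cache block (hypothetical)
NUM_BLOCKS = 8   # total cache blocks available (hypothetical)

@dataclass
class Request:
    rid: int
    prompt_len: int
    max_new_tokens: int
    generated: int = 0
    blocks: list = field(default_factory=list)

class Scheduler:
    """Toy continuous-batching scheduler with block-based KV-cache allocation."""

    def __init__(self):
        self.free_blocks = deque(range(NUM_BLOCKS))
        self.waiting = deque()
        self.running = []

    @staticmethod
    def _blocks_needed(tokens):
        return -(-tokens // BLOCK_SIZE)  # ceiling division

    def submit(self, req):
        self.waiting.append(req)

    def step(self):
        # Admit waiting requests while enough KV-cache blocks are free.
        while self.waiting:
            req = self.waiting[0]
            need = self._blocks_needed(req.prompt_len + 1)
            if need > len(self.free_blocks):
                break  # cache full; retry on a later step
            self.waiting.popleft()
            req.blocks = [self.free_blocks.popleft() for _ in range(need)]
            self.running.append(req)

        finished = []
        for req in list(self.running):
            req.generated += 1  # pretend one decode step emitted a token
            need = self._blocks_needed(req.prompt_len + req.generated + 1)
            while len(req.blocks) < need and self.free_blocks:
                req.blocks.append(self.free_blocks.popleft())
            if req.generated >= req.max_new_tokens:
                self.running.remove(req)
                self.free_blocks.extend(req.blocks)  # recycle cache blocks
                finished.append(req.rid)
        return finished
```

The key property this sketch illustrates is that new requests join the running batch between decode steps rather than waiting for the whole batch to drain, while cache-block accounting bounds how many can run at once.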