You will make Lyceum's AI inference platform reliable, secure, and scalable, ensuring it performs under pressure as we grow to thousands of concurrent users. While others on the team expand what the platform can do, your job is to make sure it keeps working, fails gracefully, and gets faster over time.
Your focus
- Scalability: Architect and implement the systems that allow our inference platform to scale to thousands of concurrent users. This includes request routing, load balancing, autoscaling, and resource scheduling across GPU clusters (see the routing sketch after this list).
- Reliability and observability: Build robust monitoring, alerting, and incident response tooling. Design for graceful degradation, automatic recovery, and minimal downtime (a circuit-breaker sketch follows the list).
- Performance engineering: Profile and optimise the full inference path from request ingestion through model execution to response delivery. Identify and eliminate bottlenecks at every layer.
- Infrastructure evolution: Evaluate and integrate open-source inference frameworks and tooling (Dynamo, vLLM, Triton, etc.) where they improve throughput, latency, or stability of the serving stack.
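
To make the routing and load-balancing work concrete, here is a minimal sketch of least-loaded request routing across GPU workers. All names here (`Worker`, `LeastLoadedRouter`, the endpoint strings) are illustrative assumptions, not part of Lyceum's stack; a production router would likely also weigh queue depth, batch occupancy, and model placement:

```python
import threading
from dataclasses import dataclass


@dataclass
class Worker:
    """A single GPU-backed inference worker (hypothetical endpoint)."""
    url: str
    in_flight: int = 0


class LeastLoadedRouter:
    """Route each request to the worker with the fewest in-flight requests."""

    def __init__(self, workers: list[Worker]):
        self._workers = workers
        self._lock = threading.Lock()

    def acquire(self) -> Worker:
        # Pick the least-loaded worker and count the new request against it.
        with self._lock:
            worker = min(self._workers, key=lambda w: w.in_flight)
            worker.in_flight += 1
            return worker

    def release(self, worker: Worker) -> None:
        # Call once the response has been delivered (or the request failed).
        with self._lock:
            worker.in_flight -= 1


if __name__ == "__main__":
    router = LeastLoadedRouter([Worker("gpu-0:8000"), Worker("gpu-1:8000")])
    w = router.acquire()
    print(f"routing request to {w.url}")
    router.release(w)
```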
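On the reliability side, graceful degradation often starts with failing fast: a circuit breaker stops sending traffic to an unhealthy backend and probes it again after a cooldown, rather than letting requests queue behind it. A minimal sketch, with the threshold, cooldown, and half-open policy all simplified assumptions:

```python
import time


class CircuitBreaker:
    """Stop sending traffic to a failing backend; probe again after a cooldown."""

    def __init__(self, threshold: int = 5, cooldown_s: float = 30.0):
        self.threshold = threshold    # consecutive failures before tripping
        self.cooldown_s = cooldown_s  # how long to shed load once tripped
        self.failures = 0
        self.opened_at = None         # None means the circuit is closed

    def allow_request(self) -> bool:
        if self.opened_at is None:
            return True  # closed: traffic flows normally
        if time.monotonic() - self.opened_at >= self.cooldown_s:
            # Half-open: allow one probe; a single failure re-trips immediately.
            self.opened_at = None
            self.failures = self.threshold - 1
            return True
        return False  # open: fail fast instead of queueing behind a sick backend

    def record(self, success: bool) -> None:
        if success:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
```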
Your KPIs
- Platform uptime and availability (SLA adherence)
- P50/P95/P99 latency and throughput under load (see the percentile sketch after this list)
- Time-to-detection and time-to-resolution for incidents
- Scalability milestones (concurrent users, requests per second, GPU utilisation)
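
For the latency KPI: P50/P95/P99 are percentiles over observed request latencies, and the tail percentiles surface the slow requests that averages hide. A small sketch of the nearest-rank method; the sample values are made up, not real measurements:

```python
import math


def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile: the smallest sample >= pct% of all samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))  # 1-based rank
    return ordered[rank - 1]


# Illustrative request latencies in milliseconds.
latencies_ms = [12, 15, 14, 200, 16, 13, 18, 950, 17, 14]
for p in (50, 95, 99):
    print(f"P{p}: {percentile(latencies_ms, p)} ms")
```

With these samples P50 is 15 ms while P95 and P99 are 950 ms, which is exactly why tail percentiles, not means, anchor the latency target.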
