
Staff Software Engineer, Platform Integrations

Twelve Labs

Software Engineering
United States · Remote
Posted on Mar 28, 2026

Location

Remote US

Employment Type

Full time

Location Type

Remote

Department

Engineering

Who we are

At Twelve Labs, we are pioneering the development of cutting-edge multimodal foundation models that comprehend video the way humans do. Our models have redefined the standards in video-language modeling, unlocking more intuitive and far-reaching capabilities and fundamentally transforming the way we interact with and analyze media.

With $107 million in Seed and Series A funding, we are backed by top-tier venture capital firms such as NVIDIA’s NVentures, NEA, Radical Ventures, and Index Ventures, as well as prominent AI visionaries and founders including Fei-Fei Li, Silvio Savarese, and Alexandr Wang. Headquartered in San Francisco, with an influential APAC presence in Seoul, our global footprint underscores our commitment to driving worldwide innovation.

We are a global company that values the uniqueness of each person’s journey. The differences in our cultural, educational, and life experiences allow us to constantly challenge the status quo. We are looking for individuals who are motivated by our mission and eager to make an impact as we push the boundaries of technology to transform the world. Join us as we revolutionize video understanding and multimodal AI.

About the Role

You'll own the infrastructure and integration layer that makes TwelveLabs models available on partner platforms. This is everything outside the model itself: how model containers are packaged, validated, and deployed; how API surfaces are designed and maintained per platform; how requests are routed; and how we ensure production reliability across fundamentally different cloud environments.

You'll work closely with our Science, Product, and ML Engineering teams to align the model and product roadmap with effective platform integrations. Your domain is external model orchestration — you need to understand how model components function (to make good integration decisions), but you won't be optimizing the models themselves. Your work accelerates our ability to reliably ship new model versions and features to users across all platforms.

Candidates must be able to travel up to 10% of the time annually to attend conferences, off-site meetings, and other business-related events. This role may require participation in on-site interviews and/or completion of in-person onboarding.

In this role, you will

  • Design and build infrastructure that deploys TwelveLabs models across multiple cloud and data platforms, accounting for differences in compute hardware, networking, APIs, and operational models

  • Own direct integrations into partner products — implementing the orchestration, data flow, and API surfaces that connect TwelveLabs models to partner-side functionality

  • Design and evolve CI/CD automation systems — including validation and deployment pipelines that reliably ship new model versions across platforms without regressions

  • Design interfaces and tooling abstractions across platforms that enable consistent deployment, reduce per-platform complexity, and scale as we add new partners

  • Implement API-level features and changes that require understanding model component behavior — routing, request handling, response formatting — without modifying model internals

  • Contribute to capacity planning and autoscaling strategies that dynamically match supply with demand across platform deployments

  • Analyze observability data across platforms to identify performance bottlenecks, cost anomalies, and regressions — and drive remediation based on production workloads

  • Collaborate with platform partner engineering teams to resolve operational issues, align on API contracts, and stand up end-to-end serving on new platforms

You may be a good fit if you have

  • Significant software engineering experience building and operating mission-critical backend systems at scale

  • Experience building or operating services on at least one major cloud platform (AWS, GCP, or Azure), with exposure to Kubernetes, infrastructure as code, or container orchestration

  • Strong interest in ML inference — you want to understand how models work, even if your primary contribution is the infrastructure around them

  • Ability to design highly observable systems that operate reliably at scale across multiple environments

  • Autonomy and ownership — you take problems end to end with a bias toward high-impact work

Preferred Qualifications

  • Direct experience working with cloud provider partner teams to scale infrastructure or products across multiple platforms — navigating differences in networking, security, billing, and managed service offerings

  • Background building platform-agnostic tooling or abstraction layers that work across cloud providers

  • Hands-on experience with capacity management, cost optimization, or resource planning at scale across heterogeneous environments

  • Familiarity with ML inference optimization, batching, caching, and serving strategies

  • Experience with ML infrastructure including GPUs, TPUs, Trainium, or other AI accelerators

  • Background designing CI/CD systems that automate deployment and validation across cloud environments

  • Proficiency in Python or Go

Benefits and Perks

🤝 An open and inclusive culture and work environment.

🚀 Work closely with a collaborative, mission-driven team on cutting-edge AI technology.

🏥 Full health, dental, and vision benefits.

✈️ Extremely flexible PTO and parental leave policy. Office closed the week of Christmas and New Year's.

🛂 Visa support where applicable.