AI Solution Backend Developer
Certa
About Certa
Certa (getcerta.com) is a Silicon Valley-based startup automating the vendor, supplier, and
stakeholder onboarding processes for businesses globally. Serving Fortune 500 and Fortune
1000 clients, Certa's engineering team tackles expansive and deeply technical challenges,
driving innovation in business processes across industries.
Role Overview
We are looking for an experienced and innovative AI Solution Backend Developer to join our team and push the boundaries of large language model (LLM) technology to drive significant impact in our products and services. In this role, you will leverage your strong software engineering skills (particularly in Python and cloud-based backend systems) and your hands-on experience with cutting-edge AI (LLMs, prompt engineering, Retrieval-Augmented Generation, etc.) to build intelligent features for enterprise B2B SaaS products. As an AI Engineer on our team, you will design and deploy AI-driven solutions (such as LLM-powered agents and context-aware systems) from prototype to production, iterating quickly and staying current with the latest developments in the AI space. This is a unique opportunity to be at the forefront of a new class of engineering roles that blends robust backend system design with state-of-the-art AI integration, shaping the future of user experiences in our domain.
Key Responsibilities
- Design and Develop AI Features: Lead the design, development, and deployment of generative AI capabilities and LLM-powered services that deliver engaging, human-centric user experiences. This includes building features like intelligent chatbots, AI-driven recommendations, and workflow automation into our products.
- RAG Pipeline Implementation: Design, implement, and continuously optimize end-to-end RAG (Retrieval-Augmented Generation) pipelines, including data ingestion and parsing, document chunking, vector indexing, and prompt engineering strategies to provide relevant context to LLMs. Ensure that our AI systems can efficiently retrieve and use information from knowledge bases to enhance answer accuracy.
- Build LLM-Based Agents: Develop and refine LLM-based agentic systems that can autonomously perform complex tasks or assist users in multi-step workflows. Incorporate tools for planning, memory, and context management (e.g., long-term memory stores, tool use via APIs) to extend the capabilities of our AI agents. Experiment with emerging best practices in agent design (planning algorithms, self-healing loops, etc.) to make these agents more reliable and effective.
- Integrate with Product Teams: Work closely with product managers, designers, and other engineers to integrate AI capabilities seamlessly into our products, ensuring that features align with user needs and business goals. You'll collaborate cross-functionally to translate product requirements into AI solutions, and iterate based on feedback and testing.
- System Evaluation & Iteration: Rigorously evaluate the performance of AI models and pipelines using appropriate metrics, including accuracy/correctness, response latency, and avoidance of errors like hallucinations. Conduct thorough testing and use user feedback to drive continuous improvements in model prompts, parameters, and data processing.
- Code Quality & Best Practices: Write clean, maintainable, and testable code while following software engineering best practices. Ensure that the AI components are well-structured, scalable, and fit into our overall system architecture. Implement monitoring and logging for AI services to track performance and reliability in production.
- Mentorship and Knowledge Sharing: Provide technical guidance and mentorship to team members on best practices in generative AI development. Help educate and upskill colleagues (e.g., through code reviews and tech talks) in areas like prompt engineering, using our AI toolchain, and evaluating model outputs. Foster a culture of continuous learning and experimentation with new AI technologies.
- Research & Innovation: Continuously explore the latest advancements in AI/ML (new model releases, libraries, techniques) and assess their potential value for our products. You will have the freedom to prototype innovative solutions – for example, trying new fine-tuning methods or integrating new APIs – and bring those into our platform if they prove beneficial. Staying current with emerging research and industry trends is a key part of this role.
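To make the RAG responsibility above concrete, here is a minimal, self-contained sketch of the chunk → index → retrieve → assemble-prompt flow. Everything here is illustrative: the function names, the toy bag-of-words similarity, and the sample documents are hypothetical stand-ins for a real embedding model and vector database.

```python
import math
from collections import Counter

def chunk(text, size=50):
    """Split a document into fixed-size word chunks (a naive chunking strategy)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    """Toy bag-of-words 'embedding'; production systems use a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, index, k=2):
    """Rank indexed chunks by similarity to the query and return the top k."""
    q = embed(query)
    return sorted(index, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(query, context_chunks):
    """Assemble the retrieved context and the question into one LLM prompt."""
    context = "\n---\n".join(context_chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Certa automates vendor onboarding workflows.",
    "The platform serves enterprise clients.",
]
index = [c for d in docs for c in chunk(d)]
prompt = build_prompt("What does Certa automate?", retrieve("vendor onboarding", index))
```

The resulting `prompt` string would then be sent to whichever LLM provider the service uses; swapping in real embeddings and a vector store changes the `embed`/`retrieve` internals but not the overall pipeline shape.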
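The agent responsibility above can likewise be sketched as a plan/act loop with tools, short-term memory, and a step cap. This is only a shape illustration: the two tools and the rule-based `decide()` policy are hypothetical stand-ins for LLM-driven planning over real tool schemas.

```python
def lookup_vendor(name):
    """Hypothetical tool: fetch a vendor's onboarding status from a knowledge base."""
    db = {"Acme": "approved", "Globex": "pending review"}
    return db.get(name, "unknown")

def calculate_risk(status):
    """Hypothetical tool: map an onboarding status to a risk label."""
    return {"approved": "low", "pending review": "medium"}.get(status, "high")

TOOLS = {"lookup_vendor": lookup_vendor, "calculate_risk": calculate_risk}

def decide(goal, memory):
    """Stand-in for the LLM planner: pick the next tool call from current state.
    A real agent would prompt a model with the goal, tool schemas, and memory."""
    if "status" not in memory:
        return ("lookup_vendor", goal)
    if "risk" not in memory:
        return ("calculate_risk", memory["status"])
    return None  # goal satisfied

def run_agent(goal, max_steps=5):
    memory = {}  # short-term memory carried across steps
    for _ in range(max_steps):  # hard step cap guards against runaway loops
        action = decide(goal, memory)
        if action is None:
            break
        tool, arg = action
        memory["status" if tool == "lookup_vendor" else "risk"] = TOOLS[tool](arg)
    return memory

summary = run_agent("Acme")  # {'status': 'approved', 'risk': 'low'}
```

Replacing `decide()` with a model call (and adding persistence for long-term memory) turns this loop into the kind of agentic system the role describes.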
Required Skills and Qualifications
- Software Engineering Experience: 3+ years (Mid-level) / 5+ years (Senior) of professional software engineering experience. Rock-solid backend development skills with expertise in Python and designing scalable APIs/services. Experience building and deploying systems on AWS or similar cloud platforms is required (including familiarity with cloud infrastructure and distributed computing). Strong system design abilities with a track record of designing robust, maintainable architectures is a must.
- LLM/AI Application Experience: Proven experience building applications that leverage large language models or generative AI. You have spent time prompting and integrating language models into real products (e.g., building chatbots, semantic search, or AI assistants) and understand their behavior and failure modes. Demonstrable projects or work in LLM-powered application development – especially using techniques like RAG or building LLM-driven agents – will make you stand out.
- AI/ML Knowledge: We prioritize applied LLM product engineering over traditional ML pipelines. You have strong chops in prompt design, function calling/structured outputs, tool use, context-window management, and the RAG levers that matter (document parsing/chunking, metadata, re-ranking, embedding/model selection). You make pragmatic model/provider choices (hosted vs. open) based on latency, cost, context length, safety, and rate-limit trade-offs, and you know when simple prompting or config changes beat fine-tuning, and when lightweight adapters or fine-tuning are justified. You design evaluation that mirrors product outcomes: golden sets, automated prompt unit tests, offline checks, and online A/B tests for helpfulness, correctness, and safety; you track production proxies like retrieval recall and hallucination rate. Solid understanding of embeddings, tokenization, and vector search fundamentals, plus working literacy in transformers to reason about model capabilities and limits. Familiarity with agent patterns (planning, tool orchestration, memory) and guardrail/safety techniques.
- Tooling & Frameworks: Hands-on experience with the AI/LLM tech stack and libraries. This includes proficiency with LLM orchestration libraries such as LangChain, LlamaIndex, etc., for building prompt pipelines. Experience working with vector databases or semantic search (e.g., Pinecone, Chroma, Milvus) to enable retrieval-augmented generation is highly desired.
- Cloud & DevOps: Own the productionization of LLM/RAG-backed services as high-availability, low-latency backends. Expertise in AWS (e.g., ECS/EKS/Lambda, API Gateway/ALB, S3, DynamoDB/Postgres, OpenSearch, SQS/SNS/Step Functions, Secrets Manager/KMS, VPC) and infrastructure-as-code (Terraform/CDK). You’re comfortable shipping stateless APIs, event-driven pipelines, and retrieval infrastructure (vector stores, caches) with strong observability (p95/p99 latency, distributed tracing, retries/circuit breakers), security (PII handling, encryption, least-privilege IAM, private networking to model endpoints), and progressive delivery (blue/green, canary, feature flags). Build prompt/config rollout workflows, manage token/cost budgets, apply caching/batching/streaming strategies, and implement graceful fallbacks across multiple model providers.
- Product and Domain Experience: Experience building enterprise (B2B SaaS) products is a strong plus. This means you understand considerations like user experience, scalability, security, and compliance. Past exposure to these types of products will help you design AI solutions that cater to a range of end-users.
- Strong Communication & Collaboration: Excellent interpersonal and communication skills, with an ability to explain complex AI concepts to non-technical stakeholders and create clarity from ambiguity. You work effectively in cross-functional teams and can coordinate with product, design, and ops teams to drive projects forward.
- Problem-Solving & Autonomy: Self-motivated and able to manage multiple priorities in a fast-paced environment. You have a demonstrated ability to troubleshoot complex systems, debug issues across the stack, and quickly prototype solutions. A "figure it out" attitude and creative approach to overcoming technical challenges are key.
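The evaluation practices named above (golden sets, automated prompt unit tests) can be sketched in a few lines. The golden set, the stubbed model, and the pass criterion here are hypothetical; a real harness would call your deployed LLM and use richer correctness checks.

```python
# Hypothetical golden set: each case pairs a question with a fact the answer must state.
GOLDEN_SET = [
    {"question": "What is Certa's domain?", "must_contain": "onboarding"},
    {"question": "Who are Certa's clients?", "must_contain": "Fortune 500"},
]

def stub_model(question):
    """Deterministic stand-in for an LLM endpoint, used so the harness is runnable."""
    answers = {
        "What is Certa's domain?": "Certa automates vendor onboarding.",
        "Who are Certa's clients?": "Fortune 500 and Fortune 1000 companies.",
    }
    return answers.get(question, "I don't know.")

def evaluate(model, golden_set):
    """Return the fraction of golden answers containing the expected fact."""
    passed = sum(
        1 for case in golden_set
        if case["must_contain"].lower() in model(case["question"]).lower()
    )
    return passed / len(golden_set)

score = evaluate(stub_model, GOLDEN_SET)  # 1.0 for the stub above
```

Running `evaluate` in CI against every prompt or config change is one simple way to catch regressions in helpfulness or correctness before they ship.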
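The graceful-fallback expectation in the Cloud & DevOps bullet can also be illustrated. This sketch shows the ordered-provider pattern with retries and exponential backoff; both providers and the error type are hypothetical stand-ins for real model endpoints.

```python
import time

class ProviderError(Exception):
    """Transient failure from a model provider (rate limit, timeout, outage)."""

def flaky_primary(prompt):
    """Hypothetical primary model endpoint that is currently failing."""
    raise ProviderError("rate limited")

def stable_fallback(prompt):
    """Hypothetical secondary provider used when the primary is down."""
    return f"[fallback] answer to: {prompt}"

def call_with_fallback(prompt, providers, retries=2, backoff=0.01):
    """Try each provider in order, retrying transient errors with backoff,
    then degrade gracefully to the next provider in the list."""
    for provider in providers:
        for attempt in range(retries):
            try:
                return provider(prompt)
            except ProviderError:
                time.sleep(backoff * (2 ** attempt))  # exponential backoff
    raise ProviderError("all providers exhausted")

reply = call_with_fallback("hello", [flaky_primary, stable_fallback])
```

In production this wrapper would sit behind the service's API, with circuit breakers and per-provider cost/latency metrics layered on top.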
Preferred (Bonus) Qualifications
- Full-Stack & Frontend Skills: While this is primarily a backend/AI role, having a deep full-stack background (especially modern frontend frameworks or Node.js) is beneficial. It will help you collaborate with frontend teams and build end-to-end solutions.
- Advanced AI Techniques: Familiarity with advanced techniques like fine-tuning LLMs (e.g., using LoRA adapters), reinforcement learning from human feedback (RLHF), or other emerging ML methodologies. Experience working with open-source LLMs (such as LLaMA, Mistral, etc.) and their tooling (e.g., model quantization, efficient inference libraries) is a plus.
- Multi-Modal and Agents: Experience developing complex agentic systems using LLMs (for example, multi-agent systems or integrating LLMs with tool networks) is a bonus. Similarly, knowledge of multi-modal AI (combining text with vision or other data) could be useful as we expand our product capabilities.
- Startup/Agile Environment: Prior experience in an early-stage startup or similarly fast-paced environment where you've worn multiple hats and adapted to rapid changes. This role will involve quick iteration and evolving requirements, so comfort with ambiguity and agility is valued.
- Community/Research Involvement: Active participation in the AI community (open-source contributions, research publications, or blogging about AI advancements) is appreciated. It demonstrates passion and keeps you at the cutting edge. If you have published research or have a portfolio of AI side projects, let us know!
Perks:
- Best-in-class compensation
- Fully-remote work with flexible schedules
- Continuous learning
- Massive opportunities for growth
- Yearly offsite
- Quarterly hacker house
- Comprehensive health coverage
- Parental leave
- Latest tech workstation
- Rockstar team to work with (we mean it!)