ProductBased.in

Land Your Dream Job at India's Top Product-Based Companies

Back to All Jobs

SDE II (AI/ML)

Mindtickle
Mindtickle logo
Location
Pune, Maharashtra
Job Type
Full-time
Posted
April 1, 2026

Job Description

Who we are

Mindtickle is the leading AI-powered revenue enablement platform that combines on-the-job learning and deal execution to drive behavior change and get more revenue per rep. Mindtickle is recognized as a market leader by top industry analysts and is ranked by G2 as the #1 sales onboarding and training product. Our commitment to innovation has also earned us the "AI-based Sales Solution of the Year" award in the 8th annual AI Breakthrough Awards program and a Gold Stevie Award for Sales and Customer Service, recognition of our dedication to both product excellence and outstanding customer support.

Role Overview

As an SDE-2 in CoE-ML, you are an individual contributor who owns modules end-to-end, brings strong engineering judgment to AI/ML problems, and actively raises the technical bar of the team. You have moved beyond task execution — you drive design, anticipate failure modes, and begin to influence the technical direction of your pod.

You will:

  • Own, improve, and extend production AI/ML components — deeply understanding what exists before choosing to build new.

  • Take end-to-end responsibility for the reliability, performance, and cost-efficiency of the AI modules you own.

  • Contribute meaningfully to architecture discussions and challenge designs with data and first-principles thinking.

  • Actively leverage AI-assisted development tools and agentic workflows to multiply your own productivity.

  • Mentor SDE-1 engineers and interns, sharing technical knowledge and engineering best practices.

  • Partner closely with product managers, QA, data engineering, and DevOps to ship cohesive AI-powered features.

Key Responsibilities

AI/ML Development & Productionization

  • Design, implement, and continuously improve production-grade AI/ML components — including LLM-powered features, RAG pipelines, agentic workflows, and model inference services. You are expected to deeply understand existing systems, identify opportunities to enhance their quality, reliability, or performance, and own those improvements end-to-end.

  • Improve and extend existing AI infrastructure — including prompt pipelines, retrieval systems, embedding workflows, and agentic orchestration layers — rather than defaulting to greenfield solutions.

  • Write clean, well-tested, maintainable code in Python (primary) and optionally Java or Go, following software engineering best practices.

  • Implement unit, integration, and regression tests for AI components, including evaluation harnesses for LLM output quality.

  • Contribute to CI/CD pipelines and ensure smooth deployment of AI services on AWS/Kubernetes infrastructure.

  • Optimize model inference for latency, throughput, and cost — identifying bottlenecks and proposing concrete solutions.
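The regression tests and evaluation harnesses mentioned above might look like the following minimal sketch. Everything here is illustrative: `summarize` is a hypothetical stand-in for a real inference call, and the golden cases pin properties of the output (required phrases, length bounds) rather than exact strings, since LLM output is not deterministic.

```python
# Sketch of a regression-style evaluation harness for an LLM-backed component.
# `summarize` is a hypothetical placeholder for a production inference call.

def summarize(text: str) -> str:
    """Placeholder for a real LLM call; returns a canned summary here."""
    return "Mindtickle is a revenue enablement platform."

# Golden cases assert properties of the output rather than exact strings.
GOLDEN_CASES = [
    {
        "input": "Mindtickle is the leading AI-powered revenue enablement platform...",
        "must_contain": ["revenue enablement"],
        "max_words": 40,
    },
]

def run_regression(cases: list[dict]) -> list[str]:
    """Return a list of failure messages; empty means the suite passed."""
    failures = []
    for case in cases:
        output = summarize(case["input"])
        for phrase in case["must_contain"]:
            if phrase not in output:
                failures.append(f"missing phrase: {phrase!r}")
        if len(output.split()) > case["max_words"]:
            failures.append("output exceeds word budget")
    return failures
```

In practice a harness like this would run in CI against recorded or live model outputs, so a model or prompt change that regresses quality fails the build.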

Model Quality & Evaluation

  • Build and maintain evaluation frameworks to assess model performance, output quality, and regression across releases — using platforms such as Maxim, LangFuse, or Weights & Biases.

  • Define and track quality metrics (precision, recall, BLEU, ROUGE, LLM-as-judge scores, or task-specific KPIs) for modules under ownership.

  • Contribute to prompt engineering, few-shot design, and model selection to measurably improve output quality.

  • Treat evaluation as an ongoing operational discipline — not a one-time pre-release check — and integrate it into the development and deployment lifecycle.

  • Identify data quality issues affecting model performance and work with data engineering to resolve them.
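Of the quality metrics listed above, precision and recall are the simplest to compute from labeled evaluation data. A minimal sketch (binary labels, pure standard library):

```python
# Precision: of the items the model flagged positive, how many were right?
# Recall: of the truly positive items, how many did the model catch?

def precision_recall(predicted: list[int], actual: list[int]) -> tuple[float, float]:
    tp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 1)
    fp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 0)
    fn = sum(1 for p, a in zip(predicted, actual) if p == 0 and a == 1)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Example: 3 predicted positives, 2 correct -> precision 2/3; both true
# positives caught -> recall 1.0.
p, r = precision_recall([1, 1, 0, 1], [1, 0, 0, 1])
```

Tracking these per release (for example, as a dashboard metric or a CI gate) is what turns a one-off measurement into the ongoing discipline the bullets describe.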

Production Operations & Observability

  • Monitor AI services in production using infrastructure observability tooling such as Datadog, Prometheus, and Grafana.

  • Use AI gateway platforms (e.g., LiteLLM, Portkey, TrueFoundry) to track LLM traffic, enforce per-project cost attribution, and maintain governance over model access across environments.

  • Instrument and observe agentic workflows built on frameworks such as LangGraph or CrewAI — tracing multi-step executions, identifying failure points, and improving reliability.

  • Respond to production incidents, conduct root cause analysis, and implement preventive fixes.

  • Participate in on-call rotations and contribute to runbooks and post-mortems.

  • Proactively surface model drift, latency degradation, and cost anomalies before they escalate.
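Surfacing latency degradation before it escalates can start as simple threshold logic on top of whatever metrics pipeline is in place (Datadog, Prometheus, etc.). A minimal sketch, with illustrative numbers and a hypothetical function name — real systems would use rolling windows and percentiles rather than a raw mean:

```python
# Flag recent latency samples that exceed the baseline mean by more than
# k sample standard deviations. Thresholds here are illustrative only.
from statistics import mean, stdev

def latency_anomalies(
    baseline_ms: list[float], recent_ms: list[float], k: float = 3.0
) -> list[float]:
    mu = mean(baseline_ms)           # baseline needs >= 2 samples for stdev
    threshold = mu + k * stdev(baseline_ms)
    return [sample for sample in recent_ms if sample > threshold]
```

The same shape applies to per-project cost attribution: compare today's spend per project against its historical baseline and alert on large deviations.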

Architecture & Design

  • Lead low-level design (LLD) for features and modules under ownership, and actively participate in high-level design (HLD) discussions.

  • Surface tradeoffs around scalability, cost, and reliability — and back recommendations with data from production systems.

  • Document technical designs, API contracts, and component behaviours clearly and keep them up to date.

  • Propose and drive improvements to existing systems based on production learnings.

Collaboration & Communication

  • Work closely with SDE-3s and Tech Leads to align on design decisions and delivery plans.

  • Communicate progress, blockers, and technical risks clearly to the pod and stakeholders — without waiting to be asked.

  • Collaborate with product and QA to translate requirements into precise technical acceptance criteria.

  • Contribute to design reviews and provide constructive, evidence-based feedback on peers' work.

Mentorship & Knowledge Sharing

  • Mentor SDE-1s and interns on technical approaches, code quality, debugging methodology, and AI tooling.

  • Document learnings, failure analyses, and best practices for the team's knowledge base.

  • Participate in team tech talks, brown-bags, and internal AI community events.

AI-Native Ways of Working

  • Use AI-assisted development tools (e.g., C...

Ready to Apply?

Apply for this Position

You'll be redirected to the company's application page


Job Information

Source: lever
Remote Type: onsite
Allowed Locations: Pune, Maharashtra
Skills & Tags:
Engineering

Get Jobs Like This

New Mindtickle jobs and similar roles, straight to your inbox.

Weekly digest. Unsubscribe anytime.


Considering Relocating for This Job?

Before you apply, see how far your salary will go in Pune, Maharashtra. Compare take-home pay, rent, food & transport costs vs other tech cities.

Check Cost of Living →