Join Our Team
Job Title: GenAI Platform Lead
Location: Chennai
Experience: 15+ years
Education Qualification: Any Graduate
Role Summary:
- The GenAI Platform Lead is responsible for architecting, building, and operating an enterprise-grade Generative AI platform that enables product teams to rapidly deliver secure, scalable, and cost-efficient AI-powered applications. This is an engineering-led role that combines deep backend and platform engineering expertise with applied Generative AI systems design.
- The role focuses on production-grade LLM integration, ensuring platform reliability, governance, and developer enablement to support the development of robust and scalable AI solutions.
Mandatory Requirements:
- 15+ years of overall IT experience, with strong expertise in:
- Python
- RESTful APIs
- RDBMS
- 2+ years of hands-on experience in Generative AI / LLM-based systems, including building or operating production-grade AI platforms or services.
- Proven experience in leading backend or platform engineering teams in an enterprise environment.
- Hands-on exposure to LLM platforms, including:
- Amazon Bedrock
- Azure OpenAI
- OpenAI
- Anthropic
- Strong ownership mindset with the ability to drive:
- Architecture
- Execution
- Governance
- Operational Excellence
Follow us on LinkedIn to know about our latest job openings!
Submit the form below to apply
Job Level: 10 – 15+ Years
Job Title: Senior PySpark ETL Engineer
Location: Chennai
Experience: 10–12 years
Education Qualification: Any Graduate
Role Summary:
- The Senior PySpark ETL Engineer is responsible for designing, building, optimizing, and operating scalable data pipelines using Apache Spark (PySpark).
- This role focuses on high-volume batch processing, with optional exposure to streaming data, ensuring performance, reliability, data quality, and cost efficiency across enterprise data platforms.
- The position requires strong Python expertise, hands-on experience with Spark, and deep knowledge of SQL and data modeling.
- It also demands the ability to take full ownership of data pipelines, managing them end-to-end in a production environment.
Mandatory Requirements:
- 10–12 years of overall IT experience, with a strong focus on data engineering and ETL
- 3+ years of hands-on experience with PySpark / Apache Spark in production environments
- Strong experience in designing and implementing ETL / ELT pipelines at scale
- Excellent knowledge of SQL and relational database concepts
- Experience in handling large datasets in distributed environments
- Strong ownership mindset with excellent problem-solving skills and ability to independently manage production pipelines
- Hands-on experience with AWS EMR / Spark on Kubernetes / S3 and orchestration tools such as Airflow / Databricks / Azure Data Factory (ADF)
Job Level: 10 – 15+ Years
Job Title: Technical Lead – Python Fullstack
Location: Chennai
Experience: 12+ years
Education Qualification: Any Graduate
Role Summary:
- 12+ years of IT experience, with strong expertise in Python, RESTful APIs, and RDBMS
- Proven leadership capability with excellent communication and interpersonal skills, along with a strong sense of ownership and accountability
- Ability to take end-to-end responsibility for technical design, delivery, and quality
Mandatory Skills:
- 12+ years of overall IT experience, including at least 5 years designing and developing backend/web applications using Python
- Strong expertise in Python backend frameworks such as Django, Flask, FastAPI, or equivalent
- Solid experience in API-first design, implementation, and optimization of RESTful services
- Deep understanding of middleware architecture, backend services, and system integrations
- Hands-on experience in database schema design, implementation, and performance tuning using RDBMS (PostgreSQL/MySQL)
- Strong proficiency in:
- Writing complex SQL queries
- Stored procedures, indexing, and performance optimization
- Transaction handling and data consistency
- Experience with ORM frameworks (Django ORM, SQLAlchemy) and understanding of their performance trade-offs
- Experience building cloud-native applications with focus on:
- Scalability, Performance, Security, and Reliability
- Code reviews, Git version control, CI/CD pipelines, deployment & operations
Job Level: 10 – 15+ Years
Job Title: Fulfillment Operations Analyst
Location: Ultralag
Education: Bachelor’s Degree preferred
Experience: 2–4+ years in operations / fulfillment / supply chain / order management
No. of Openings: 1+
Summary:
- The Fulfillment Operations Analyst plays a critical role in ensuring a seamless order-to-cash process by managing fulfillment operations and coordinating across multiple teams such as Sales, Supply Chain, Finance, and Logistics.
- This role focuses on executing high-volume transactional workflows, maintaining data accuracy, resolving order issues, and supporting operational improvements to ensure timely and accurate delivery of customer orders.
Roles and Responsibilities:
- Manage end-to-end fulfillment execution within ERP systems (e.g., NetSuite)
- Process and validate orders, including CMPOs, manual fulfillments, and shipment coordination
- Handle high-volume transactional workflows with strong accuracy and attention to detail
- Monitor and manage expedite requests and operational escalations
- Act as point of contact for order-related inquiries and provide timely responses
- Collaborate with cross-functional teams (Sales, Finance, Supply Chain, Logistics) to resolve order issues
- Track backlog and monitor order health to ensure on-time delivery
- Ensure data integrity across pricing, configurations, BOMs, and system records
- Identify and mitigate risks that could impact fulfillment timelines
- Support process improvements and contribute to operational efficiency initiatives
- Assist with reporting and documentation using tools like Google Sheets and Tableau
Required Skills:
- Experience working with ERP systems (e.g., NetSuite)
- Familiarity with CRM tools such as Salesforce
- Strong analytical and problem-solving skills
- High attention to detail and accuracy
- Ability to manage high-volume workloads in fast-paced environments
- Strong communication and cross-functional collaboration skills
- Proficiency in Google Workspace (Sheets, Docs, Slides)
- Ability to manage multiple priorities and meet deadlines
Desired Skills:
- Experience with Tableau, Asana, or Smartsheet
- Knowledge of B2B EDI transactions
- Background in supply chain, logistics, or order management
- Experience with process improvement initiatives
- Ability to work in global or cross-regional environments
Job Level: 1 – 5+ Years
Job Title: Data Engineer – Customer Success Analytics
Location: Hybrid, Ultralag, Heredia
Education: Bachelor’s degree in Computer Science, Engineering, Information Systems, or related field (or equivalent experience)
Experience: 2+ years in data engineering, analytics engineering, or similar roles
No. of Openings: 1
Summary:
- The Data Engineer – Customer Success Analytics is responsible for building and maintaining the data infrastructure that supports Customer Success operations and decision-making.
- This role focuses on developing scalable data pipelines, improving data models, and ensuring high-quality, reliable datasets for reporting and analytics.
- The ideal candidate is hands-on, detail-oriented, and passionate about transforming complex data into actionable insights that drive customer outcomes, operational efficiency, and future automation initiatives.
Roles and Responsibilities:
- Design, build, and maintain ETL/ELT pipelines from systems such as Salesforce and other operational platforms into data warehouses (e.g., Snowflake).
- Audit, normalize, and restructure existing data models, tables, and views to improve performance, consistency, and usability.
- Develop clean, scalable, and analytics-ready data models to support dashboards, reporting, and operational workflows.
- Translate business requirements (e.g., ARR, renewals, churn, consumption, customer health) into structured and well-documented data definitions.
- Investigate and resolve data discrepancies by identifying root causes and implementing long-term fixes.
- Optimize query performance, data processing, and overall data warehouse efficiency.
- Implement data validation frameworks, monitoring processes, and quality controls to ensure data accuracy and reliability.
- Document data lineage, transformations, and definitions to support governance and transparency.
- Collaborate with Data Analysts, Customer Success teams, and Operations to build scalable and reusable datasets.
- Prepare structured datasets to support automation initiatives, reporting improvements, and future AI-driven use cases.
Required Skills:
- 2+ years of experience working with data in roles such as Data Engineer, Data Analyst, Analytics Engineer, or similar.
- Strong SQL skills and experience with ETL/ELT processes and data modeling.
- Experience with at least one programming language (e.g., Python, Scala, C#, or similar).
- Hands-on experience with data warehouse technologies (e.g., Snowflake, BigQuery, Spark).
- Familiarity with data transformation tools such as dbt (data build tool).
- Experience with version control tools (Git/GitHub) and development workflows.
- Strong understanding of data modeling principles (normalization, dimensional modeling, schema design).
- Proven ability to identify and improve data pipeline and reporting performance.
- Strong analytical and problem-solving skills, particularly in resolving data inconsistencies.
- Ability to work cross-functionally and communicate technical concepts clearly.
- English proficiency at C1 level
Desired Skills:
- Experience working with Salesforce data models (Accounts, Opportunities, Contracts, Subscriptions).
- Familiarity with tools such as Tableau, Gainsight, or similar analytics platforms.
- Experience supporting SaaS or subscription-based business models (ARR, renewals, churn, consumption).
- Exposure to automation, predictive analytics, or AI-related data preparation.
- Experience with data governance, access control, and documentation standards.
- Knowledge of REST APIs and server-side technologies (e.g., Node.js, TypeScript, Python).
Job Level: 1 – 5+ Years
Recruitment fraud alert
The team at Movate has been alerted to fraudulent messages, hoax emails from certain employment agencies, and individuals requesting money from candidates in exchange for a position at Movate.
Movate is a merit-based employer and does not authorize any agency or individual to collect money or request a security cash deposit for employment at Movate.
As a job seeker, please keep the following guidelines in mind to identify hoax job offers and emails:
- We do not send job offers from free email services such as Gmail, Rediffmail, Yahoo Mail, or Hotmail.
- We never request money for any purpose before, during, or after the hiring process.
- The Movate recruitment team does not collect personal information or sensitive documents such as bank account details or credit card information as part of the hiring process.
Stay safe and stay vigilant.
Employment Verification
For employment verification inquiries, kindly reach out to our dedicated team at employment.verification@movate.com. We’ll assist you in confirming relevant employment details as soon as possible.