
3093 Data Processing Jobs - Page 3

Set up a Job Alert
JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

1.0 - 4.0 years

2 - 6 Lacs

Nagar

Work from Office

At Davies North America, we're at the forefront of innovation and excellence, blending cutting-edge technology with top-tier professional services. As a vital part of the global Davies Group, we help businesses navigate risk, optimize operations, and spearhead transformation in the insurance and regulated sectors. We're on the lookout for an Indexer to join our growing team. As an Indexer, you will organize and make accessible large volumes of documents by indexing them to the correct category, facilitating quick and accurate retrieval of information.

Posted 1 week ago

Apply

2.0 - 6.0 years

11 - 15 Lacs

Bengaluru

Work from Office

Become part of our team!

Team introduction: The Department of Data and Analytics has its hub at DTICI Bengaluru, India, catering to the data and analytics requirements of Daimler Truck. We are a 150+ strong team comprising Data Engineers, Data Scientists, AI Engineers and Analysts.

What makes the team/department special: A highly energetic and young team that solves business problems with the potential to define how we build our trucks.

Requirements:
- Good communication and presentation skills.
- Strong knowledge of Azure components: Azure Data Lake, Azure Data Factory, Azure SQL, Azure Databricks.
- Master's in Statistics/Machine Learning/Economics/Big Data Analytics (full-time).
- Strong experience in object-oriented programming using Python and PySpark.
- Experience developing, training, and evaluating (supervised/unsupervised) machine learning models such as linear regression, logistic regression, k-means, time series forecasting, hypothesis testing (ANOVA, t-test, etc.), random forest, SVMs, Naive Bayes, gradient boosting, kNN, deep learning algorithms (CNN, ANN), reinforcement learning, and anomaly detection.
- Familiarity with statistical distributions, correlation analysis, descriptive statistics, ROC, F1-score, etc.
- Proficiency in Python, scikit-learn and ML libraries.
- Strong understanding of statistical modeling and data preprocessing.
- At least 2 to 3 use cases giving end-to-end experience of developing and deploying a data science solution: problem definition (whiteboarding), hypothesis building and testing, exploratory data analysis, data engineering to curate data for the model, training, testing, deployment and MLOps.

Job Title: Data Scientist (Iterative Model Development - Traditional ML)

Job Summary: We are seeking a Data Scientist with expertise in building and optimizing traditional ML models, specifically within the automobile domain. The ideal candidate will have strong experience in classical ML algorithms, survival models, time series, statistical modelling, and deployment of predictive models. Experience with streaming data and telematics data solutions is highly desirable.

Key Responsibilities:
- Work with Daimler's businesses and enabling areas to support key business functions.
- Contribute to storyboarding our results, support recommendations for an executive-level audience and produce leadership-quality deliverables.
- Develop and optimize traditional ML models (regression, decision trees, SVM, XGBoost, etc.) with strong knowledge of hypothesis testing.
- Perform feature engineering and dataset preprocessing for model training.
- Build scalable machine learning pipelines for real-world applications.
- Work on time series forecasting, anomaly detection, recommendation systems, etc.
- Optimize model performance using hyperparameter tuning and cross-validation.
- Deploy ML models into production using cloud and on-premises solutions.
- Collaborate with cross-functional teams to integrate ML models into business processes.
- Analyze and interpret telematics data to derive actionable insights.
- Develop solutions for streaming data processing and real-time analytics.
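The "hyperparameter tuning and cross-validation" this posting asks for can be illustrated with a minimal, self-contained sketch. The data and threshold "model" below are toy examples invented for illustration; a real pipeline would typically use scikit-learn's GridSearchCV instead.

```python
# Illustrative sketch: grid search over one hyperparameter, scored by
# k-fold cross-validation, using only the standard library.
from statistics import mean

def k_fold_splits(n_samples, k):
    """Yield (train_idx, test_idx) pairs for k roughly equal folds."""
    fold_size = n_samples // k
    indices = list(range(n_samples))
    for i in range(k):
        test = indices[i * fold_size:(i + 1) * fold_size]
        train = indices[:i * fold_size] + indices[(i + 1) * fold_size:]
        yield train, test

def fit_threshold(xs, margin):
    """Toy 'model': predict class 1 when x exceeds mean(train) + margin."""
    cutoff = mean(xs) + margin
    return lambda x: 1 if x > cutoff else 0

def cv_accuracy(xs, ys, margin, k=5):
    """Mean held-out accuracy of the toy model across k folds."""
    scores = []
    for train, test in k_fold_splits(len(xs), k):
        model = fit_threshold([xs[i] for i in train], margin)
        correct = sum(model(xs[i]) == ys[i] for i in test)
        scores.append(correct / len(test))
    return mean(scores)

# Grid search: pick the margin with the best cross-validated accuracy.
xs = [0.1, 0.2, 0.3, 0.4, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6]
ys = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
best_margin = max([-0.2, 0.0, 0.2], key=lambda m: cv_accuracy(xs, ys, m))
```

The same shape (candidate grid, per-candidate CV score, argmax) scales up directly to the regression/XGBoost models the role describes.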

Posted 1 week ago

Apply

7.0 - 13.0 years

40 - 45 Lacs

Gurugram

Work from Office

What's in it for YOU
SBI Card truly lives by the work-life balance philosophy. We offer a robust wellness and wellbeing program to support the mental and physical health of our employees. Admirable work deserves to be rewarded, so we have a well-curated bouquet of rewards and recognition programs for employees.
- Dynamic, inclusive and diverse team culture
- Gender-neutral policy
- Inclusive health benefits for all: medical insurance, personal accident, group term life insurance, annual health checkup, dental and OPD benefits
- Commitment to the overall development of employees through a comprehensive learning & development framework

Role Purpose
The AI & Digital Channel leader will be primarily responsible for digital portals (customer-facing and internal) and AI platforms, including conversational platforms and AI process automation using robotics. This primarily covers the digital portals (Website, Intranet, Audit Portal, Invoice Portal, Scrabble, Goldmine, Digital Nerve Center, SPRINT, Insta SMS) and AI platforms (ILA, Drishti, WhatsApp, Live Chat, RPA platform).

Role Accountability
The role will lead all digital portals, conversational platforms and process automation using RPA:
- Developing and refining AI models to automate data processing and decision-making, ensuring the data used is accurate, relevant, and compliant with regulations
- Creating and implementing strategies for AI and digital channels that align with business goals
- Using data analytics to make informed decisions and optimize performance across digital channels
- Overseeing the integration and deployment of AI technologies within digital channels, collaborating with IT and development teams to ensure smooth implementation

This role is accountable for:
- Digital Strategy & Platforms: lead digital portals; develop the Gen AI strategy, conversational AI and RPA; drive adoption to enhance CX and reduce operational cost
- Program Leadership: manage large-scale digital transformations and IT program planning, execution, risk mitigation, and stakeholder alignment
- Business Alignment: act as IT lead for Marketing, CS & post-acquisition functions; bridge business and tech with deep card-domain understanding
- Service Delivery: ensure ITIL-based service operations across digital channels with strong SLA adherence and high application availability
- Team & People Management: build and lead high-performing teams with a focus on collaboration, growth, org structure, and delivery ownership
- Governance & Reviews: drive weekly program reviews and steering committees; maintain tight governance with risk/issue tracking and reporting
- Budgeting & Controllership: lead budget planning (OPEX/CAPEX), track expenses, drive program approvals and financial discipline
- Innovation & Tech Trends: promote ideation, innovation, and ongoing tech awareness to shape future-ready digital initiatives
- Cross-functional Collaboration: engage CSMO, Ops, and CS leaders to co-create and deliver business solutions with clarity and rigor
- Change Management: navigate complex stakeholder environments to drive large-scale change and cross-team alignment

Measures of Success
- Strategic initiatives delivered on time, within budget, and meeting defined business benefits
- Monthly connects with senior leadership and DRs conducted effectively to align business priorities
- Programs and solutions prioritized with clear business agreement and strategic alignment
- Low attrition with a motivated, high-performing team consistently meeting their deliverables
- Project review and prioritization meetings operationalized monthly with no major business escalations
- Operational metrics consistently achieved per agreed SLAs across systems and processes
- Zero P1 defects in production; all programs delivered with quality assurance and budget compliance

Technical Skills / Experience / Certifications
- Engineering mindset: strong technology orientation
- DevOps execution: hands-on operational expertise
- Automation focus: process-first automation drive
- Monitoring skills: proficient in tools/scripting
- Strong understanding of AI technologies & platforms: GenAI LLMs, LangChain, UiPath and automation process frameworks

Competencies critical to the role
- Technology leadership: strategic tech vision
- Delivery mindset: resourceful execution focus
- Program management: risk-aware governance
- Communication skills: clear, effective articulation
- Customer centricity: stakeholder-first thinking
- Analytical thinking: data-driven problem solving
- Operational excellence: productivity and impact
- Collaboration skills: team-oriented approach

Qualification: B.Tech / MBA from a reputed business school
Preferred Industry: Credit Cards / NBFC / BFSI / Financial domain

Posted 1 week ago

Apply

4.0 - 9.0 years

20 - 25 Lacs

Bengaluru

Work from Office

Job description
Building off our Cloud momentum, Oracle has formed a new organization: Health Data Intelligence. This team will focus on product development and product strategy for Oracle Health, while building out a complete platform supporting modernized, automated healthcare. This is a net new line of business, constructed with an entrepreneurial spirit that promotes an energetic and creative environment. We are unencumbered and will need your contribution to make it a world-class engineering center with a focus on excellence. Oracle Health Data Analytics has a rare opportunity to play a critical role in how Oracle Health products impact and disrupt the healthcare industry by transforming how healthcare and technology intersect.

Responsibilities
As a member of the software engineering division, you will take an active role in the definition and evolution of standard practices and procedures. You will:
- Define specifications for significant new projects, and specify, design and develop software according to those specifications
- Perform professional software development tasks associated with developing, designing and debugging software applications or operating systems
- Design and build distributed, scalable, and fault-tolerant software systems
- Build cloud services on top of the modern OCI infrastructure
- Participate in the entire software lifecycle: design, development, quality assurance, and production
- Invest in the best engineering and operational practices upfront to ensure our software quality bar is high
- Optimize data processing pipelines for orders-of-magnitude higher throughput and faster latencies
- Leverage a plethora of internal tooling at OCI to develop, build, deploy, and troubleshoot software

Qualifications
- 4+ years of experience in the software industry working on design, development and delivery of highly scalable products and services
- Understanding of the entire product development lifecycle: understanding and refining technical specifications, the HLD and LLD of world-class products and services, refining the architecture by providing feedback and suggestions, developing and reviewing code, driving DevOps, and managing releases and operations
- Strong knowledge of Java or JVM-based languages
- Experience with multi-threading and parallel processing
- Strong knowledge of big data technologies like Spark, Hadoop MapReduce, Crunch, etc.
- Past experience building scalable, performant, and secure services/modules
- Understanding of microservices architecture and API design
- Experience with container platforms
- Good understanding of testing methodologies
- Experience with CI/CD technologies
- Experience with observability tools like Splunk, New Relic, etc.
- Good understanding of versioning tools like Git/SVN
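As a small illustration of the "multi-threading and parallel processing" requirement above: the basic pattern is fanning a per-record transform out over a worker pool. The role itself is JVM-centric; this Python sketch with made-up record data just shows the shape.

```python
# Illustrative only: fan a per-record transform out over a thread pool.
# A ThreadPoolExecutor suits I/O-bound steps; CPU-bound work is better
# served by a ProcessPoolExecutor or a framework like Spark.
from concurrent.futures import ThreadPoolExecutor

def normalize(record):
    """Toy per-record transform: clean the key, scale the value."""
    name, value = record
    return (name.strip().lower(), value * 2)

def process_batch(records, workers=4):
    # pool.map preserves input order, keeping downstream joins deterministic
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(normalize, records))

batch = [("  Alice ", 1), ("BOB", 2), ("Carol  ", 3)]
cleaned = process_batch(batch)
```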

Posted 1 week ago

Apply

8.0 - 13.0 years

30 - 35 Lacs

Mumbai

Work from Office

About this role
Aladdin Data: BlackRock is one of the world's leading asset management firms, and Aladdin is the firm's end-to-end operating system for investment professionals to see their whole portfolio, understand risk exposure, and act with precision. Aladdin is our operating platform to manage financial portfolios. It unites the client data, operators, and technology needed to manage transactions in real time through every step of the investment process. Aladdin Data is at the core of the Aladdin platform, and increasingly, our ability to consume, store, analyze, and gain insight from data is a key component of our competitive advantage. Our mission is to deliver critical insights to our stakeholders, enabling them to make data-driven decisions. BlackRock's Data Operations team is at the heart of our data ecosystem, ensuring seamless data pipeline operations across the firm. Within this team, the Process Engineering group focuses on building tools to enhance observability, improve operator experience, streamline operations, and provide analytics that drive continuous improvement across the organization.

Key Responsibilities

Strategic Leadership
- Drive the roadmap for process engineering initiatives that align with broader Data Operations and enterprise objectives
- Partner on efforts to modernize legacy workflows and build scalable, reusable solutions that support operational efficiency, risk reduction, and enhanced observability
- Define and track success metrics for operational performance and process health across critical data pipelines

Process Engineering & Solutioning
- Design and develop tools and products to support operational efficiency, observability, risk management, and KPI tracking
- Define success criteria for data operations in collaboration with stakeholders across teams
- Break down complex data challenges into scalable, manageable solutions aligned with business needs
- Proactively identify operational inefficiencies and deliver data-driven improvements

Data Insights & Visualization
- Design data science solutions to analyze vendor data trends, identify anomalies, and surface actionable insights for business users and data stewards
- Develop and maintain dashboards (e.g., Power BI, Tableau) that provide real-time visibility into vendor data quality, usage patterns, and operational health
- Create metrics and KPIs that measure vendor data performance, relevance, and alignment with business needs

Quality Control & Data Governance
- Build automated QC frameworks and anomaly detection models to validate data integrity across ingestion points
- Work with data engineering and governance teams to embed robust validation rules and control checks into pipelines
- Reduce manual oversight by building scalable, intelligent solutions that detect, report, and in some cases self-heal data issues

Testing & Quality Assurance
- Collaborate with data engineering and stewardship teams to validate data integrity throughout ETL processes
- Lead the automation of testing frameworks for deploying new datasets or pipelines

Collaboration & Delivery
- Work closely with internal and external stakeholders to align technical solutions with business objectives
- Communicate effectively with both technical and non-technical teams
- Operate in an agile environment, managing multiple priorities and ensuring timely delivery of high-quality data solutions

Experience & Education
- 8+ years of experience in data engineering, data operations, analytics, or related fields, with at least 3 years in a leadership or senior IC capacity
- Bachelor's or Master's degree in a quantitative field (Computer Science, Data Science, Statistics, Engineering, or Finance)
- Experience working with financial market data providers (e.g., Bloomberg, Refinitiv, MSCI) is highly valued
- Proven track record of building and deploying ML models

Technical Expertise
- Deep proficiency in SQL and Python, with hands-on experience in data visualization (Power BI, Tableau), cloud data platforms (e.g., Snowflake), and Unix-based systems
- Exposure to modern frontend frameworks (React JS) and microservices-based architectures is a strong plus
- Familiarity with various database systems (relational, NoSQL, graph) and scalable data processing techniques

Leadership & Communication Skills
- Proven ability to lead cross-functional teams and influence without authority in a global matrixed organization
- Exceptional communication skills, with a track record of presenting complex technical topics to senior stakeholders and non-technical audiences
- Strong organizational and prioritization skills, with a results-oriented mindset and experience in agile project delivery

Preferred Qualifications
- Certification in Snowflake or equivalent cloud data platforms
- Certification in Power BI or other analytics tools
- Experience leading Agile teams and driving enterprise-level transformation initiatives

Our benefits

Our hybrid work model
BlackRock's hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person, aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.

This mission would not be possible without our smartest investment: the one we make in our employees. It's why we're dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock. BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
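The "automated QC frameworks and anomaly detection models" this role describes can be sketched, in the simplest possible form, as a z-score gate over a numeric feed. The field values and threshold below are invented for illustration; production checks would be considerably richer.

```python
# Illustrative sketch: a basic automated QC gate for an ingested feed.
from statistics import mean, stdev

def zscore_anomalies(values, threshold=3.0):
    """Return indices of values more than `threshold` standard
    deviations from the mean of the batch."""
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []  # constant feed: nothing to flag
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# A vendor price feed with one bad tick. With only 10 points, a looser
# threshold than the usual 3-sigma is needed to surface it.
prices = [100.1, 100.3, 99.9, 100.2, 100.0, 100.4, 99.8, 100.2, 100.1, 250.0]
flagged = zscore_anomalies(prices, threshold=2.0)
```

In a real pipeline this check would run per-ingestion-point, feeding the dashboards and self-healing logic described above.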

Posted 1 week ago

Apply

4.0 - 9.0 years

11 - 15 Lacs

Bengaluru

Work from Office

About Plum
Plum is an employee insurance and health benefits platform focused on making health insurance simple, accessible and inclusive for modern organizations. Healthcare in India is seeing a phenomenal shift, with inflation in healthcare costs at 3x that of general inflation. A majority of Indians are unable to afford health insurance on their own, and so as many as 600mn Indians will likely have to depend on employer-sponsored insurance. Plum is on a mission to provide the highest quality insurance and healthcare to 10 million lives by FY2030, through companies that care. Plum is backed by Tiger Global and Peak XV Partners.

Position Overview
We are seeking an experienced Senior Cybersecurity Engineer with 4+ years of expertise to lead our security initiatives and protect our healthcare platform. This role is critical in ensuring the security, privacy, and compliance of systems that handle sensitive healthcare data for millions of users while enabling rapid business growth.

Key Responsibilities

Core Security Expertise
- Demonstrate deep understanding of security principles and concepts across multiple disciplines
- Lead expertise across critical security domains, including: advanced incident response and forensics; red team operations and adversarial simulation; sophisticated malware analysis and reverse engineering; attack metrics development and threat modeling; comprehensive vulnerability assessment and penetration testing; proactive threat hunting and root cause analysis; malicious code analysis and deciphering techniques; advanced SIEM analysis, XDR integration, and SOAR orchestration
- Execute complex incident triage based on advanced security parameters and established methodologies
- Leverage strong scripting expertise (Python, C#, JSON, shell scripting) for security automation and tool development
- Design and architect secure systems, networks, and application infrastructures for healthcare environments
- Maintain hands-on expertise with enterprise security tools including Symantec Endpoint Protection & Encryption, Tenable Nessus, Kali Linux, and Burp Suite

Cloud Security Architecture & Engineering
- Design and implement enterprise-grade secure cloud architectures aligned with industry frameworks (CIS, NIST, ISO 27001)
- Define, maintain, and enforce security patterns for Infrastructure as Code implementations using Terraform and Helm
- Architect comprehensive security for AWS and GCP services, Kubernetes clusters (EKS/GKE), serverless functions, and containerized workloads
- Lead the implementation of zero-trust security models and micro-segmentation strategies
- Design secure multi-cloud and hybrid cloud architectures for healthcare data processing

Security Operations & Monitoring
- Implement and optimize native cloud security tools including AWS Security Hub, GCP Security Command Center, and integrated third-party platforms
- Deploy and manage advanced security platforms including CrowdStrike, Snyk, Wiz, Prisma Cloud, and SentinelOne
- Configure and maintain Cloud Security Posture Management
- Integrate comprehensive security posture monitoring with observability tools like DataDog and enterprise SIEM platforms
- Conduct regular security audits, automated vulnerability assessments, and compliance verification checks
- Develop custom security metrics and KPIs for executive reporting

Incident Response & Threat Detection
- Lead investigation and response activities for complex cloud-based security incidents and data breaches
- Develop, maintain, and continuously improve incident response playbooks and forensics procedures
- Leverage threat intelligence feeds and frameworks to enhance detection capabilities and threat hunting activities
- Coordinate with external security vendors and law enforcement during major incidents
- Conduct post-incident reviews and implement preventive measures

Governance, Risk & Compliance
- Support and lead regulatory audits, comprehensive risk assessments, and compliance initiatives (ISO 27001, GDPR, SOC 2)
- Define, implement, and enforce enterprise cloud security standards, policies, and procedures
- Provide subject matter expertise in secure access management, data protection strategies, and encryption key management
- Manage vendor security assessments and third-party risk evaluations
- Develop and maintain security awareness training programs for technical and non-technical staff

Required Qualifications

Experience & Education
- 4+ years of hands-on experience in cybersecurity roles with a proven track record of securing production environments at scale
- Bachelor's or Master's degree in Computer Science, Cybersecurity, Information Security, or a related technical field
- Experience in healthcare, fintech, or other highly regulated industries is strongly preferred

Core Technical Expertise
- Cloud security platforms: expert-level proficiency in cloud security architectures. AWS: deep knowledge of AWS security services, Security Hub, GuardDuty, CloudTrail, Config, IAM, KMS, and VPC security. GCP: comprehensive understanding of Security Command Center, Cloud Security Scanner, Identity and Access Management, and VPC security controls
- Security tools & platforms: hands-on experience with enterprise security solutions, including endpoint protection (Symantec Endpoint Protection & Encryption), vulnerability management (Tenable Nessus, penetration testing frameworks), security testing (Kali Linux, Burp Suite, OWASP methodologies), and cloud security (CrowdStrike, Snyk, Wiz, Prisma Cloud, SentinelOne)
- Container & Kubernetes security: advanced proficiency in securing containerized environments, including RBAC, network policies, admission controllers, and Pod Security Standards
- Programming & scripting: strong development skills in Python, Bash, Go, and Infrastructure as Code tools (Terraform, CloudFormation)
- Authentication & authorization: in-depth understanding of modern identity protocols including OAuth2, OpenID Connect (OIDC), SAML, and zero-trust architectures

Security Specializations
- Incident response: proven experience leading complex security incident investigations and coordinating response activities
- Threat intelligence: experience with threat hunting, malware analysis, and leveraging threat intelligence platforms
- DevSecOps integration: hands-on experience integrating security tools into CI/CD pipelines and implementing security-as-code practices
- Security architecture: experience designing secure system architectures and implementing defense-in-depth strategies

Professional Certifications
Required certifications (minimum 2 of the following): AWS Certified Security - Specialty; Google Professional Cloud Security Engineer; Certified Information Systems Security Professional (CISSP); Certified Kubernetes Security Specialist (CKS); Certified Information Security Manager (CISM); CompTIA Security+; Certified Ethical Hacker (CEH)

Leadership & Communication Skills
- Proven ability to lead security initiatives and mentor junior security professionals
- Experience with crisis management and executive-level security reporting
- Strong written and verbal communication skills for technical and non-technical audiences
- Ability to work independently while collaborating effectively across cross-functional teams
- Experience with security awareness training and building security culture

Posted 1 week ago

Apply

10.0 - 15.0 years

12 - 17 Lacs

Gurugram

Work from Office

About Syfe
Syfe is a digital investment platform with a mission to empower people to grow their wealth for a better future. Built on the pillars of advice, access and innovation, we cater to the full spectrum of an individual's wealth needs across diversified proprietary portfolios, cash management solutions and a state-of-the-art brokerage. The Syfe team combines world-class financial expertise with best-in-class technology talent. Excellence in execution is in our DNA, and we offer equity ownership to all employees regardless of seniority and designation. We are regulated by the financial authorities across Singapore, Hong Kong and Australia. In Singapore alone, where we are headquartered, over 250,000 investors trust Syfe to grow their wealth. Since its founding, Syfe has raised US$132 million from world-class investors. The company has won multiple awards, including Wealth Management Fintech of the Year at the Asian Banking and Finance Awards, and has been recognized as one of the Top LinkedIn Startups in Singapore.

About the Role
As a Principal Engineer at Syfe, you'll operate as a technical thought leader and hands-on architect, partnering with engineering leaders and cross-functional stakeholders to solve some of the hardest problems in the fintech space. This role is ideal for engineers who love systems thinking, sweat over design docs, and are obsessed with building resilient, scalable platforms that stand the test of time.

Key Responsibilities
- Drive Architectural Vision: define and evolve architectural patterns across teams. Ensure choices today won't become regrets tomorrow by designing with extensibility and scale in mind.
- Solve for Complexity, Not Simplicity: tackle systemic and ambiguous problems across domains, from low-latency order execution to real-time data processing and observability.
- Technical Leadership Without Authority: influence tech decisions across pods without needing direct reporting lines. Provide architectural reviews, hands-on POCs, and be a trusted voice across teams.
- Raise the Engineering Bar: mentor senior engineers, guide them on trade-offs, and instill engineering rigor across the organization. You're not just writing code, you're multiplying impact.
- Strategic Alignment: collaborate with Product, Business, and Engineering stakeholders to align long-term tech investments with business bets.
- Champion Engineering Excellence: continuously improve engineering practices, from CI/CD and infra as code to observability and incident response.

What We're Looking For
- 10+ years of hands-on experience in backend/platform engineering, with a track record of driving architectural decisions at org-level scale
- Expertise in designing and operating distributed systems (think microservices, event-driven architectures, or CQRS patterns)
- Experience with multi-region HA, horizontal scaling, and managing technical debt pragmatically
- Proven ability to navigate trade-offs in system design, security, data consistency, and latency
- You've led cross-functional initiatives involving multiple pods/teams and seen them through from conception to production
- Strong programming expertise in one or more languages, e.g., Go, Java, Kotlin, or similar
- Curious, low-ego, and outcome-driven; you make others better just by working with you

Bonus Points
- Fintech domain experience (wealth, trading, payments, etc.)
- Experience with Kubernetes, Kafka, and modern cloud-native infra stacks

The Syfe Advantages
- Annual learning allowance for work-related online courses and books
- Annual recreational allowance
- Allowance for home-office setup
- Latest M1 MacBook Pro, plus hardware and software as required
- Medical insurance
- Best of all, our specialty is helping people manage their money; we will help you learn how to manage your own money like a pro
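The "event-driven architectures" mentioned in this role can be loosely sketched as an in-memory publish/subscribe bus. This is an illustrative stand-in only; production systems would use a broker such as Kafka, which the posting also names, and the topic and event shapes below are invented.

```python
# Illustrative sketch: decoupled producers and consumers on a tiny
# in-memory event bus, the core shape of an event-driven system.
from collections import defaultdict

class EventBus:
    """Tiny in-memory stand-in for a broker: producers publish to a
    topic, and every subscriber of that topic receives the event."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)

# Two independent consumers react to the same order event without
# knowing about each other - the producer only knows the topic name.
bus = EventBus()
filled = []
audit_log = []
bus.subscribe("orders.filled", lambda e: filled.append(e["id"]))
bus.subscribe("orders.filled", lambda e: audit_log.append(("FILL", e["id"])))
bus.publish("orders.filled", {"id": "ord-42", "qty": 100})
```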

Posted 1 week ago

Apply

8.0 - 15.0 years

10 - 15 Lacs

Pune

Work from Office

Cohesity is the leader in AI-powered data security. Over 13,600 enterprise customers, including over 85 of the Fortune 100 and nearly 70% of the Global 500, rely on Cohesity to strengthen their resilience while providing Gen AI insights into their vast amounts of data. Formed from the combination of Cohesity with Veritas enterprise data protection business, the company s solutions secure and protect data on-premises, in the cloud, and at the edge. Backed by NVIDIA, IBM, HPE, Cisco, AWS, Google Cloud, and others, Cohesity is headquartered in Santa Clara, CA, with offices around the globe. We ve been named a Leader by multiple analyst firms and have been globally recognized for Innovation, Product Strength, and Simplicity in Design , and our culture . Want to join the leader in AI-powered data security Join a team shaping the future of global-ready infrastructure software blending deep system engineering with advanced localization, automation, and AI. We partner across products, QA, content, and engineering to deliver technically robust, culturally accurate solutions at scale. From global cloud releases to region-specific UX validation, we lead in globalization (G11n), localization (L10n), and intelligent automation. If youre driven to scale quality globally through modern engineering, this is your team. We re seeking a Senior SDET with deep expertise in Storage, Networking, Virtualization, and Cloud plus hands-on experience in Python automation, AI/ML, and Localization Engineering. In this strategic role, you ll lead efforts to embed robust globalization (G11n) and localization (L10n) into enterprise-scale infrastructure products. Youll architect solutions, drive engineering best practices, and ensure international readiness across cloud-native systems. An ideal fit for those who excel at the intersection of deep tech, intelligent automation, and global user experience. 
HOW YOU LL SPEND YOUR TIME HERE Lead the technical design and automation of globalized and localized systems for Storage, Backup, Virtualization, and Cloud platforms. Partner with cross-functional teams including Product, QA, DevOps, and Localization teams to integrate internationalization (i18n) and localization early into the software development lifecycle. Drive implementation of Python-based test automation for localization validation, AI-driven content verification, and workflow optimization. Provide technical leadership for integrating AI/ML models into localization quality workflows, including content extraction, translation validation, and context-based language improvements. Guide teams in implementing virtualized test environments for simulating geo-localized behaviors across regions and languages. Drive strategy and execution for global release-readiness of infrastructure products, ensuring alignment with market-specific requirements. Collaborate with Localization QA teams (LQA, GLQA) to build automated pipelines for end-to-end localization testing in client/vendor environments. Mentor junior engineers and act as a technical escalation point for localization automation and cloud-based testing infrastructure. Continuously monitor performance of localization systems, optimize test coverage, and provide insight through data analytics and reporting. Evaluate and integrate emerging technologies and AI-based localization platforms (LLMs, machine translation, etc.) into engineering pipelines. WE D LOVE TO TALK TO YOU IF YOU HAVE MANY OF THE FOLLOWING Bachelor s or Master s degree in Computer Science, Engineering, or related technical field. 8 15+ years of experience in systems-level engineering with a focus on Storage, Networking, Virtualization, or Cloud technologies. Strong programming expertise in Python, with a background in test automation and scripting for infrastructure systems. 
Hands-on experience with AI/ML frameworks (e.g., spaCy, OpenAI, Transformers) and their application in localization workflows. Solid understanding of G11n, L10n, and i18n principles and industry-standard localization workflows. Experience with CI/CD pipelines, cloud infrastructure (AWS, Azure, or GCP), and virtualization platforms (VMware, KVM, Hyper-V). Familiarity with tools such as Robot Framework, Selenium, Postman, REST Assured, Docker, Kubernetes, Terraform, or Ansible. Proven ability to lead cross-functional technical initiatives, drive architectural discussions, and influence product globalization strategies. Strong communication and collaboration skills, with the ability to work across global teams and stakeholders. Knowledge of or interest in multilingual content, voice interfaces, and localized UX testing is a plus. Fluency in English; knowledge of another foreign language (Japanese, French, Chinese) is an added advantage. Collaborates cross-functionally; leadership and mentorship experience; drives go/no-go decisions based on release quality; anticipates risks and comes up with mitigation plans. Data Privacy Notice for Job Candidates: For information on personal data processing, please see our Privacy Policy. Equal Employment Opportunity Employer (EEOE): Cohesity is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, national origin or nationality, ancestry, age, disability, gender identity or expression, marital status, veteran status or any other category protected by law. If you are an individual with a disability and require a reasonable accommodation to complete any part of the application process, or are limited in the ability or unable to access or use this online application process and need an alternative method, please contact us for assistance. In-Office Expectations: Cohesity employees who are within a reasonable commute (e.g., within a forty-five (45) minute average travel time) work out of our core offices 2-3 days a week of their choosing.
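Localization checks like the ones described above are commonly automated with lightweight string validation. A minimal sketch in Python, assuming a hypothetical `{placeholder}` message format (the function and rules are illustrative, not Cohesity tooling):

```python
# Hypothetical sketch of a localization (L10n) validation check: verify that
# translated UI strings preserve placeholders and fit length constraints.
import re

PLACEHOLDER = re.compile(r"\{[a-z_]+\}")  # e.g. "{count}"-style placeholders

def validate_translation(source: str, translated: str, max_expansion: float = 1.4):
    """Return a list of issues found when comparing a source string
    against its translation; an empty list means the pair looks OK."""
    issues = []
    # Placeholders must survive translation untouched.
    if sorted(PLACEHOLDER.findall(source)) != sorted(PLACEHOLDER.findall(translated)):
        issues.append("placeholder mismatch")
    # Translations that grow too much often break fixed-width UI layouts.
    if source and len(translated) > max_expansion * len(source):
        issues.append("excessive expansion")
    return issues

# Example: a German translation that dropped the {count} placeholder.
print(validate_translation("{count} files protected", "Dateien geschuetzt"))
# → ['placeholder mismatch']
```

A real pipeline would layer checks like these into CI alongside LQA review and the AI-driven content verification the posting mentions.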

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 11 Lacs

Hyderabad

Work from Office

About the Role As a Data Engineer, you will design, build, and enhance data pipelines and platforms supporting the One Platform/RDS team. You will be responsible for developing scalable data solutions, optimizing data workflows, and ensuring data quality and governance. You will collaborate with cross-functional teams to understand data requirements and deliver robust data infrastructure that supports analytics and business intelligence. You will also contribute to the continuous improvement of data engineering practices and mentor junior team members. About the Team Our team is the One Platform/RDS team, responsible for building and maintaining data platforms and services that support enterprise-wide data needs. We are passionate about data and technology, and we work closely with stakeholders across the organization to deliver high-quality data solutions. As part of a global team, you will have the opportunity to work with diverse technologies and contribute to strategic data initiatives. About You To succeed in this role, you will possess: 5+ years of experience in data engineering with strong proficiency in Python, SQL, Spark. Hands-on experience with cloud platforms such as Azure, AWS, or GCP, with a preference for Azure Data Services. Experience with Databricks, Delta Lake, and building ETL/ELT pipelines for large-scale data processing. Proficiency in data modeling, data warehousing, and performance tuning. Familiarity with data governance, data quality frameworks, and metadata management. Experience with containerization and orchestration tools such as Docker and Kubernetes. Strong understanding of CI/CD pipelines and DevOps practices for data engineering. Ability to collaborate with data scientists, analysts, and business stakeholders to deliver data solutions. Strong analytical and problem-solving skills with a focus on delivering business value. Excellent communication and presentation skills with proficiency in English. 
Understanding of insurance/reinsurance business is an advantage. Comfort working in a fast-paced, ambiguous environment with a focus on outcomes. About Swiss Re Swiss Re is one of the world's leading providers of reinsurance, insurance and other forms of insurance-based risk transfer, working to make the world more resilient. We anticipate and manage a wide variety of risks, from natural catastrophes and climate change to cybercrime. We cover both Property & Casualty and Life & Health. Combining experience with creative thinking and cutting-edge expertise, we create new opportunities and solutions for our clients. This is possible thanks to the collaboration of more than 14,000 employees across the world. If you are an experienced professional returning to the workforce after a career break, we encourage you to apply for open positions that match your skills and experience. Keywords: Reference Code: 134820
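Data-quality gates of the kind this role owns can be sketched in a few lines. A hypothetical example in plain Python (column names and rules are invented for illustration; a production pipeline would typically express these as Spark/Delta expectations):

```python
# Illustrative sketch of a row-level data-quality gate applied before
# loading data into a warehouse; field names are hypothetical.
def quality_check(rows, required=("policy_id", "premium")):
    """Split rows into (valid, rejected) based on simple rules:
    required fields present and premium non-negative."""
    valid, rejected = [], []
    for row in rows:
        if any(row.get(col) is None for col in required):
            rejected.append((row, "missing required field"))
        elif row["premium"] < 0:
            rejected.append((row, "negative premium"))
        else:
            valid.append(row)
    return valid, rejected

batch = [
    {"policy_id": "P1", "premium": 1200.0},
    {"policy_id": None, "premium": 80.0},
    {"policy_id": "P3", "premium": -5.0},
]
valid, rejected = quality_check(batch)
print(len(valid), len(rejected))  # 1 valid row, 2 rejected
```

In practice the rejected rows would be routed to a quarantine table with their rejection reasons for the data-governance workflow described above.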

Posted 1 week ago

Apply

5.0 - 8.0 years

30 - 35 Lacs

Chennai

Work from Office

We are seeking a skilled Backend Developer specializing in Java and Spring Boot to design, develop, and maintain scalable, high-performance backend services and APIs. As a key member of our engineering team, you will build robust backend solutions that support business-critical applications and ensure seamless integration with other systems and services. Qualifications: Education: Bachelor's or Master's degree in Computer Science, Information Systems, or related field. Experience: 5-8 years of hands-on experience in backend development with Java and Spring Boot. Technical Skills: Expertise in Java (Java 8+) and Spring Boot framework. Strong experience designing and building RESTful APIs and microservices architectures. Proven experience with Test-Driven Development (TDD) and automated testing frameworks. Knowledge of relational and NoSQL databases, ORM tools such as Hibernate. Familiarity with containerization (Docker), cloud platforms (preferably GCP), and infrastructure as code. Experience with security best practices (OAuth, JWT, encryption). Proficiency with CI/CD pipelines and code quality tools. Soft Skills: Strong problem-solving abilities, communication skills, and the ability to work collaboratively in a cross-functional team. Preferred: Experience with event-driven architecture and messaging systems (Pub/Sub). Familiarity with API documentation tools like Swagger/OpenAPI. Knowledge of monitoring and logging tools. Why You'll Excel: Impactful Work: Develop backend systems that drive core business operations and customer-facing services. Innovative Environment: Work with cutting-edge cloud and backend technologies in a forward-thinking, agile organization. Mentorship & Growth: Lead and grow within a talented team, sharing your expertise and learning continuously. Collaborative Culture: Work closely with product and engineering teams to deliver high-quality, scalable backend solutions using TDD and best practices. 
Key Responsibilities: Backend Solution Design & Development: Lead the design, development, and maintenance of RESTful APIs and microservices using Java and Spring Boot. Build scalable, secure, and performant backend systems to support complex business logic and data processing. Test-Driven Development (TDD): Apply TDD principles by writing automated unit and integration tests prior to development to ensure high code quality and reduce defects. Cloud & Infrastructure: Deploy and manage backend services on cloud platforms such as Google Cloud Platform (GCP) or similar, utilizing cloud-native technologies to ensure scalability and high availability. Cross-Functional Collaboration: Work closely with product owners, architects, and other engineering teams to understand requirements and design backend architectures that align with business goals. Quality & Testing: Develop and maintain automated test suites as part of continuous integration and delivery processes. Conduct code reviews to maintain quality standards. Mentorship & Knowledge Sharing: Guide and mentor junior backend developers on best practices for Java, Spring Boot, API design, TDD, and system optimization. Foster a culture of collaboration and continuous improvement. Agile Development: Actively participate in Agile ceremonies such as sprint planning, daily stand-ups, and retrospectives. Help the team adapt quickly to changing requirements and continuously improve development processes.
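The TDD practice this posting emphasizes follows the same red-green loop in any language. A minimal sketch, shown in Python for brevity (the role itself uses Java and JUnit): write the failing test first, then the smallest implementation that passes it.

```python
# The TDD workflow described above, sketched in Python for brevity
# (the role's actual stack is Java/Spring Boot with JUnit).

# Step 1: the test, written before any implementation exists.
def test_order_total():
    assert order_total([("widget", 2, 9.99), ("bolt", 10, 0.25)]) == 22.48

# Step 2: the minimal implementation driven by that test.
def order_total(lines):
    """Sum quantity * unit price over (sku, qty, price) lines, rounded to cents."""
    return round(sum(qty * price for _sku, qty, price in lines), 2)

test_order_total()  # passes once the implementation satisfies the test
```

The same discipline scales up: integration tests against a running Spring context are written before the controller or service code they exercise.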

Posted 1 week ago

Apply

6.0 - 8.0 years

20 - 25 Lacs

Chennai

Work from Office

Are you ready to make an impact at DTCC? Do you want to work on innovative projects, collaborate with a dynamic and supportive team, and receive investment in your professional development? At DTCC, we are at the forefront of innovation in the financial markets. We are committed to helping our employees grow and succeed. We believe that you have the skills and drive to make a real impact. We foster a thriving internal community and are committed to creating a workplace that looks like the world that we serve. Pay and Benefits: Competitive compensation, including base pay and annual incentive Comprehensive health and life insurance and well-being benefits, based on location Pension / Retirement benefits Paid Time Off and Personal/Family Care, and other leaves of absence when needed to support your physical, financial, and emotional well-being. DTCC offers a flexible/hybrid model of 3 days onsite and 2 days remote (onsite Tuesdays, Wednesdays and a third day unique to each team or employee). The Impact you will have in this role: At DTCC, the Observability team is at the forefront of ensuring the health, performance, and reliability of our critical systems and applications. We empower the organization with real-time visibility into infrastructure and business applications by leveraging cutting-edge monitoring, reporting, and visualization tools. Our team collects and analyzes metrics, logs, and traces using platforms like Splunk and other telemetry solutions. This data is essential for assessing application health and availability, and for enabling rapid root cause analysis when issues arise, helping us maintain resilience in a fast-paced, high-volume trading environment. If you're passionate about observability, data-driven problem solving, and building systems that make a real-world impact, we'd love to have you on our team. 
Primary Responsibilities: As a member of DTCC's Observability team, you will play a pivotal role in enhancing our monitoring and telemetry capabilities across critical infrastructure and business applications. Your responsibilities will include: Lead the migration from OpenText monitoring tools to Grafana and other open-source platforms. Design and deploy monitoring rules for infrastructure and business applications. Develop and manage alerting rules and notification workflows. Build real-time dashboards to visualize system health and performance. Configure and manage OpenTelemetry Collectors and Pipelines. Integrate observability tools with CI/CD, incident management, and cloud platforms. Deploy and manage observability agents across diverse environments. Perform upgrades and maintenance of observability platforms. Qualifications: Minimum of 6-8 years of related experience. Bachelor's degree preferred or equivalent experience. Talent needed for success: Proven experience designing intuitive, real-time dashboards (e.g., in Grafana) that effectively communicate system health, performance trends, and business KPIs. Expertise in defining and tuning monitoring rules, thresholds, and alerting logic to ensure accurate and actionable incident detection. Strong understanding of both application-level and operating system-level metrics, including CPU, memory, disk I/O, network, and custom business metrics. Experience with structured log ingestion, parsing, and analysis using tools like Splunk, Fluentd, or OpenTelemetry. Familiarity with implementing and analyzing synthetic transactions and real user monitoring to assess end-user experience and application responsiveness. Hands-on experience with application tracing tools and frameworks (e.g., OpenTelemetry, Jaeger, Zipkin) to diagnose performance bottlenecks and service dependencies. Proficiency in configuring and using AWS CloudWatch for collecting and visualizing cloud-native metrics, logs, and events. 
Understanding of containerized environments (e.g., Docker, Kubernetes) and how to monitor container health, resource usage, and orchestration metrics. Ability to write scripts or small applications in languages such as Python, Java, or Bash to automate observability tasks and data processing. Experience with automation and configuration management tools such as Ansible, Terraform, Chef, or SCCM to deploy and manage observability components at scale. Actual salary is determined based on the role, location, individual experience, skills, and other considerations. Please contact us to request accommodation.
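The alert-tuning work described above often reduces to logic like the following. A hedged sketch in Python, assuming a simple "N consecutive breaches" rule (thresholds and metric values are illustrative):

```python
# Hypothetical sketch of the alerting logic described above: fire an
# alert only when a metric breaches its threshold for N consecutive
# samples, which suppresses one-off spikes.
def evaluate(samples, threshold, consecutive=3):
    """Return True if `samples` contains `consecutive` breaches in a row."""
    run = 0
    for value in samples:
        run = run + 1 if value > threshold else 0
        if run >= consecutive:
            return True
    return False

cpu = [45, 91, 93, 40, 92, 95, 97]   # percent utilisation
print(evaluate(cpu, threshold=90))   # True: three consecutive samples > 90
```

Tools like Grafana express the same idea declaratively (a "for" duration on an alert rule); scripting it directly, as here, is how custom evaluation or backtesting of thresholds is typically automated.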

Posted 1 week ago

Apply

2.0 - 7.0 years

4 - 7 Lacs

Bengaluru

Work from Office

JD for SAP Basis. Role name: Consultant. Role Description: Good to have HANA experience. Working knowledge of the Linux operating system to handle SAP Basis and DB commands and administration. Experience in performing different types of client copies and related activities. Know-how of SAP add-on upgrades and Support Package upgrades. SAP system installation and end-to-end provisioning. Installing SAP systems on public cloud (Azure/GCP) with any DB is an added advantage. Experience in handling public and private cloud administration. Responsibilities / Expectations from the Role: 1. Creating, maintaining, testing, and debugging the entire back end of an application or system. 2. Collaborating effectively with product development and customer-facing teams to enable robust automation and seamless operations. 3. Handling outages and scheduled activities during shifts. 4. Researching, developing, and implementing new tools and processes to improve and automate operational support. 5. Owning and continuously improving our service offering. About the role: SAP stands for Systems, Applications & Products in Data Processing, while BASIS stands for Business Application Software Integrated Solution. Basis administrators are involved in maintenance, system upgrades, setting up system jobs, monitoring and analysing system logs, and other administrative activities. USP of the role and project: Candidates will work on the latest version of the SAP platform and will be able to learn and grow their careers by performing activities such as provisioning and deploying new systems and handling system-down scenarios with outage management. Competencies: Digital: SAP BASIS - HANA. Experience (Years): 6-8. Essential Skills: Must have HANA experience; otherwise as listed in the role description above. Desirable Skills: As listed in the role description above.

Posted 1 week ago

Apply

5.0 - 7.0 years

6 - 10 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

Join us in revolutionizing customer experiences with our client, a global leader in cloud contact center software. Our client brings the power of cloud innovation to enterprises worldwide, enabling businesses to deliver seamless, personalized, and delightful customer interactions. About the Project: This initiative is part of a next-generation digital engagement platform aimed at transforming how businesses connect with customers across multiple channels. The primary focus is the integration of Aqua, an advanced outbound communication solution, into our digital ecosystem. Aqua is widely used by healthcare providers, enterprises, and customer-centric organizations to deliver appointment reminders, test results, marketing campaigns, and personalized notifications while tracking user engagement in real time. The project is structured into three key phases: SMS channel integration, Email channel integration and WhatsApp channel integration. About the Team: We are assembling a dedicated Scrum team in India to collaborate closely with our 15-member Digital Team in Australia. To ensure smooth coordination and fast feedback loops, flexible working hours will be encouraged to create overlapping time with the Australian team. Responsibilities: Design, develop, and maintain Java Spring-based microservices deployed on Google Cloud Platform (GCP). Build and maintain RESTful APIs with a strong focus on scalability, reliability, and security. Develop integration layers for various communication channels including SMS, Email, and WhatsApp via third-party APIs. Optimize data processing and storage by leveraging GCP Datastore, BigQuery, and Cloud Storage (GCS buckets). Write efficient, reusable, and testable code adhering to best coding standards and design patterns (e.g., SOLID principles). Participate in code reviews, automated testing, and continuous integration pipelines to ensure high code quality and robustness. 
Participate in sprint planning, backlog refinement, and cross-team collaboration with the Australia-based digital team. 5+ years of experience with Java and the Spring Framework for building scalable backend services. Proven expertise working with Google Cloud Platform (GCP) services, including Datastore, BigQuery, Cloud Storage (GCS), and Pub/Sub. Sol
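Outbound channel integrations of this kind generally need retry handling around third-party API calls. A minimal Python sketch of retry with exponential backoff; the transport function is hypothetical and stands in for an SMS/Email/WhatsApp client, not Aqua's actual API:

```python
# Hypothetical retry-with-exponential-backoff wrapper of the kind an
# SMS/Email/WhatsApp integration layer typically needs.
import time

def send_with_retry(send, payload, attempts=4, base_delay=0.01):
    """Call `send(payload)`; on exception, retry with exponentially
    growing delays. Re-raise after the final failed attempt."""
    for attempt in range(attempts):
        try:
            return send(payload)
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Simulated flaky transport: fails twice, then succeeds.
calls = {"n": 0}
def flaky_send(payload):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "delivered"

print(send_with_retry(flaky_send, {"to": "+10000000000", "body": "hi"}))  # delivered
```

Real deployments would add jitter, retry only on retryable status codes, and publish failures to a dead-letter topic (e.g. via Pub/Sub) for later inspection.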

Posted 1 week ago

Apply

5.0 - 10.0 years

9 - 12 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

This is a remote position. Overview: We are seeking an experienced Insurance Domain Expert to lead data migration projects within our organization. The ideal candidate will have a deep understanding of the insurance industry, data management principles, and hands-on experience in executing successful data migration initiatives. Key Responsibilities: 1. Industry Expertise: - Provide insights into best practices within the insurance domain to ensure compliance and enhance data quality. - Stay updated on regulatory changes affecting the insurance industry that may impact data processing and migration. 2. Data Migration Leadership: - Plan, design, and implement comprehensive data migration strategies to facilitate smooth transitions between systems. - Oversee the entire data migration process, including data extraction, cleaning, transformation, and loading (ETL/ELT). 3. Collaboration and Communication: - Liaise between technical teams and business stakeholders to ensure alignment of migration objectives with business goals. - Prepare and present progress reports and analytical findings to management and cross-functional teams. 4. Risk Management: - Identify potential data migration risks and develop mitigation strategies. - Conduct thorough testing and validation of migrated data to ensure accuracy and integrity. 5. Training and Support: - Train team members and clients on new systems and data handling processes post-migration. - Provide ongoing support and troubleshooting for data-related issues. Requirements Qualifications: - Bachelor's degree in information technology, Computer Science, or a related field; advanced degree preferred. - Minimum of 5-10 years of experience in the insurance domain with a focus on data migration projects. - Strong knowledge of insurance products, underwriting, claims, and regulatory requirements. - Proficient in data migration tools and techniques, with experience in ETL processes. 
- Excellent analytical and problem-solving skills with a keen attention to detail. - Strong communication and presentation skills to interact with various stakeholders. Benefits Diversity & Inclusion: At Exavalu, we are committed to building a diverse and inclusive workforce. We welcome applications for employment from all qualified candidates, regardless of race, colour, gender, national or ethnic origin, age, disability, religion, sexual orientation, gender identity or any other status protected by applicable law. We foster a culture that values all individuals and promotes diverse perspectives, where you can make a meaningful impact and advance your career. Exavalu also promotes flexibility, depending on the needs of employees, customers, and the business. This may include part-time work, working outside normal 9-5 business hours, or working remotely. We also have a welcome-back program to help people return to the mainstream after a long break due to health or family reasons. Job Type: Full time
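The testing and validation responsibility above is often implemented as a source-vs-target reconciliation. A minimal sketch in Python with hypothetical field names (a real migration would also reconcile aggregates such as premium totals):

```python
# Illustrative post-migration reconciliation: compare source and target
# record sets by key and report missing / mismatched rows. Field names
# are hypothetical.
def reconcile(source, target, key="policy_id"):
    src = {r[key]: r for r in source}
    tgt = {r[key]: r for r in target}
    missing = sorted(set(src) - set(tgt))
    mismatched = sorted(k for k in set(src) & set(tgt) if src[k] != tgt[k])
    return {"missing": missing, "mismatched": mismatched}

source = [{"policy_id": "A", "premium": 100}, {"policy_id": "B", "premium": 200}]
target = [{"policy_id": "A", "premium": 100}, {"policy_id": "B", "premium": 250}]
print(reconcile(source, target))  # {'missing': [], 'mismatched': ['B']}
```

Reports like this give stakeholders a concrete accuracy-and-integrity measure for sign-off before the legacy system is retired.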

Posted 1 week ago

Apply

8.0 - 12.0 years

10 - 14 Lacs

Mumbai

Work from Office

Job Title: Principal Software Engineer Job Code: 10576 Country: IN City: Mumbai Skill Category: IT\Technology Description: Business Overview: The Wholesale Data & Operations Technology team in India is an integral part of the global team spread across all regions. The team is responsible for building and enhancing the Data Distribution Platform and provides 24/5 operational coverage to all regions across the globe. Position Specifications: Corporate Title: Associate. Functional Title: Principal Software Engineer. Experience: 8 to 12 years. Qualification: A degree. Requisition No. Role & Responsibilities: This is an individual contributor position. We're seeking an experienced Java Software Engineer to join our Wholesale Data and Operations Technology team. You'll be responsible for designing, developing, and maintaining our enterprise static and reference data distribution platform that handles over 300 million requests daily across our global infrastructure. 
Responsibilities: Design and implement scalable, high-throughput data processing systems capable of handling 300M+ daily requests with low-latency requirements. Optimize existing services to reduce response times and improve throughput in our distributed architecture. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions. Develop and maintain RESTful APIs and microservices that power our data distribution platform. Implement robust caching strategies to optimize data retrieval and system performance. Participate in architectural discussions and contribute to technical design decisions. Deliver high-quality code within the committed deadlines, adhering to best coding practices and reducing technical debt. Conduct code reviews and mentor junior developers on best practices. Troubleshoot and resolve complex production issues, with a focus on performance optimization. Work within an agile development environment, participating in sprint planning, stand-ups, and retrospectives. Collaborate with global team members across different time zones to ensure 24/7 system reliability. Lead our technical migration from Java 8 to Java 17 (and eventually Java 21), leveraging new language features to improve code quality and performance. Communicate effectively across technology and non-technology stakeholders to drive solutions. Learn and adopt evolving technology solutions to continue to deliver business value. Skill Set: Strong experience in developing enterprise-grade, highly scalable, and fault-tolerant distributed services using Java. Expert-level knowledge of multithreading techniques to optimize system performance. Strong experience in architecting distributed caching solutions to improve data retrieval and system efficiency. Experience in building applications using DevOps principles. Experienced with refactoring and re-engineering existing platforms as technologies advance. 
Identify areas for improvement and innovation within the development process. Java, Spring/Spring Boot, Hibernate, JPA, microservice architecture, REST. Distributed caching: Elasticsearch or Solr, Redis or GemFire (any 2). React JS, HTML, JavaScript, CSS. Microsoft SQL Server, Sybase. GitLab or Git/Stash, gitflow. Jenkins, Ansible, cloud application architecture, Kubernetes, CI/CD. Event-driven systems like Kafka.
Nomura Core Competencies:
Culture & Conduct (Building Nomura's Culture; Diversity & Inclusion; Professional Integrity; Self-Awareness): Contributes to desired culture; sets a positive example; aware of different values/styles; holds high standards of behaviour; aware of own strengths/weaknesses.
Client-Centricity & Business Acumen (Commerciality; Client-Centricity; Analytical Thinking & Problem Solving): Understands the current market; anticipates client needs; pays attention to detail; sees problems and recommends solutions.
Strategy & Innovation (Strategic Thinking & Change; Decision Making & Judgment; Agility): Balances alternative views; knows when to decide and when to escalate; champions new ideas; is both disciplined and entrepreneurial.
Leadership & Collaboration (Managing Talent; Recognizing and Motivating; Supporting, Developing & Collaborating with Others; Managing Conflict): Thinks differently; balances alternative views; knows when/how to compromise; learns from experience; seeks to develop.
Communication & Connectivity / Communication & Influence (Articulation & Receptiveness; Impact; Connectivity): Assists in recruiting; gives credit; builds productive working relationships; provides constructive, timely and specific feedback; adjusts style to suit the topic; balances listening and talking; communicates with clarity and consideration; is a proven and credible resource; questions to understand others' views; builds an internal contact network; willingly and effectively works across teams.
Execution & Delivery (Driving Performance; Execution Focus; Planning & Organizing; Adaptability): Demonstrates accountability/commitment; takes on challenging assignments; executes priority actions on time; keeps stakeholders updated; manages expectations; persists when confronted with resistance.
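The caching strategies named in the skill set can be illustrated with a small in-process example. A sketch in Python of a thread-safe TTL cache (a production reference-data platform would use a distributed cache such as Redis or GemFire, as the posting notes):

```python
# Minimal thread-safe TTL cache sketch illustrating the caching strategy
# described above; production systems would use a distributed cache
# rather than this in-process version.
import threading
import time

class TTLCache:
    def __init__(self, ttl_seconds: float):
        self._ttl = ttl_seconds
        self._store = {}            # key -> (value, expiry timestamp)
        self._lock = threading.Lock()

    def get(self, key, loader):
        """Return cached value for `key`, calling `loader()` on miss/expiry."""
        now = time.monotonic()
        with self._lock:
            entry = self._store.get(key)
            if entry and entry[1] > now:
                return entry[0]
        value = loader()            # load outside the lock
        with self._lock:
            self._store[key] = (value, now + self._ttl)
        return value

cache = TTLCache(ttl_seconds=60)
print(cache.get("ISIN:US0378331005", loader=lambda: "Apple Inc."))  # loads once
print(cache.get("ISIN:US0378331005", loader=lambda: "SHOULD NOT LOAD"))  # cached
```

Loading outside the lock keeps slow backend calls from blocking other readers, at the cost of occasional duplicate loads under contention, which is a common trade-off for read-heavy reference-data workloads.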

Posted 1 week ago

Apply

6.0 - 11.0 years

8 - 12 Lacs

Hyderabad

Work from Office

Job Description About the team The Transfer Agency is a division responsible for transaction operations, processing, and associated functions of mutual funds for various clients. At FIS we provide service to clients via various channels like transaction processing, chat, etc. The customer support may include, but is not limited to, account setup, shareholder data maintenance, and overall record keeping. What you will be doing Strategically focused and responsible for client satisfaction, maintaining client communication, the overall management of the client relationship, and the delivery of the outsourced solution. Serves as the primary management contact and client liaison during delivery of an outsourced solution, whether it is an IT solution or a business process outsourced solution and regardless of the client's geographic location. Maintains contact with the client at an executive level, focusing on the strategic nature of the relationship. Represents the enterprise to the client and the client to the enterprise. Responsible for client satisfaction, maintaining client communication, overall management of the client relationship and delivery of the outsourced solution. Oversees and leads teams in delivery of continuous and effective outsourced solutions and ensures project completion within budget and in accordance with contract requirements. Works to maintain and grow client relationships while ensuring ongoing customer service. Develops deep knowledge of FIS solutions and services and provides thought leadership. Manages technical engagement on projects and is responsible for oversight of vendors and subcontractors. Leads teams in the delivery of outsourced solutions to the strategic client. Selects, develops and evaluates personnel to ensure efficient operation of the function. 
Identifies areas where continuous improvement can be applied, implements the change, and measures the level of improvement. May provide guidance and/or mentoring to less experienced Customer Service Associates - Consumer. Other related duties assigned as needed. What You Bring Minimum of 6 years of experience in technology/financial services organizations. Proven ability to represent the enterprise's entire range of products to the client, and knowledge of the industry. Proven track record in client relationship management, service delivery and/or the sales of technology products and services. Financial institution experience or a comparable proven background with strong financial industry and data processing knowledge. Broad understanding of the financial and strategic aspects of the business; participates in and/or establishes initiatives that contribute to the overall success of the enterprise, and may also participate in initiatives that contribute to the overall success of the client's business. Excellent negotiation and presentation skills that ensure contract renewals, a track record of product and revenue growth, and high levels of customer satisfaction. Displays strong oral, written and interpersonal communication skills to effectively manage and/or implement all phases of projects and tasks within the enterprise and with its clients. 
Exhibits a high degree of initiative and analytical skill to handle and solve complex problems with minimal impact to the enterprise and its clients. Viewed as an expert resource by peers and coworkers; maintains a good working relationship with both internal and client management and has a thorough working knowledge of the enterprise. Demonstrates the ability to lead by example and motivate professional-level staff. Displays strong leadership qualities, decision-making abilities, and strong business judgment. Possesses strong personnel management skills. Qualifications: Graduate (science/analytics preferable)/MBA Added bonus if you have: Certification in delivery practice: PMI-PMP/SAFe Agile. Transfer Agency experience. Delivery management experience. What we offer you Working in an international company, alongside international colleagues. Being a part of an innovative and entrepreneurial environment of a growing department and team. Option to work fully remotely, with the necessary equipment provided by the company (computer, monitors, accessories). Development opportunities using the company's online training database and LinkedIn Learning. Unique working atmosphere (team integration meetings, friendly working environment, support of experienced employees). Opportunity to get involved in social projects and local initiatives. A broad range of professional education and personal development opportunities. A work environment built on collaboration and respect

Posted 1 week ago

Apply

3.0 - 6.0 years

8 - 12 Lacs

Bengaluru

Work from Office

In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage. The ideal candidate will need 3 to 6 years of experience working in data engineering with a strong focus on utilizing Databricks for data processing and analytics. This role will involve designing, building, and maintaining scalable data pipelines and solutions. Responsibilities Develop and implement data solutions using Databricks to support analytics and reporting needs. Collaborate with data scientists and analysts to understand data requirements and deliver solutions. Optimize data processes and pipelines for performance and scalability. Ensure data quality and integrity across various data sources. Troubleshoot and resolve any issues related to data processing. Stay updated with the latest trends and best practices in data engineering and Databricks. Qualifications Bachelor's degree in Computer Science, Information Technology, or a related field. 3 to 6 years of experience in data engineering or a similar role. Proven experience with Databricks in a production environment. Strong hands-on working knowledge of Azure Data Factory. Strong knowledge of data processing frameworks such as Spark. Proficiency in programming languages such as Python and Scala. Hands-on proficiency in SQL. Experience with cloud platforms (e.g. AWS, Azure, or Google Cloud). Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills. 
Mandatory skill sets: Data Engineer, Databricks, Azure Data Factory, Python, Scala, SQL. Preferred skill sets: Cloud platforms. Years of experience required: 3 to 8 years. Education qualification: Graduate Engineer or Management Graduate. Degrees/Field of Study required: Bachelor Degree, Master Degree. Required Skills: Cloud Platform. Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more}

Posted 1 week ago

Apply

4.0 - 7.0 years

7 - 11 Lacs

Bengaluru

Work from Office

Specialism: Data, Analytics & AI. Management Level: Senior Associate. Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Why PwC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. Responsibilities 4-7 years of experience in Data Engineering or related roles. Hands-on experience in Microsoft Fabric. Hands-on experience in Azure Databricks. Proficiency in PySpark for data processing and scripting. Strong command of Python & SQL: writing complex queries, performance tuning, etc. Experience working with Azure Data Lake Storage and data warehouse concepts (e.g., dimensional modeling, star/snowflake schemas). Hands-on experience in performance tuning & optimization on Databricks & MS Fabric. Ensure alignment with overall system architecture and data flow. Understanding of CI/CD practices in a data engineering context. Excellent problem-solving and communication skills. 
Exposure to BI tools like Power BI, Tableau, or Looker. Good to have: Experience in Azure DevOps. Familiarity with data security and compliance in the cloud. Experience with different databases like Synapse, SQL DB, Snowflake, etc. Mandatory skill sets: Microsoft Fabric, Azure (Databricks & ADF), PySpark. Preferred skill sets: Microsoft Fabric, Azure (Databricks & ADF), PySpark. Years of experience required: 4-10. Education qualification: B.Tech/MBA/MCA. Degrees/Field of Study required: Bachelor of Engineering, Master of Business Administration. Required Skills: Microsoft Azure. Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}
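The posting above asks for data warehouse concepts such as dimensional modeling and star/snowflake schemas. As a rough illustration of what a star schema means in practice (table and column names are invented, and SQLite stands in for a real warehouse engine like Synapse or Snowflake), a central fact table is joined to a dimension table and aggregated by a dimension attribute:

```python
import sqlite3

# In-memory database with one fact table and one dimension table (a minimal star schema).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales  (sale_id INTEGER PRIMARY KEY,
                              product_id INTEGER REFERENCES dim_product(product_id),
                              amount REAL);
    INSERT INTO dim_product VALUES (1, 'books'), (2, 'games');
    INSERT INTO fact_sales  VALUES (1, 1, 12.5), (2, 1, 7.5), (3, 2, 30.0);
""")

# Typical warehouse query shape: join fact to dimension, aggregate by the attribute.
rows = con.execute("""
    SELECT p.category, SUM(f.amount)
    FROM fact_sales f JOIN dim_product p USING (product_id)
    GROUP BY p.category ORDER BY p.category
""").fetchall()
print(rows)  # [('books', 20.0), ('games', 30.0)]
```

A snowflake schema extends this by further normalizing the dimension tables themselves.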

Posted 1 week ago

Apply

4.0 - 9.0 years

6 - 16 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

- Collect, organize, clean and preprocess data from various sources, including databases, spreadsheets, and APIs to ensure accuracy and reliability. - Perform exploratory data analysis to identify trends, patterns, and correlations. Required Candidate profile - 4+ years of experience working as Data Analyst or in similar role. - Proficiency in SQL, Python, R or other programming languages used for data analysis. - Experience with data visualization tools

Posted 1 week ago

Apply

0.0 - 3.0 years

1 - 2 Lacs

Noida

Work from Office

About Company: MahaVastu Remedies is a leading retail provider of Vastu Shastra solutions, offering high-quality products and expert guidance to create harmonious homes and workplaces. With a team of intuitive Acharyas, we transform spaces into hubs of success. We foster a collaborative and growth-oriented work environment, valuing innovation and employee development. Job Title: Data Entry Operator Location: MahaVastu Remedies, F-321, F Block, Sector 63, Noida, Uttar Pradesh, 201301 Key Responsibilities: Maintain and update daily order processing trackers using Excel/Google Sheets. Enter accurate data in Excel sheets to support order and inventory management. Coordinate with Warehouse, Logistics, and Customer Support teams for smooth order flow. Communicate with 3PL partners for managing dispatches and deliveries. Monitor and escalate issues related to order delays, product damage, or returns. Key Skills: Proficient in MS Excel / Google Sheets (VLOOKUP, Pivot Tables, etc.) Strong attention to detail and ability to manage large data sets accurately.

Posted 1 week ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Bengaluru

Work from Office

We are seeking a skilled and motivated Data Engineer to join our dynamic technology team. The ideal candidate will have a strong background in data processing, cloud computing, and software development, with hands-on experience in Python, PySpark, Java, and Microsoft Azure. You will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure to support advanced analytics and data science initiatives. Key Responsibilities: Design, develop, and maintain robust data pipelines using PySpark, Python, and Java. Implement and manage data workflows on Microsoft Azure and other public cloud platforms. Collaborate with data scientists, analysts, and IT operations to ensure seamless data integration and availability. Optimize data systems for performance, scalability, and reliability. Ensure data quality, governance, and security across all data platforms. Support DevOps practices for continuous integration and deployment of data solutions. Monitor and troubleshoot data infrastructure and resolve system issues. Document processes and maintain data architecture standards. Required Skills & Qualifications: Bachelor's or Master's degree in Computer Science, Information Technology, or related field. 3+ years of experience in data engineering, software development, or IT operations. Proficiency in Python, PySpark, and Java. Experience with cloud computing platforms, especially Microsoft Azure. Strong understanding of data management, data processing, and data analysis. Familiarity with multi-paradigm programming and modern software development practices. Knowledge of DevOps tools and methodologies. Experience with system administration and cloud providers. Excellent problem-solving and communication skills. Preferred Qualifications: Certifications in Azure, Python Data Science, or related technologies. Experience with public cloud environments like AWS or GCP. Familiarity with big data tools and frameworks. 
Exposure to data science workflows and tools.
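The pipeline responsibilities above (ingest, validate, load, monitor) can be sketched as composable stages. This is a toy, pure-Python illustration with invented data and function names; a production pipeline would use PySpark and Azure services as the posting describes:

```python
# Each stage is a plain function, so stages can be tested in isolation and recombined.
def extract():
    # Stand-in for reading from a source system (database, API, landing zone).
    return [{"id": 1, "temp_c": "21.5"}, {"id": 2, "temp_c": "bad"}, {"id": 3, "temp_c": "19.0"}]

def transform(rows):
    # Validate and cast; route unparseable rows to a reject list instead of failing the run.
    good, rejected = [], []
    for row in rows:
        try:
            good.append({"id": row["id"], "temp_c": float(row["temp_c"])})
        except ValueError:
            rejected.append(row)
    return good, rejected

def load(rows, sink):
    # Stand-in for writing to a warehouse table.
    sink.extend(rows)

warehouse = []
good, rejected = transform(extract())
load(good, warehouse)
print(len(warehouse), len(rejected))  # 2 1
```

Routing bad records to a reject list rather than raising is one common data-quality pattern; real pipelines would also log and alert on the rejects.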

Posted 1 week ago

Apply

5.0 - 10.0 years

10 - 14 Lacs

Bengaluru

Work from Office

Senior Machine Learning Engineer - Recommender Systems Join our team at Thomson Reuters and contribute to the global knowledge economy. Our innovative technology influences global markets and supports professionals worldwide in making pivotal decisions. Collaborate with some of the brightest minds on diverse projects to craft next-generation solutions that have a significant impact. As a leader in providing intelligent information, we value the unique perspectives that foster the advancement of our business and your professional journey. Are you excited about the opportunity to leverage your extensive technical expertise to guide a development team through the complexities of full life cycle implementation at a top-tier company? Our Commercial Engineering team is eager to welcome a skilled Senior Machine Learning Engineer to our established global engineering group. We're looking for someone enthusiastic, an independent thinker who excels in a collaborative environment across various disciplines and is at ease interacting with a diverse range of individuals and technology stacks. This is your chance to make a lasting impact by transforming customer interactions as we develop the next generation of an enterprise-wide experience. About the Role: As a Senior Machine Learning Engineer, you will: Spearhead the development and technical implementation of machine learning solutions, including configuration and integration, to fulfill business, product, and recommender system objectives. Create machine learning solutions that are scalable, dependable, and secure. Craft and sustain technical outputs such as design documentation and representative models. Contribute to the establishment of machine learning best practices, technical standards, model designs, and quality control, including code reviews. Provide expert oversight, guidance on implementation, and solutions for technical challenges. 
Collaborate with an array of stakeholders, cross-functional and product teams, business units, technical specialists, and architects to grasp the project scope, requirements, solutions, data, and services. Promote a team-focused culture that values information sharing and diverse viewpoints. Cultivate an environment of continual enhancement, learning, innovation, and deployment. About You: You are an excellent candidate for the role of Senior Machine Learning Engineer if you possess: At least 5 years of experience in addressing practical machine learning challenges, particularly with recommender systems, to enhance user efficiency, reliability, and consistency. A profound comprehension of data processing, machine learning infrastructure, and DevOps/MLOps practices. A minimum of 2 years of experience with cloud technologies (AWS is preferred), including services, networking, and security principles. Direct experience in machine learning and orchestration, developing intricate multi-tenant machine learning products. Proficient Python programming skills, SQL, and data modeling expertise, with DBT considered a plus. Familiarity with Spark, Airflow, PyTorch, Scikit-learn, Pandas, Keras, and other relevant ML libraries. Experience in leading and supporting engineering teams. Robust background in crafting data science and machine learning solutions. A creative, resourceful, and effective problem-solving approach. #LI-FZ1 What's in it For You: Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected. Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. 
This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance. Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future. Industry Competitive Benefits: We offer comprehensive benefit plans including flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing. Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together. Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives. Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world. Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. 
Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world-leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
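The role above centers on recommender systems. As a hedged, plain-Python sketch of one classic approach, item-based collaborative filtering with cosine similarity (all user names and ratings here are invented; a production system would use the Spark/PyTorch stack the posting lists):

```python
import math

# Hypothetical user -> {item: rating} interactions.
ratings = {
    "u1": {"a": 5.0, "b": 4.0},
    "u2": {"a": 4.0, "b": 5.0, "c": 1.0},
    "u3": {"b": 2.0, "c": 5.0},
}

def item_vector(item):
    # One item's ratings across all users, in a fixed user order (missing -> 0).
    return [ratings[u].get(item, 0.0) for u in sorted(ratings)]

def cosine(x, y):
    dot = sum(a * b for a, b in zip(x, y))
    norm = math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y))
    return dot / norm if norm else 0.0

def recommend(user, k=1):
    # Score unseen items by similarity to the user's highest-rated item.
    seen = set(ratings[user])
    items = {i for r in ratings.values() for i in r}
    anchor = max(ratings[user], key=ratings[user].get)
    scores = {i: cosine(item_vector(anchor), item_vector(i)) for i in items - seen}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("u1"))  # ['c'] - the only item u1 hasn't rated
```

Real systems score against all of a user's rated items (weighted by rating), precompute item-item similarities offline, and serve top-k from an index.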

Posted 1 week ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Develop/enhance data warehousing functionality, including the use and management of the Snowflake data warehouse and the surrounding entitlements, pipelines and monitoring, in partnership with Data Analysts and Architects and with guidance from the lead Data Engineer. About the Role In this opportunity as Data Engineer, you will: Develop/enhance data warehousing functionality, including the use and management of the Snowflake data warehouse and the surrounding entitlements, pipelines and monitoring, in partnership with Data Analysts and Architects and with guidance from the lead Data Engineer. Innovate with new approaches to meeting data management requirements. Effectively communicate and liaise with other data management teams embedded across the organization and with data consumers in data science and business analytics teams. Analyze existing data pipelines and assist in enhancing and re-engineering the pipelines as per business requirements. Bachelor's degree or equivalent required; Computer Science or related technical degree preferred. About You You're a fit for the role if your background includes: Mandatory skills: Data Warehousing, data models, data processing (good to have), SQL, Power BI / Tableau, Snowflake (good to have), Python. 3.5+ years of relevant experience in implementation of data warehouses and data management of data technologies for large-scale organizations. Experience in building and maintaining optimized and highly available data pipelines that facilitate deeper analysis and reporting. Worked on analyzing data pipelines. Knowledgeable about data warehousing, including data models and data processing. Broad understanding of the technologies used to build and operate data and analytic systems. Excellent critical thinking, communication, presentation, documentation, troubleshooting and collaborative problem-solving skills. Beginner to intermediate knowledge of AWS, Snowflake, Python. Hands-on experience with programming and scripting languages. Knowledge of and hands-on experience 
with Data Vault 2.0 is a plus. Also have experience in and comfort with some of the following skills/concepts: Good in writing SQL and performance tuning. Data integration tools like DBT, Informatica, etc. Intermediate in a programming language like Python/PySpark/Java/JavaScript. AWS services and management, including Serverless, Container, Queueing and Monitoring services. Consuming and building APIs. #LI-SM1

Posted 1 week ago

Apply

2.0 - 5.0 years

7 - 12 Lacs

Noida

Work from Office

6-month contract (extendable) Location: Noida Mode: Hybrid; the candidate needs to travel to the office 4 days a month. 2+ years of experience in Data Operations. BGV (background verification) is mandatory.

Posted 1 week ago

Apply

0.0 - 4.0 years

2 - 6 Lacs

Gurugram

Work from Office

At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express. With a focus on digitization, innovation, and analytics, the Enterprise Digital teams create central, scalable platforms and customer experiences to help markets across all of these priorities. The charter is to drive scale for the business and accelerate innovation for both immediate impact and long-term transformation of our business. A unique aspect of Enterprise Digital Teams is the integration of diverse skills across its remit. Enterprise Digital Teams has a very broad range of responsibilities, resulting in a broad range of initiatives around the world. The American Express Enterprise Digital Experimentation & Analytics (EDEA) team leads the Enterprise Product Analytics and Experimentation charter for Brand & Performance Marketing and Digital Acquisition & Membership experiences as well as Enterprise Platforms. The focus of this collaborative team is to drive growth by enabling efficiencies in paid performance channels and evolving our digital experiences with actionable insights & analytics. The team specializes in using data around digital product usage to drive improvements in the acquisition customer experience to deliver higher satisfaction and business value. About this Role: This role will report to the Manager of the Membership Experience Analytics team within Enterprise Digital Experimentation & Analytics (EDEA) and will be based in Gurgaon. 
The candidate will be responsible for delivery of highly impactful analytics to optimize our Digital Membership experiences across Web & App channels. Deliver strategic analytics focused on Digital Membership experiences across Web & App aimed at optimizing our customer experiences. Define and build key KPIs to monitor the acquisition journey's performance and success. Support the development of new products and capabilities. Deliver read-outs of experiments, uncovering insights and learnings that can be utilized to further optimize the customer journey. Gain a deep functional understanding of the enterprise-wide product capabilities and associated platforms over time and ensure analytical insights are relevant and actionable. Power in-depth strategic analysis and provide analytical and decision support by mining digital activity data along with AXP closed-loop data. Minimum Qualifications Advanced degree in a quantitative field (e.g. Finance, Engineering, Mathematics, Computer Science). Strong programming skills are preferred. Some experience with Big Data programming languages (Hive, Spark), Python, SQL. Experience in large-scale data processing and handling; an understanding of data science is a plus. Ability to work in a dynamic, cross-functional environment, with strong attention to detail. Excellent communication skills with the ability to engage, influence, and encourage partners to drive collaboration and alignment. Preferred Qualifications Strong analytical/conceptual thinking competence to solve unstructured and complex business problems and articulate key findings to senior leaders/partners in a succinct and concise manner. Basic knowledge of statistical techniques for experimentation & hypothesis testing: regression, t-test, chi-square test.
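The qualifications above mention hypothesis testing for experiment read-outs, including the t-test. As a minimal sketch of the underlying arithmetic, Welch's t statistic computed from scratch with the standard library (the sample values are invented; in practice a library such as SciPy would be used, and a p-value derived from the statistic and degrees of freedom):

```python
import math
import statistics

# Hypothetical page-load times (seconds) for a control group and a test variant.
control = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3]
variant = [11.2, 11.0, 11.5, 11.1, 11.3, 10.9]

def welch_t(x, y):
    # Welch's t statistic: difference of means scaled by the combined standard error.
    se = math.sqrt(statistics.variance(x) / len(x) + statistics.variance(y) / len(y))
    return (statistics.mean(x) - statistics.mean(y)) / se

t = welch_t(control, variant)
print(f"t = {t:.2f}")
```

A large |t| relative to the relevant t distribution indicates the group means likely differ; experiment platforms automate this comparison per KPI.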

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies