
4069 Hadoop Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

1.0 - 4.0 years

1 - 3 Lacs

Salem, Edappadi, Erode

Work from Office

Source: Naukri

We are looking for a highly skilled and experienced Branch Receivable Officer to join our team at Equitas Small Finance Bank. The ideal candidate will have 1-4 years of experience in the BFSI industry, preferably with a background in Assets, Inclusive Banking, SBL, Mortgages, or Receivables.

Roles and Responsibilities: Manage and oversee branch receivables operations for timely and accurate payments. Develop and implement strategies to improve receivables management and reduce delinquencies. Collaborate with cross-functional teams to resolve customer complaints and issues. Analyze and report on receivables performance metrics to senior management. Ensure compliance with regulatory requirements and internal policies. Maintain accurate records and reports of receivables transactions.

Job Requirements: Strong knowledge of BFSI industry trends and regulations. Experience in managing assets, inclusive banking, SBL, mortgages, or receivables. Excellent communication and interpersonal skills. Ability to work in a fast-paced environment and meet deadlines. Strong analytical and problem-solving skills. Proficiency in Microsoft Office and other relevant software applications.

Posted 1 hour ago

Apply

1.0 - 4.0 years

1 - 3 Lacs

Puducherry, Mayiladuthurai, Karaikal

Work from Office

Source: Naukri

We are looking for a highly skilled and experienced Branch Receivable Officer to join our team at Equitas Small Finance Bank. The ideal candidate will have 1-4 years of experience in the BFSI industry, preferably with a background in Assets, Inclusive Banking, SBL, Mortgages, or Receivables.

Roles and Responsibilities: Manage and oversee branch receivables operations for timely and accurate payments. Develop and implement strategies to improve receivables management and reduce delinquencies. Collaborate with cross-functional teams to resolve customer complaints and issues. Analyze and report on receivables performance metrics to senior management. Ensure compliance with regulatory requirements and internal policies. Maintain accurate records and reports of receivables transactions.

Job Requirements: Strong knowledge of BFSI industry trends and regulations. Experience in managing assets, inclusive banking, SBL, mortgages, or receivables. Excellent communication and interpersonal skills. Ability to work in a fast-paced environment and meet deadlines. Strong analytical and problem-solving skills. Proficiency in Microsoft Office and other relevant software applications.

Posted 1 hour ago

Apply

1.0 - 4.0 years

1 - 3 Lacs

Chidambaram, Mayiladuthurai, Cuddalore

Work from Office

Source: Naukri

We are looking for a highly skilled and experienced Branch Receivable Officer to join our team at Equitas Small Finance Bank. The ideal candidate will have 1-4 years of experience in the BFSI industry, preferably with a background in Assets, Inclusive Banking, SBL, Mortgages, or Receivables.

Roles and Responsibilities: Manage and oversee branch receivables operations for timely and accurate payments. Develop and implement strategies to improve receivables management and reduce delinquencies. Collaborate with cross-functional teams to resolve customer complaints and issues. Analyze and report on receivables performance metrics to senior management. Ensure compliance with regulatory requirements and internal policies. Maintain accurate records and reports of receivables transactions.

Job Requirements: Strong knowledge of BFSI industry trends and regulations. Experience in managing assets, inclusive banking, SBL, mortgages, or receivables. Excellent communication and interpersonal skills. Ability to work in a fast-paced environment and meet deadlines. Strong analytical and problem-solving skills. Proficiency in Microsoft Office and other relevant software applications.

Location: Mayiladuthurai, Chidambaram, Cuddalore, Tittakudi.

Posted 1 hour ago

Apply

1.0 - 5.0 years

1 - 3 Lacs

Madurai, Dindigul, Oddanchatram

Work from Office

Source: Naukri

We are looking for a highly skilled and experienced Branch Receivable Officer to join our team at Equitas Small Finance Bank. The ideal candidate will have 1-4 years of experience in the BFSI industry, preferably with a background in Assets, Inclusive Banking, SBL, Mortgages, or Receivables.

Roles and Responsibilities: Manage and oversee branch receivables operations for timely and accurate payments. Develop and implement strategies to improve receivables management and reduce delinquencies. Collaborate with cross-functional teams to resolve customer complaints and issues. Analyze and report on receivables performance metrics to senior management. Ensure compliance with regulatory requirements and internal policies. Maintain accurate records and reports of receivables transactions.

Job Requirements: Strong knowledge of BFSI industry trends and regulations. Experience in managing assets, inclusive banking, SBL, mortgages, or receivables. Excellent communication and interpersonal skills. Ability to work in a fast-paced environment and meet deadlines. Strong analytical and problem-solving skills. Proficiency in Microsoft Office and other relevant software applications.

Posted 1 hour ago

Apply

9.0 - 14.0 years

32 - 37 Lacs

Bengaluru

Work from Office

Source: Naukri

Shift: (GMT+05:30) Asia/Kolkata (IST)

Must-have skills: Python, Golang, Rust, GCP, Airflow, Docker, containerization, Hadoop, Hive, SQL, Spark, Generative AI, agentic workflows, machine learning (ML).

About the job: Candidates for this position are preferred to be based in Bangalore, India, and will be expected to comply with their team's hybrid work schedule requirements. We are seeking an experienced Senior Machine Learning Manager to lead the Notifications Science team, focused on building intelligent, ML-driven systems for personalized notifications. These systems ensure that we send the right message to the right customer, at the right time, through the right channel (Push, Email, SMS), and at the right cadence, while balancing incremental revenue with customer engagement health. In this role, you'll be accountable for the technical roadmap, driving innovation to build the next generation of Wayfair's communications ML stack. You'll work closely with a high-performing team of ML scientists and engineers to solve some of Wayfair's most complex challenges in personalization, latency, and scale, with direct impact on customer experience and company revenue.

What you'll do: Own the strategy, roadmap, and execution of notification intelligence and automation solutions. Lead the development of GenAI-powered content generation, send-time optimization, and cross-channel orchestration systems. Build intelligent systems that drive significant incremental revenue while minimizing customer fatigue and unsubscribes. Develop and grow technical leadership within the team, modeling a culture of continuous research and innovation. Collaborate with Engineering and Product teams to scale decisioning systems to millions of notifications daily. Act as a subject matter expert, providing mentorship and technical guidance across the broader Data Science and Engineering organization.

We are a match because you have: a Bachelor's or Master's degree in Computer Science, Mathematics, Statistics, or a related field; 9+ years of industry experience, with at least 1-2 years of experience managing teams and 5+ years as an individual contributor working on production ML systems; a strategic, customer-centric mindset and a desire for creative problem-solving, looking to make a big impact in a growing organization; demonstrated success influencing senior-level stakeholders on strategic direction, based on recommendations backed by in-depth analysis, along with excellent written and verbal communication skills; the ability to partner cross-functionally to own and shape technical roadmaps and the organizations required to drive them; and proficiency in one or more programming languages, e.g., Python, Golang, or Rust.

Nice to have: Experience with GCP, Airflow, and containerization (Docker). Experience building scalable data processing pipelines with big data tools such as Hadoop, Hive, SQL, and Spark. Experience in Bayesian learning, multi-armed bandits, or reinforcement learning. Familiarity with Generative AI and agentic workflows.
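
Channel and send-time selection of the kind described above is commonly framed as a multi-armed bandit problem (listed in this posting as a nice-to-have). Below is a minimal epsilon-greedy sketch on synthetic engagement data; the channel names and rates are hypothetical, and this is not Wayfair's production system.

```python
import random

# Hypothetical notification channels; reward = 1 if the customer engaged.
channels = ["push", "email", "sms"]
counts = {c: 0 for c in channels}
values = {c: 0.0 for c in channels}  # running mean engagement per channel

def choose_channel(epsilon=0.1):
    """Explore a random channel with probability epsilon, else exploit the best so far."""
    if random.random() < epsilon:
        return random.choice(channels)
    return max(channels, key=lambda c: values[c])

def update(channel, reward):
    """Incrementally update the channel's mean observed reward."""
    counts[channel] += 1
    values[channel] += (reward - values[channel]) / counts[channel]

# Simulated feedback loop with made-up engagement probabilities.
true_rates = {"push": 0.12, "email": 0.08, "sms": 0.05}
for _ in range(10_000):
    c = choose_channel()
    update(c, 1 if random.random() < true_rates[c] else 0)

print(values)  # the estimates converge toward true_rates
```

In practice the same loop runs against live engagement events rather than a simulator, and contextual variants condition the choice on customer features.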

Posted 2 hours ago

Apply

2.0 - 4.0 years

4 - 8 Lacs

Bengaluru

Hybrid

Source: Naukri

About the Role: Love deep data? Love discussing solutions instead of problems? Then you could be our next Data Scientist. In a nutshell, your primary responsibility will be enhancing the productivity and utilization of the generated data. You will also work closely with business stakeholders, transform scattered pieces of information into valuable data, and share and present your insights with peers.

What You Will Do: Develop models and run experiments to infer insights from hard data. Improve our product usability and identify new growth opportunities. Understand reseller preferences to provide them with the most relevant products. Design discount programs to help our resellers sell more. Help resellers better recognize end-customer preferences to improve their revenue. Use data to identify bottlenecks that will help our suppliers meet their SLA requirements. Model seasonal demand to predict key organizational metrics. Mentor junior data scientists in the team.

What You Will Need: Bachelor's/Master's degree in computer science (or a similar degree). 2-4 years of experience as a Data Scientist in a fast-paced organization, preferably B2C. Familiarity with neural networks, machine learning, etc. Familiarity with tools like SQL, R, Python, etc. Strong understanding of statistics and linear algebra. Strong understanding of hypothesis/model testing and the ability to identify common model-testing errors. Experience designing and running A/B tests and drawing insights from them. Proficiency in machine learning algorithms. Excellent analytical skills to fetch data from reliable sources and generate accurate insights. Experience in tech and product teams is a plus.

Bonus points for: Experience working on personalization or other ML problems. Familiarity with Big Data tech stacks like Apache Spark, Hadoop, Redshift, etc.
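
As a small illustration of the A/B testing skill called out above, here is a minimal significance check on synthetic conversion data with scipy; the rates and sample sizes are made up for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic per-user conversion outcomes (1 = converted) for control and variant.
control = rng.binomial(1, 0.050, size=20_000)
variant = rng.binomial(1, 0.055, size=20_000)

# Welch's t-test on the two samples; for binary outcomes at this sample size
# it closely approximates a two-proportion z-test.
t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)

print(f"control rate={control.mean():.4f}, variant rate={variant.mean():.4f}")
print(f"t={t_stat:.2f}, p={p_value:.4f}")  # p < 0.05 suggests a real lift
```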

Posted 2 hours ago

Apply

1.0 - 3.0 years

6 - 9 Lacs

Pune, Gurugram, Bengaluru

Hybrid

Source: Naukri

POSITION: Senior Data Engineer / Data Engineer
LOCATION: Bangalore / Mumbai / Kolkata / Gurugram / Hyderabad / Pune / Chennai
EXPERIENCE: 2+ years

ABOUT HASHEDIN: We are software engineers who solve business problems with a Product Mindset for leading global organizations. By combining engineering talent with business insight, we build software and products that can create new enterprise value. The secret to our success is a fast-paced learning environment, an extreme ownership spirit, and a fun culture.

WHY SHOULD YOU JOIN US? With the agility of a start-up and the opportunities of an enterprise, every day at HashedIn your work will make an impact that matters. So, if you are a problem solver looking to thrive in a dynamic, fun culture of inclusion, collaboration, and high performance, HashedIn is the place to be! From learning to leadership, this is your chance to take your software engineering career to the next level. So, what impact will you make? Visit us @ https://hashedin.com

OVERVIEW OF THE ROLE: As a Data Engineer or Senior Data Engineer, you will be hands-on in architecting, building, and optimizing robust, efficient, and secure data pipelines and platforms that power business-critical analytics and applications. You will play a central role in the implementation and automation of scalable batch and streaming data workflows using modern big data and cloud technologies. Working within cross-functional teams, you will deliver well-engineered, high-quality code and data models, and drive best practices for data reliability, lineage, quality, and security.

Mandatory Skills:
- Hands-on software coding or scripting for a minimum of 3 years
- Experience in product management for at least 2 years
- Stakeholder management experience for at least 3 years
- Experience in one of the GCP, AWS, or Azure cloud platforms

Key Responsibilities:
- Design, build, and optimize scalable data pipelines and ETL/ELT workflows using Spark (Scala/Python), SQL, and orchestration tools (e.g., Apache Airflow, Prefect, Luigi).
- Implement efficient solutions for high-volume, batch, real-time streaming, and event-driven data processing, leveraging best-in-class patterns and frameworks.
- Build and maintain data warehouse and lakehouse architectures (e.g., Snowflake, Databricks, Delta Lake, BigQuery, Redshift) to support analytics, data science, and BI workloads.
- Develop, automate, and monitor Airflow DAGs/jobs on cloud or Kubernetes, following robust deployment and operational practices (CI/CD, containerization, infra-as-code); an illustrative DAG sketch follows this posting.
- Write performant, production-grade SQL for complex data aggregation, transformation, and analytics tasks.
- Ensure data quality, consistency, and governance across the stack, implementing processes for validation, cleansing, anomaly detection, and reconciliation.
- Collaborate with Data Scientists, Analysts, and DevOps engineers to ingest, structure, and expose structured, semi-structured, and unstructured data for diverse use cases.
- Contribute to data modeling, schema design, and data partitioning strategies, and ensure adherence to best practices for performance and cost optimization.
- Implement, document, and extend data lineage, cataloging, and observability through tools such as AWS Glue, Azure Purview, Amundsen, or open-source technologies.
- Apply and enforce data security, privacy, and compliance requirements (e.g., access control, data masking, retention policies, GDPR/CCPA).
- Take ownership of the end-to-end data pipeline lifecycle: design, development, code reviews, testing, deployment, operational monitoring, and maintenance/troubleshooting.
- Contribute to frameworks, reusable modules, and automation to improve development efficiency and maintainability of the codebase.
- Stay abreast of industry trends and emerging technologies, participating in code reviews, technical discussions, and peer mentoring as needed.

Skills & Experience:
- Proficiency with Spark (Python or Scala), SQL, and data pipeline orchestration (Airflow, Prefect, Luigi, or similar).
- Experience with cloud data ecosystems (AWS, GCP, Azure) and cloud-native services for data processing (Glue, Dataflow, Dataproc, EMR, HDInsight, Synapse, etc.).
- Hands-on development skills in at least one programming language (Python, Scala, or Java preferred); solid knowledge of software engineering best practices (version control, testing, modularity).
- Deep understanding of batch and streaming architectures (Kafka, Kinesis, Pub/Sub, Flink, Structured Streaming, Spark Streaming).
- Expertise in data warehouse/lakehouse solutions (Snowflake, Databricks, Delta Lake, BigQuery, Redshift, Synapse) and storage formats (Parquet, ORC, Delta, Iceberg, Avro).
- Strong SQL development skills for ETL, analytics, and performance optimization.
- Familiarity with Kubernetes (K8s), containerization (Docker), and deploying data pipelines in distributed/cloud-native environments.
- Experience with data quality frameworks (Great Expectations, Deequ, or custom validation), monitoring/observability tools, and automated testing.
- Working knowledge of data modeling (star/snowflake, normalized, denormalized) and metadata/catalog management.
- Understanding of data security, privacy, and regulatory compliance (access management, PII masking, auditing, GDPR/CCPA/HIPAA).
- Familiarity with BI or visualization tools (Power BI, Tableau, Looker, etc.) is an advantage but not core.
- Previous experience with data migrations, modernization, or refactoring legacy ETL processes to modern cloud architectures is a strong plus.
- Bonus: exposure to open-source data tools (dbt, Delta Lake, Apache Iceberg, Amundsen, Great Expectations, etc.) and knowledge of DevOps/MLOps processes.

Professional Attributes:
- Strong analytical and problem-solving skills; attention to detail and commitment to code quality and documentation.
- Ability to communicate technical designs and issues effectively with team members and stakeholders.
- Proven self-starter, fast learner, and collaborative team player who thrives in dynamic, fast-paced environments.
- Passion for mentoring, sharing knowledge, and raising the technical bar for data engineering practices.

Desirable Experience:
- Contributions to open-source data engineering/tools communities.
- Implementing data cataloging, stewardship, and data democratization initiatives.
- Hands-on work with DataOps/DevOps pipelines for code and data.
- Knowledge of ML pipeline integration (feature stores, model serving, lineage/monitoring integration) is beneficial.

EDUCATIONAL QUALIFICATIONS:
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field (or equivalent experience).
- Certifications in cloud platforms (AWS, GCP, Azure) and/or data engineering (AWS Data Analytics, GCP Data Engineer, Databricks).
- Experience working in an Agile environment with exposure to CI/CD, Git, Jira, Confluence, and code review processes.
- Prior work in highly regulated or large-scale enterprise data environments (finance, healthcare, or similar) is a plus.
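
To make the Airflow responsibility above concrete, here is a minimal DAG sketch with a single Python task. The DAG id, schedule, and task body are hypothetical placeholders, not HashedIn code, and the `schedule` argument assumes Airflow 2.4+.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load():
    """Placeholder task body: pull from a source system and load to a warehouse."""
    print("extract, transform, load")

# Hypothetical daily pipeline; ids and dates are illustrative only.
with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
```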

Posted 3 hours ago

Apply

8.0 - 10.0 years

15 - 18 Lacs

Pune

Work from Office

Source: Naukri

Hiring a Solution Architect with 8-10 years' experience in data, AI, and GenAI. Must have strong cloud (AWS/Azure/GCP), LLM, ML (TensorFlow/PyTorch), and full-stack skills. The role involves designing scalable architectures and leading technical teams.

Posted 3 hours ago

Apply

7.0 - 12.0 years

12 - 22 Lacs

Pune, Chennai, Bengaluru

Work from Office

Source: Naukri

Dear Candidate, greetings of the day!

Location: Bangalore, Hyderabad, Pune, and Chennai. Experience: 3.5 to 13 years.

Key skills: Spark, PySpark, or Scala (any big data skill is fine); all other skills are good to have.

Desired Competencies (Technical/Behavioral):

Must-Have: 1. Minimum 3-12 years of experience in the build and deployment of Bigdata applications using SparkSQL and SparkStreaming in Python. 2. Minimum 2 years of extensive experience in the design, build, and deployment of Python-based applications. 3. Design and develop ETL integration patterns using Python on Spark; develop a framework for converting existing PowerCenter mappings to PySpark (Python and Spark) jobs; expertise in graph algorithms and advanced recursion techniques; hands-on experience in generating/parsing XML and JSON documents and REST API requests/responses (see the sketch below).

Good-to-Have: Hands-on experience writing complex SQL queries and exporting/importing large amounts of data using utilities.
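
A minimal illustration of the JSON/REST handling named in the must-haves, using the standard requests library; the endpoint and fields are hypothetical.

```python
import json

import requests

# Hypothetical endpoint; substitute a real API in practice.
url = "https://api.example.com/v1/orders"

response = requests.get(url, params={"status": "open"}, timeout=10)
response.raise_for_status()  # fail fast on HTTP errors

orders = response.json()  # parse the JSON body into Python objects
for order in orders.get("items", []):
    print(order.get("id"), order.get("amount"))

# Generating a JSON request body for a POST:
payload = json.dumps({"id": 42, "status": "closed"})
requests.post(
    url,
    data=payload,
    headers={"Content-Type": "application/json"},
    timeout=10,
)
```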

Posted 3 hours ago

Apply

4.0 - 9.0 years

0 - 1 Lacs

Hyderabad, Gurugram, Bengaluru

Hybrid

Source: Naukri

The position is suited for individuals who have strong PySpark programming skills and a demonstrated ability to work effectively in a fast-paced, high-volume, deadline-driven environment.

Education and Experience: B.Tech/M.Tech/MCA/MS/MBA. Experience in the design and implementation of Big Data systems using PySpark, and in database migration, transformation, and integration solutions for any data warehousing project.

Required Skills:
- Excellent knowledge of Apache Spark and Python programming experience.
- Deep experience in developing data processing tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations (see the sketch below).
- Experience in deploying and operationalizing code; knowledge of scheduling tools like Airflow, Control-M, etc. is preferred.
- Working experience with cloud technology architecture such as the AWS ecosystem, Google Cloud, BigQuery, etc. is an added advantage.
- Understanding of Unix/Linux and shell scripting.
- Data modelling experience using advanced statistical analysis and unstructured data processing.
- Experience building APIs for provisioning data to downstream systems by leveraging different frameworks.
- Hands-on project experience with IDEs such as Jupyter Notebook, Zeppelin, or PyCharm.
- Hands-on experience with AWS S3 filesystem operations.
- Good knowledge of Hadoop, Hive, and Cloudera/Hortonworks Data Platform.
- Experience handling CDC operations for huge volumes of data.
- Understanding of, and operating experience with, Agile delivery methodologies.
- Hands-on experience in data validation and writing unit test cases.
- Experience integrating PySpark with downstream and upstream applications through a batch/real-time interface.
- Experience in fine-tuning processes and troubleshooting performance issues.
- Demonstrated expertise in developing design documents such as HLD, LLD, etc.
- Experience leading requirements gathering and developing solution architecture for data migration/integration initiatives.
- Experience handling client interactions at different phases of projects.
- Experience leading a team on a project or a module.
- Well versed with the onsite/offshore model and its challenges.

Preferred Skills:
- Exposure to an ETL/reporting tool (Informatica, Jasper, QlikView, Tableau) is desirable.
- Exposure to Jenkins or an equivalent CI/CD tool and a Git repository is preferred.
- Design and develop AI/ML models using PySpark in a cloud environment.
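
A minimal PySpark read-enrich-load sketch of the kind described in the required skills; the paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("enrich_orders").getOrCreate()

# Read from external sources (hypothetical paths).
orders = spark.read.option("header", True).csv("s3a://bucket/raw/orders.csv")
customers = spark.read.parquet("s3a://bucket/dim/customers")

# Merge and enrich: join on customer_id and derive an order_year column.
enriched = (
    orders.join(customers, on="customer_id", how="left")
          .withColumn("order_year", F.year(F.col("order_date")))
)

# Load into the target destination, partitioned for downstream queries.
enriched.write.mode("overwrite").partitionBy("order_year").parquet(
    "s3a://bucket/curated/orders"
)
```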

Posted 3 hours ago

Apply

5.0 - 7.0 years

15 - 18 Lacs

Bengaluru

Work from Office

Source: Naukri

We are seeking a highly skilled GCP Data Engineer with experience in designing and developing data ingestion frameworks, real-time processing solutions, and data transformation frameworks using open-source tools. The role involves operationalizing open-source data-analytics tools for enterprise use, ensuring adherence to data governance policies, and performing root-cause analysis on data-related issues. The ideal candidate has a strong understanding of cloud platforms, especially GCP, with hands-on expertise in tools such as Kafka, Apache Spark, Python, Hadoop, and Hive. Experience with data governance and DevOps practices, along with GCP certifications, is preferred.
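
As an illustration of the real-time stack named above (Kafka plus Spark), here is a minimal Structured Streaming sketch; the broker address and topic are placeholders, and running it assumes the spark-sql-kafka connector package is on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka_ingest").getOrCreate()

# Subscribe to a hypothetical Kafka topic of JSON events.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "events")
         .load()
)

# Kafka delivers raw bytes; cast the value column to string for parsing.
parsed = events.select(F.col("value").cast("string").alias("json"))

# Write to the console for demonstration; a real pipeline would parse the
# JSON against a schema and sink to BigQuery, GCS, or similar.
query = parsed.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```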

Posted 4 hours ago

Apply

5.0 - 8.0 years

15 - 25 Lacs

Kolkata, Chennai, Bengaluru

Hybrid

Source: Naukri

Global Gen AI Developer: enabling a software-defined, electrified future.

Visteon is a technology company that develops and builds innovative digital cockpit and electrification products at the leading edge of the mobility revolution. Founded in 2000, Visteon brings decades of automotive intelligence combined with Silicon Valley speed to apply global insights that help transform the software-defined vehicle of the future for many of the world's largest OEMs. The company employs 10,000 people in 18 countries around the globe.

Mission of the Role: Facilitate enterprise machine learning and artificial intelligence solutions using the latest technologies Visteon is adopting globally.

Key Objectives of this Role: The primary goal of the Global ML/AI Developer is to leverage advanced machine learning and artificial intelligence techniques to develop innovative solutions that drive Visteon's strategic initiatives. By collaborating with cross-functional teams and stakeholders, this role identifies opportunities for AI-driven improvements, designs and implements scalable ML models, and integrates these models into existing systems to enhance operational efficiency. By following development best practices, fostering a culture of continuous learning, and staying abreast of AI advancements, the Global ML/AI Developer ensures that all AI solutions align with organizational goals, support data-driven decision-making, and continuously improve Visteon's technological capabilities.

Qualification, Experience and Skills: 6-8 years.

Technical Skills: Expertise in machine learning frameworks (e.g., TensorFlow, PyTorch), programming languages (e.g., Python, R, SQL), and data processing tools (e.g., Apache Spark, Hadoop). Proficiency in developing, training, and deploying ML models, including supervised and unsupervised learning, deep learning, and reinforcement learning. Strong understanding of data engineering concepts, including data preprocessing, feature engineering, and data pipeline development. Experience with cloud platforms (preferably Microsoft Azure) for deploying and scaling ML solutions.

Business Acumen: Strong business analysis skills and the ability to translate complex technical concepts into actionable business insights and recommendations.

Key Behaviors: Innovation: continuously seeks out new ideas, technologies, and methodologies to improve AI/ML solutions and drive the organization forward. Attention to Detail: pays close attention to all aspects of the work, ensuring accuracy and thoroughness in data analysis, model development, and documentation. Effective Communication: clearly and effectively communicates complex technical concepts to non-technical stakeholders, ensuring understanding and alignment across the organization.
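
Purely as an illustration of the model development and training proficiency described above, here is a minimal PyTorch training loop on synthetic data; nothing in it is Visteon code.

```python
import torch
from torch import nn

# Synthetic binary-classification data: 256 samples, 10 features.
X = torch.randn(256, 10)
y = (X.sum(dim=1) > 0).float().unsqueeze(1)

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)   # forward pass
    loss.backward()               # backpropagate
    optimizer.step()              # update weights

print(f"final loss: {loss.item():.4f}")
```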

Posted 5 hours ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

JD for Solutions Architect for Provider 360 (C2 role).

About EXL Health Payments Analytics: At the EXL Health Payments Analytics Center of Excellence, we are looking for passionate individuals with a growth/startup mindset to experiment, fail fast, learn, and contribute to our five-fold growth story from $200M to $1B. EXL serves as the Special Investigation Unit for 6 of the top 10 US health insurance companies (roughly one third of US healthcare data is handled by us), helping with error/overpayment detection on hospital and doctor claims. Unlike typical services and consulting companies, we make our revenue from the savings we identify for the client (commission/outcome basis). We productize algorithms and R&D accelerators that are intended to be used across multiple health insurance clients for the above business case.

So expect an ecosystem that has: massive data assets (millions of structured records and thousands of unstructured records processed monthly); tech investment (on-prem GPUs, Azure, AWS, Databricks, on-prem Hadoop/Hive); a leadership push toward digitization, data-led decisions, and AI; and a 100+ member analytics team of data enthusiasts, decision scientists, and business/subject-matter experts.

Our Typical Day: Monitoring business performance and operations, and problem-solving by applying different analytics levers or involving different teams (ML models, SQL rules, hospital profiling, pattern mining, etc.) to meet client savings targets. The analytics team acts as the R&D and operational-excellence team, constantly finding new patterns using state-of-the-art libraries and technologies, from SQL queries to LLM agents.

About the Role: We are looking for a self-driven Analytics Consultant to join our team of data and domain enthusiasts in healthcare payment integrity. You will get the opportunity to work with various payers and providers, and learn how we reduce provider abrasion and improve provider engagement with our innovative and highly scalable solutions.

Responsibilities:
- Design data-driven solutions and frameworks (descriptive and predictive) from scratch, and consult in a leadership capacity on potential solutions, storyboards, and POCs.
- Drive business metrics that add to the top line and/or profitability.
- Perform quantitative and qualitative analysis, such as raw data analysis and data deep-dives, to acquire insights from data.
- Develop descriptive (reporting) through to prescriptive analytics for business monitoring and operational excellence.
- Discuss data insights with business stakeholders to communicate findings and obtain the equivalent business context.
- Apply next-gen technology to all parts of the analytics lifecycle, from data extraction, exploratory data analysis, data mining, and information extraction from unstructured data, to visualization and storyboarding.
- Manage a small team of data analysts.

Skillsets:
- 7+ years of experience in strategy and business optimization.
- Postgraduate or MBA (preferred), or a graduate in Engineering, Mathematics, Operations Research, Science, or Statistics.
- Experience in the healthcare industry is preferred.
- At least 7 years' experience in analytics using SQL, SAS, Python, and basic statistical concepts, analyzing data and interpreting results for the business.
- Ability to translate and structure business problems to deliver technical solutions.
- Proven experience working in a fast-paced environment supporting multiple concurrent projects.
- Collaborative team player, flexible in taking on various projects, with a desire to work in a fast-paced environment.

Posted 5 hours ago

Apply

7.0 - 12.0 years

22 - 25 Lacs

India

On-site

Source: GlassDoor

TECHNICAL ARCHITECT

Key Responsibilities:
1. Designing technology systems: plan and design the structure of technology solutions, and work with design and development teams to assist with the process.
2. Communicating: communicate system requirements to software development teams, and explain plans to developers and designers; also communicate the value of a solution to stakeholders and clients.
3. Managing stakeholders: work with clients and stakeholders to understand their vision for the systems, and manage stakeholder expectations.
4. Architectural oversight: develop and implement robust architectures for AI/ML and data science solutions, ensuring scalability, security, and performance. Oversee architecture for data-driven web applications and data science projects, providing guidance on best practices in data processing, model deployment, and end-to-end workflows.
5. Problem solving: identify and troubleshoot technical problems in existing or new systems, and assist with solving technical problems when they arise.
6. Ensuring quality: ensure systems meet security and quality standards, and monitor systems to ensure they meet both user needs and business goals.
7. Project management: break down project requirements into manageable pieces of work, and organise the workloads of technical teams.
8. Tool and framework expertise: utilise relevant tools and technologies, including but not limited to LLMs, TensorFlow, PyTorch, Apache Spark, cloud platforms (AWS, Azure, GCP), web app development frameworks, and DevOps practices.
9. Continuous improvement: stay current on emerging technologies and methods in AI, ML, data science, and web applications, bringing insights back to the team to foster continuous improvement.

Technical Skills:
1. Proficiency in AI/ML frameworks such as TensorFlow, PyTorch, Keras, and scikit-learn for developing machine learning and deep learning models.
2. Knowledge of or experience working with self-hosted or managed LLMs.
3. Knowledge of or experience with NLP tools and libraries (e.g., SpaCy, NLTK, Hugging Face Transformers), and familiarity with computer vision frameworks like OpenCV and related libraries for image processing and object recognition.
4. Experience or knowledge in back-end frameworks (e.g., Django, Spring Boot, Node.js, Express) and building RESTful and GraphQL APIs.
5. Familiarity with microservices, serverless, and event-driven architectures; strong understanding of design patterns (e.g., Factory, Singleton, Observer) to ensure code scalability and reusability.
6. Proficiency in modern front-end frameworks such as React, Angular, or Vue.js, with an understanding of responsive design, UX/UI principles, and state management (e.g., Redux).
7. In-depth knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra), as well as caching solutions (e.g., Redis, Memcached).
8. Expertise in tools such as Apache Spark, Hadoop, Pandas, and Dask for large-scale data processing.
9. Understanding of data warehouses and ETL tools (e.g., Snowflake, BigQuery, Redshift, Airflow) to manage large datasets.
10. Familiarity with visualisation tools (e.g., Tableau, Power BI, Plotly) for building dashboards and conveying insights.
11. Knowledge of deploying models with TensorFlow Serving, Flask, FastAPI, or cloud-native services (e.g., AWS SageMaker, Google AI Platform); see the sketch after this list.
12. Familiarity with MLOps tools and practices for versioning, monitoring, and scaling models (e.g., MLflow, Kubeflow, TFX).
13. Knowledge of or experience in CI/CD, IaC, and cloud-native toolchains.
14. Understanding of security principles, including firewalls, VPC, IAM, and TLS/SSL for secure communication.
15. Knowledge of API Gateway, service mesh (e.g., Istio), and NGINX for API security, rate limiting, and traffic management.

Experience Required: Technical Architect with 7-12 years of experience.
Salary: 22-25 LPA.
Job Types: Full-time, Permanent.
Pay: ₹2,200,000.00 - ₹2,500,000.00 per year.
Location Type: In-person.
Work Location: In person.
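
As one concrete instance of the model-deployment skill in item 11 above, here is a minimal Flask serving sketch around a scikit-learn model; the route and toy model are illustrative placeholders, not a prescribed design.

```python
from flask import Flask, jsonify, request
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

app = Flask(__name__)

# Train a toy model at startup; a real service would load a persisted artifact.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

@app.route("/predict", methods=["POST"])
def predict():
    """Expects a JSON body like {"features": [5.1, 3.5, 1.4, 0.2]}."""
    features = request.get_json()["features"]
    prediction = model.predict([features])[0]
    return jsonify({"class": int(prediction)})

if __name__ == "__main__":
    app.run(port=5000)  # development server only; use gunicorn or similar in production
```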

Posted 6 hours ago

Apply

4.0 years

0 Lacs

Gurgaon

On-site

Source: GlassDoor

Role: Senior Data Scientist. Location: Gurgaon, Haryana (Hybrid). Contract to hire. Experience: 4+ years.

About the Role: We are seeking a highly analytical and technically skilled Data Scientist to join our team. You will leverage advanced analytics, statistical modeling, and machine learning to uncover insights and help solve complex business challenges.

Key Responsibilities: Design and develop advanced, statistically effective algorithms for solving high-dimensional problems. Apply statistical and data mining techniques such as hypothesis testing, machine learning, text mining, and predictive modeling to analyze trends and generate insights. Create visualizations and figures to clearly communicate analytical findings. Collaborate closely with clients and stakeholders to understand business needs and deliver actionable insights. Evaluate and integrate new datasets and technologies into the existing analytical platform. Contribute to the continuous improvement of analytical processes and tools.

Required Qualifications: 4+ years of relevant industry experience in data analytics, statistical modeling, or related fields. Strong academic background with coursework emphasizing analytical and quantitative skills. Proficiency in at least one programming language such as Java, Python, or R. Hands-on experience with platforms and tools like the Hadoop ecosystem, Amazon Web Services (AWS), or other database systems. Familiarity with big data concepts and advanced analytical methods (e.g., recommender systems, social listening). Excellent communication skills and fluency in English.

Preferred Skills: Experience working in cross-functional teams. Strong problem-solving and critical-thinking abilities. Ability to work in a fast-paced, dynamic environment.

Job Types: Full-time, Permanent, Contractual / Temporary. Schedule: Day shift, Monday to Friday, morning shift. Work Location: In person.
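
To illustrate the predictive-modeling responsibility above, here is a minimal scikit-learn sketch on synthetic high-dimensional data (illustrative only, not the employer's codebase).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic high-dimensional data: 1,000 samples, 50 features.
X, y = make_classification(n_samples=1000, n_features=50, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print(f"test accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```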

Posted 6 hours ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Greetings from TCS!

TCS is hiring a Snowflake Tech Architect / Tech Lead. Experience: 10+ years. Location: Chennai/Bangalore/Mumbai.

Required Technical Skill Set: 10 years of total experience, with at least 3+ years of expertise in cloud data warehouse technologies on Snowflake plus AWS, Azure, or GCP. At least one end-to-end Snowflake implementation is a must, covering all aspects including architecture, design, data engineering, data visualization, and data governance (specifically data quality and lineage). Significant experience with data migrations and with the design and development of Operational Data Stores, Enterprise Data Warehouses, and Data Marts. Good hands-on knowledge of SQL and the data warehousing life cycle is an absolute requirement. Experience with cloud ETL and ELT in one of the tools like DBT, Glue, ADF, or Matillion, or any other ELT tool, and exposure to the Bigdata ecosystem (Hadoop). Expertise with at least one of the traditional data warehouse solutions on Oracle, Teradata, or Microsoft SQL Server. Excellent communication skills to liaise with business and IT stakeholders. Expertise in planning project execution and estimating effort. Understanding of Data Vault, data mesh, and data fabric architecture patterns. Exposure to Agile ways of working.

Must-Have: Experience with cloud services like S3/Blob/GCS, Lambda, Glue/ADF, and Apache Airflow. Experience in, or an understanding of, the Banking and Financial Services business domain.

Good-to-Have: Experience in coding languages like Python and PySpark would be an added advantage. Experience in DevOps, CI/CD, and GitHub is a big plus.

Responsibilities / Expectations from the Role:
1. Provide technical pre-sales enablement on data-on-cloud aspects covering data architecture, data engineering, data modelling, data consumption, and data governance, focusing on Snowflake.
2. Expert-level knowledge of Snowflake data engineering, performance, consumption, security, governance, and admin aspects.
3. Work with cross-functional teams in an onsite/offshore setup, and discuss and solve technical problems with various stakeholders, including customer teams.
4. Create technical proposals and respond to large-scale RFPs.
5. Discuss existing solutions, design and optimize solutions, and prepare execution plans for development, deployment, and enabling end users to utilize the data platform.
6. The role demands excellent oral and written communication skills to organize workshops and meetings with account teams, account leadership, and senior client stakeholders, up to CXO level.
7. Adept at creating POVs and conducting PoCs.
8. Liaise with technology partners like Snowflake, Matillion, DBT, etc.
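
For concreteness, a minimal sketch of querying Snowflake from Python with the snowflake-connector-python package; the account, credentials, and query are placeholders, and real deployments would pull secrets from a vault.

```python
import snowflake.connector

# Placeholder credentials; in practice these come from a secrets manager.
conn = snowflake.connector.connect(
    account="my_org-my_account",
    user="ANALYST",
    password="***",
    warehouse="COMPUTE_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    cur.execute("SELECT CURRENT_VERSION()")  # sanity-check query
    print(cur.fetchone())
finally:
    conn.close()
```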

Posted 6 hours ago

Apply

5.0 - 6.0 years

5 - 10 Lacs

India

On-site

Source: GlassDoor

Job Summary: We are seeking a highly skilled Python Developer to join our team.

Key Responsibilities: Design, develop, and deploy Python applications. Work independently on machine learning model development, evaluation, and optimization. Implement scalable and efficient algorithms for predictive analytics and automation. Optimize code for performance, scalability, and maintainability. Collaborate with stakeholders to understand business requirements and translate them into technical solutions. Integrate APIs and third-party tools to enhance functionality. Document processes, code, and best practices for maintainability.

Required Skills & Qualifications: 5-6 years of professional experience in Python application development. Proficiency in Python libraries such as Pandas, NumPy, SciPy, and Matplotlib. Experience with SQL and NoSQL databases (PostgreSQL, MongoDB, etc.). Hands-on experience with big data technologies (Apache Spark, Delta Lake, Hadoop, etc.). Strong experience in developing APIs and microservices using FastAPI, Flask, or Django. Good understanding of data structures, algorithms, and software development best practices. Strong problem-solving and debugging skills. Ability to work independently and handle multiple projects simultaneously. Good to have: working knowledge of cloud platforms (Azure/AWS/GCP) for deploying ML models and data applications.

Job Type: Full-time. Pay: ₹500,000.00 - ₹1,000,000.00 per year. Schedule: Fixed shift. Work Location: In person. Application Deadline: 30/06/2025. Expected Start Date: 07/07/2025.
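
A minimal FastAPI microservice sketch of the API/microservices skill named above; the routes and model are hypothetical.

```python
from typing import Dict

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="example-service")

class Item(BaseModel):
    name: str
    price: float

# In-memory store as a stand-in for a real database.
items: Dict[int, Item] = {}

@app.post("/items/{item_id}")
def create_item(item_id: int, item: Item):
    items[item_id] = item
    return {"id": item_id, "item": item}

@app.get("/items/{item_id}")
def read_item(item_id: int):
    if item_id not in items:
        return {"error": "not found"}
    return {"id": item_id, "item": items[item_id]}

# Run with: uvicorn main:app --reload  (assuming this file is saved as main.py)
```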

Posted 6 hours ago

Apply

4.0 years

0 Lacs

Pune

On-site

Source: GlassDoor

Role: Data Scientist. Location: Pune, Maharashtra (Hybrid). Contract to hire. Experience: 4+ years.

About the Role: We are seeking a highly analytical and technically skilled Data Scientist to join our team. You will leverage advanced analytics, statistical modeling, and machine learning to uncover insights and help solve complex business challenges.

Key Responsibilities: Design and develop advanced, statistically effective algorithms for solving high-dimensional problems. Apply statistical and data mining techniques such as hypothesis testing, machine learning, text mining, and predictive modeling to analyze trends and generate insights. Create visualizations and figures to clearly communicate analytical findings. Collaborate closely with clients and stakeholders to understand business needs and deliver actionable insights. Evaluate and integrate new datasets and technologies into the existing analytical platform. Contribute to the continuous improvement of analytical processes and tools.

Required Qualifications: 4+ years of relevant industry experience in data analytics, statistical modeling, or related fields. Strong academic background with coursework emphasizing analytical and quantitative skills. Proficiency in at least one programming language such as Java, Python, or R. Hands-on experience with platforms and tools like the Hadoop ecosystem, Amazon Web Services (AWS), or other database systems. Familiarity with big data concepts and advanced analytical methods (e.g., recommender systems, social listening). Excellent communication skills and fluency in English.

Preferred Skills: Experience working in cross-functional teams. Strong problem-solving and critical-thinking abilities. Ability to work in a fast-paced, dynamic environment.

Job Types: Full-time, Permanent, Contractual / Temporary. Schedule: Day shift, Monday to Friday, morning shift. Work Location: In person.
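
As a small illustration of the text-mining technique named in the responsibilities, here is a TF-IDF sketch over made-up documents.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Made-up documents standing in for real text data.
docs = [
    "claim denied due to missing documentation",
    "claim approved after review",
    "documentation received, review pending",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(docs)  # sparse matrix: documents x terms

# Terms weighted highest in the first document.
terms = vectorizer.get_feature_names_out()
row = tfidf[0].toarray().ravel()
top = sorted(zip(terms, row), key=lambda t: -t[1])[:3]
print(top)
```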

Posted 6 hours ago

Apply

3.0 years

3 - 7 Lacs

Pune

Remote

Source: GlassDoor

Your work days are brighter here.

At Workday, it all began with a conversation over breakfast. When our founders met at a sunny California diner, they came up with an idea to revolutionize the enterprise software market. And when we began to rise, one thing that really set us apart was our culture: a culture driven by our value of putting our people first. Ever since, the happiness, development, and contribution of every Workmate has been central to who we are. Our Workmates believe a healthy, employee-centric, collaborative culture is the essential mix of ingredients for success in business. That's why we look after our people, communities, and the planet while still being profitable. Feel encouraged to shine, however that manifests: you don't need to hide who you are. You can feel the energy and the passion; it's what makes us unique. Inspired to make a brighter work day for all and transform with us to the next stage of our growth journey? Bring your brightest version of you and have a brighter work day here.

At Workday, we value our candidates' privacy and data security. Workday will never ask candidates to apply to jobs through websites that are not Workday Careers. Please be aware of sites that may ask you to input your data in connection with a job posting that appears to be from Workday but is not. In addition, Workday will never ask candidates to pay a recruiting fee, or to pay for consulting or coaching services, in order to apply for a job at Workday.

About the Team: It's fun to work in a company where people truly believe in what they're doing. At Workday, we're committed to bringing passion and customer focus to the business of enterprise applications. We work hard, and we're serious about what we do. But we like to have a good time, too. In fact, we run our company with that principle in mind every day: one of our core values is fun.

About the Role: Workday is looking for a Support Engineer specializing in Analytics, with expertise in troubleshooting, performance optimization, and data analysis across Workday's analytics services, including Prism Analytics, People Analytics, Discovery Boards, and Accounting Center. The ideal candidate has a solid foundation in big-data processing, data transformation, and reporting frameworks, with the ability to diagnose and resolve complex issues by analyzing logs, performance metrics, and system integrations. This role requires hands-on experience with query performance tuning, data pipeline debugging, and structured troubleshooting methodologies to support Workday's analytics solutions. Strong data modeling, log analysis, and problem-solving skills, combined with clear, effective communication, are essential for success in this role.

Key Areas of Responsibility: Provide sophisticated technical support for Workday's reporting and analytics tools, including Prism Analytics, People Analytics, Discovery Boards, and Accounting Center, focusing on performance optimization, index debugging, memory management, and system health debugging. Develop expertise in Workday analytics services to drive high-performance reporting and data analytics solutions, using Prism Analytics, People Analytics, and SQL best practices. Collaborate with clients to define business requirements and translate them into optimized reports and configurations, improving query performance, data accuracy, and system health using Prism Analytics and Discovery Boards. Troubleshoot and resolve issues related to report configurations, system performance, integrations, and memory management, including detailed analysis of logs, query performance, and data pipelines. Guide customers in building, modifying, and optimizing reports, ensuring scalability, data integrity, and alignment with business needs, especially in Prism Analytics and Accounting Center. Educate users on standard methodologies for Workday reporting, security, and data governance, emphasizing People Analytics and Discovery Boards. Collaborate cross-functionally with engineering teams to address data quality issues, security concerns, and performance optimizations across Prism Analytics and Accounting Center, with a focus on memory management and system health. Contribute to documentation, QA efforts, and the optimization of analytics tools, with a focus on SQL querying, indexing, and debugging system health issues. Participate in 24x7 global support coverage, providing timely and efficient support across time zones.

Key Technical Skills & Knowledge: Bachelor's degree in Computer Science, Information Management, Statistics, Data Science, or a related field. 3+ years of experience in customer support, system performance optimization, data analysis, or similar roles, with a solid background in big data technologies and AI-driven analytics. Demonstrable experience with data platforms (e.g., Spark, Hadoop) and large-scale datasets, including data pipeline design and distributed processing. Hands-on experience with advanced reporting tools and analytics solutions, including AI-powered reporting platforms and big data tools like Spark for data transformation and analysis. Strong proficiency in SQL and data querying, with the ability to analyze complex data sets, optimize queries, and derive data-driven insights to enhance system performance and business processes. Demonstrated ability to gather and map business requirements to advanced analytics and application capabilities, ensuring alignment with AI-driven insights and reporting solutions. Solid understanding of data architecture, including data lakes, ETL processes, and real-time data streaming. Strong analytical skills to collect, organize, and interpret complex datasets, using AI and big data tools to drive product improvements and optimize reporting performance. Ability to deliver data-driven insights to technical and non-technical partners, presenting complex findings to end users and executive teams in an actionable manner. Proven collaboration skills, working across teams to drive issue resolution and using AI or machine learning models to enhance system functionality and customer experience. Strong written and verbal communication skills, with experience in technical consulting, customer support, or AI/ML-driven technical roles. Self-motivated, with the ability to work independently in a fast-paced environment while using AI and big data technologies to identify and resolve issues.

Our Approach to Flexible Work: With Flex Work, we're combining the best of both worlds: in-person time and remote. Our approach enables our teams to deepen connections, maintain a strong community, and do their best work. We know that flexibility can take shape in many ways, so rather than a number of required days in-office each week, we simply spend at least half (50%) of our time each quarter in the office or in the field with our customers, prospects, and partners (depending on role). This means you'll have the freedom to create a flexible schedule that caters to your business, team, and personal needs, while being intentional about making the most of time spent together. Those in our remote "home office" roles also have the opportunity to come together in our offices for important moments that matter.

Are you being referred to one of our roles? If so, ask your connection at Workday about our Employee Referral process!

Posted 6 hours ago

Apply

6.0 years

2 - 5 Lacs

Pune

On-site

Source: GlassDoor

Our team members are at the heart of everything we do. At Cencora, we are united in our responsibility to create healthier futures, and every person here is essential to us being able to deliver on that purpose. If you want to make a difference at the center of health, come join our innovative company and help us improve the lives of people and animals everywhere. Apply today!

Job Details: Cencora is looking for a Mid-Level SQL Server Database Developer to join our Data Warehouse Team in our 3rd Party Logistics Division. You will work closely with the Data Warehouse, Business Intelligence, EDI, and Account Management teams; lessons learned from this activity will provide a foundation for your career goals and exciting opportunities to take our operations farther with new technologies and methods. If you love a fast-paced and challenging work environment with many future worldwide support opportunities, you may be our ideal candidate.

Shift: 02:00 PM to 11:00 PM IST.

Primary Duties and Responsibilities: Build out new code management, release, and control procedures. Troubleshoot SSIS package and SQL job failures. Set up new inbound and outbound file processing requests. Possess strong data analysis skills and an investigative mindset to troubleshoot and resolve issues by analyzing data and examining code in depth. Be highly skilled in debugging and understanding existing T-SQL code bases to connect the dots and resolve complex issues. Develop centralized performance and security monitoring methods. Design and implement High Availability and Disaster Recovery solutions. Bring hands-on experience in Microsoft SQL Server installation, configuration, performance tuning, maintenance, and database administration on production servers. Maintain backup and recovery plans. Participate in the on-call rotation schedule. Perform multiple Windows Server and SQL upgrades/migrations. Work with supporting vendors, database owners, and infrastructure teams. Work with the Windows environment for better SQL Server compliance. Contribute to new cloud platform choices. Be well organized and focused, with good communication skills.

Requirements: 6+ years SQL Server T-SQL. 4+ years SSIS development and support. 4+ years SQL Server administration. 4+ years Windows Server administration. 4+ years in a data warehouse environment. One of the following: PowerShell (3+ years) or C# (3+ years).

Nice to Have: 3rd Party Logistics experience is a major plus. PowerShell. AS400 and RPG knowledge. Windows Server administration. Azure.

Experience & Educational Requirements: Bachelor's degree in Computer Science, Information Technology, or another related discipline, or equivalent related experience. 2+ years of directly related or relevant experience, preferably in software design and development.

Preferred Certifications: Android Development Certification. Microsoft ASP.NET Certification. Microsoft Certified Engineer. Application/Infrastructure/Enterprise Architect training and certification (e.g., TOGAF). Certified Scrum Master. SAFe Agile Certification. DevOps certifications like AWS Certified DevOps Engineer.

Skills & Knowledge:
Behavioral Skills: critical thinking, detail orientation, interpersonal communication, learning agility, problem solving, time management.
Technical Skills: API design, cloud computing methodologies, integration testing and validation, programming/coding, database management, software development life cycle (SDLC), technical documentation, web application infrastructure, web development frameworks.
Tools Knowledge: cloud computing tools like AWS, Azure, and Google Cloud; container management and orchestration tools; big data frameworks like Hadoop; Java frameworks like JDBC, Spring, ORM solutions, JPA, JEE, JMS, Gradle, and object-oriented design; Microsoft Office Suite; NoSQL database platforms like MongoDB, BigTable, Redis, RavenDB, Cassandra, HBase, Neo4j, and CouchDB; programming languages like JavaScript, HTML/CSS, Python, and SQL; operating systems and servers like Windows, Linux, Citrix, IBM, Oracle, and SQL.

What Cencora Offers: Benefit offerings outside the US may vary by country and will be aligned to local market practice. The eligibility and effective date may differ for some benefits and for team members covered under collective bargaining agreements. Full time.

Affiliated Companies: CENCORA BUSINESS SERVICES INDIA PRIVATE LIMITED.

Equal Employment Opportunity: Cencora is committed to providing equal employment opportunity without regard to race, color, religion, sex, sexual orientation, gender identity, genetic information, national origin, age, disability, veteran status, or membership in any other class protected by federal, state, or local law. The company's continued success depends on the full and effective utilization of qualified individuals. Therefore, harassment is prohibited, and all matters related to recruiting, training, compensation, benefits, promotions, and transfers comply with equal opportunity principles and are non-discriminatory. Cencora is committed to providing reasonable accommodations to individuals with disabilities during the employment process, consistent with legal requirements. If you wish to request an accommodation while seeking employment, please call 888.692.2272 or email hrsc@cencora.com. We will make accommodation determinations on a request-by-request basis. Messages and emails regarding anything other than accommodation requests will not be returned.

Posted 6 hours ago

Apply

4.0 years

6 - 9 Lacs

Pune

On-site

GlassDoor logo

Every day, Global Payments makes it possible for millions of people to move money between buyers and sellers using our payments solutions for credit, debit, prepaid and merchant services. Our worldwide team helps over 3 million companies, more than 1,300 financial institutions and over 600 million cardholders grow with confidence and achieve amazing results. We are driven by our passion for success and we are proud to deliver best-in-class payment technology and software solutions. Join our dynamic team and make your mark on the payments technology landscape of tomorrow. Summary of This Role Works throughout the software development life cycle and performs in a utility capacity to create, design, code, debug, maintain, test, implement and validate applications with a broad understanding of a variety of languages and architectures. Analyzes existing applications or formulate logic for new applications, procedures, flowcharting, coding and debugging programs. Maintains and utilizes application and programming documents in the development of code. Recommends changes in development, maintenance and system standards. Creates appropriate deliverables and develops application implementation plans throughout the life cycle in a flexible development environment. What Part Will You Play? Develops basic to moderately complex code using front and / or back end programming languages within multiple platforms as needed in collaboration with business and technology teams for internal and external client software solutions. Designs, creates, and delivers routine to moderately complex program specifications for code development and support on multiple projects/issues with a wide understanding of the application / database to better align interactions and technologies. Analyzes, modifies, and develops moderately complex code/unit testing in order to develop concise application documentation. Performs testing and validation requirements for moderately complex code changes. Performs corrective measures for moderately complex code deficiencies and escalates alternative proposals. Participates in client facing meetings, joint venture discussions, vendor partnership teams to determine solution approaches. Provides support to leadership for the design, development and enforcement of business / infrastructure application standards to include associated controls, procedures and monitoring to ensure compliance and accuracy of data. Applies a full understanding of procedures, methodology and application standards to include Payment Card Industry (PCI) security compliance. Conducts and provides basic billable hours and resource estimates on initiatives, projects and issues. Assists with on-the-job training and provides guidance to other software engineers. What Are We Looking For in This Role? Minimum Qualifications BS in Computer Science, Information Technology, Business / Management Information Systems or related field Typically minimum of 4 years - Professional Experience In Coding, Designing, Developing And Analyzing Data. Typically has an advanced knowledge and use of one or more front / back end languages / technologies and a moderate understanding of the other corresponding end language / technology from the following but not limited to; two or more modern programming languages used in the enterprise, experience working with various APIs, external Services, experience with both relational and NoSQL Databases. 
Preferred Qualifications

BS in Computer Science, Information Technology, Business/Management Information Systems or related field.
6+ years of professional experience in coding, designing, developing and analyzing data, and experience with IBM Rational Tools.

What Are Our Desired Skills and Capabilities?

Skills/Knowledge - A seasoned, experienced professional with a full understanding of the area of specialization; resolves a wide range of issues in creative ways. This job is the fully qualified, career-oriented, journey-level position.
Job Complexity - Works on problems of diverse scope where analysis of data requires evaluation of identifiable factors. Demonstrates good judgment in selecting methods and techniques for obtaining solutions. Networks with senior internal and external personnel in own area of expertise.
Supervision - Normally receives little instruction on day-to-day work and general instructions on new assignments.
Operating Systems - Linux distributions (including one or more of Ubuntu, CentOS/RHEL, Amazon Linux), Microsoft Windows, z/OS, Tandem/HP NonStop.
Database - Design and familiarity with DDL and DML for one or more of the following databases: Oracle, MySQL, MS SQL Server, IMS, DB2, Hadoop.
Back-end technologies - Java, Python, .NET, Ruby, Mainframe COBOL, Mainframe Assembler.
Front-end technologies - HTML, JavaScript, jQuery, CICS.
Web Frameworks - Node.js, React.js, Angular, Redux.
Development Tools - Eclipse, Visual Studio, Webpack, Babel, Gulp.
Mobile Development - iOS, Android.
Machine Learning - Python, R, MATLAB, TensorFlow, DMTK.

Global Payments Inc. is an equal opportunity employer. Global Payments provides equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, sex (including pregnancy), national origin, ancestry, age, marital status, sexual orientation, gender identity or expression, disability, veteran status, genetic information or any other basis protected by law. If you wish to request reasonable accommodations related to applying for employment or provide feedback about the accessibility of this website, please contact jobs@globalpay.com.

Posted 6 hours ago

Apply

0 years

4 - 9 Lacs

Chennai

On-site

GlassDoor logo

Make an impact with NTT DATA

Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion - it's a place where you can grow, belong and thrive.

Your day at NTT DATA

The Data Engineer is a seasoned subject matter expert, responsible for the transformation of data into a structured format that can be easily analyzed in a query or report. This role develops structured data sets that can be reused or complemented by other data sets and reports, analyzes the data sources and data structures, and designs and develops data models to support the analytics requirements of the business, which include management, operational, predictive and data science capabilities.

Key responsibilities:

Creates data models in a structured data format to enable analysis thereof.
Designs and develops scalable extract, transformation and loading (ETL) packages from the business source systems, and develops ETL routines to populate data from sources.
Participates in the transformation of object and data models into appropriate database schemas within design constraints.
Interprets installation standards to meet project needs and produces database components as required.
Creates test scenarios and is responsible for participating in thorough testing and validation to support the accuracy of data transformations.
Accountable for running data migrations across different databases and applications, for example MS Dynamics, Oracle, SAP and other ERP systems.
Works across multiple IT and business teams to define and implement data table structures and data models based on requirements.
Accountable for analysis and development of ETL and migration documentation.
Collaborates with various stakeholders to evaluate potential data requirements.
Accountable for the definition and management of scoping, requirements definition and prioritization activities for small-scale changes, and assists with more complex change initiatives.
Collaborates with various stakeholders, contributing to the recommendation of improvements in automated and non-automated components of the data tables, data queries and data models.

To thrive in this role, you need to have:

Seasoned knowledge of the definition and management of scoping, requirements definition and prioritization activities.
Seasoned understanding of database concepts, object and data modelling techniques and design principles, and conceptual knowledge of building and maintaining physical and logical data models.
Seasoned expertise in Microsoft Azure Data Factory, SQL Analysis Server, SAP Data Services and SAP BTP.
Seasoned understanding of the data architecture landscape between physical and logical data models.
Analytical mindset with excellent business acumen skills.
Problem-solving aptitude with the ability to communicate effectively, both written and verbal.
Ability to build effective relationships at all levels within the organization.
Seasoned expertise in programming languages (Perl, bash, shell scripting, Python, etc.).

Academic qualifications and certifications:

Bachelor's degree or equivalent in computer science, software engineering, information technology, or a related field.
Relevant certifications preferred, such as SAP, Microsoft Azure, etc. Certified Data Engineer or Certified Professional certification preferred.
Required experience:

Seasoned experience in data engineering and data mining within a fast-paced environment.
Proficient in building modern data analytics solutions that deliver insights from large and complex data sets at multi-terabyte scale.
Seasoned experience with architecture and design of secure, highly available and scalable systems.
Seasoned proficiency in automation and scripting, with proven examples of successful implementation.
Seasoned proficiency using scripting languages (Perl, bash, shell scripting, Python, etc.).
Seasoned experience with big data tools like Hadoop, Cassandra, Storm, etc.
Seasoned experience in any applicable language, preferably .NET.
Seasoned proficiency in working with SAP, SQL, MySQL databases and Microsoft SQL.
Seasoned experience working with data sets and ordering data through MS Excel functions, e.g. macros and pivots.

Workplace type: Hybrid Working

About NTT DATA

NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group and headquartered in Tokyo.

Equal Opportunity Employer

NTT DATA is proud to be an Equal Opportunity Employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, colour, gender, sexual orientation, religion, nationality, disability, pregnancy, marital status, veteran status, or any other protected category. Join our growing global team and accelerate your career with us. Apply today.

Posted 6 hours ago

Apply

5.0 years

7 - 10 Lacs

Chennai

On-site

GlassDoor logo

The Applications Development Intermediate Programmer Analyst is an intermediate-level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Responsibilities:

Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas, to identify and define necessary system enhancements, including using script tools and analyzing/interpreting code.
Consult with users, clients, and other technology groups on issues; recommend programming solutions; and install and support customer exposure systems.
Apply fundamental knowledge of programming languages for design specifications.
Analyze applications to identify vulnerabilities and security issues, and conduct testing and debugging.
Serve as advisor or coach to new or lower-level analysts.
Identify problems, analyze information, and make evaluative judgements to recommend and implement solutions.
Resolve issues by identifying and selecting solutions through the application of acquired technical experience, guided by precedents.
Operate with a limited level of direct supervision, exercising independence of judgement and autonomy.
Act as SME to senior stakeholders and/or other team members.
Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.

Person Specification

Knowledge/Experience:

5 to 8 years of experience in software development.
Expertise in building ETL applications using Ab Initio.
Good knowledge of RDBMS (Oracle), with the ability to write the complex SQL needed to investigate and analyze data issues.
3 years of experience in the big data ecosystem (Hadoop, Spark and Hive) will be a strong plus.
Good in UNIX shell scripting.
Hands-on experience with Autosys/Control Centre scheduling tools.
Proven 5+ years of experience working with complex data warehouses.
Strong influencing and interpersonal skills.
Willing to work flexible hours.

Skills:

Strong design and execution bent of mind.
Strong work ethic, good interpersonal and communication skills, and a high energy level.
Analytical thinker and quick learner, capable of organizing and structuring information effectively.
Ability to prioritize and manage schedules under tight, fixed deadlines.
Excellent written and verbal communication skills.
Ability to build relationships at all levels.
Ability to independently work with vendors in resolving issues and developing solutions.

Qualifications:

Bachelor of Science or Master's degree in Computer Science, Engineering or a related discipline.

Competencies:

Strong work organization and prioritization capabilities.
Takes ownership and accountability for assigned work.
Ability to manage multiple activities.
Focused and determined in getting the job done right.
Ability to identify and manage key risks and issues.
Shows drive, integrity, sound judgment, adaptability, creativity, self-awareness and an ability to multitask and prioritize.
Good change management discipline.

Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time

Most Relevant Skills
Please see the requirements listed above.

Other Relevant Skills
For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.

If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.

Posted 6 hours ago

Apply

5.0 years

15 - 24 Lacs

Bengaluru

On-site

GlassDoor logo

Job Title: Senior Data Engineer – Azure | ADF | Databricks | PySpark | AWS
Location: Bangalore, Hyderabad, Chennai (Hybrid Mode)
Experience Required: 5+ Years
Notice Period: Immediate

Job Description

We are looking for a Senior Data Engineer who is passionate about designing and developing scalable data pipelines, optimizing data architecture, and working with advanced big data tools and cloud platforms. This is a great opportunity to be a key player in transforming data into meaningful insights by leveraging modern data engineering practices on Azure, AWS, and Databricks.

You will be working with cross-functional teams including data scientists, analysts, and software engineers to deliver robust data solutions. The ideal candidate will be technically strong in Azure Data Factory, PySpark, Databricks, and AWS services, and will have experience in building end-to-end ETL workflows and driving business impact through data.

Key Responsibilities

Design, build, and maintain scalable and reliable data pipelines and ETL workflows.
Implement data ingestion and transformation using Azure Data Factory (ADF) and Azure Databricks (PySpark).
Work across multiple data platforms including Azure, AWS, Snowflake, and Redshift.
Collaborate with data scientists and business teams to understand data needs and deliver solutions.
Optimize data storage, processing, and retrieval for performance and cost-effectiveness.
Develop data quality checks and monitoring frameworks for pipeline health.
Ensure data governance, security, and compliance with industry standards.
Lead code reviews, set data engineering standards, and mentor junior team members.
Propose and evaluate new tools and technologies for continuous improvement.

Must-Have Skills

Strong programming skills in Python, SQL, or Scala.
Azure Data Factory, Azure Databricks, Synapse Analytics.
Hands-on with PySpark, Spark, Hadoop, Hive.
Experience with cloud platforms (Azure preferred; AWS/GCP acceptable).
Data warehousing: Snowflake, Redshift, BigQuery.
Strong ETL/ELT pipeline development experience.
Workflow orchestration tools such as Airflow, Prefect, or Luigi.
Excellent problem-solving, debugging, and communication skills.

Nice to Have

Experience with real-time streaming tools (Kafka, Flink, Spark Streaming).
Exposure to data governance tools and regulations (GDPR, HIPAA).
Familiarity with ML model integration into data pipelines.
Containerization and CI/CD exposure: Docker, Git, Kubernetes (basic).
Experience with vector databases and unstructured data handling.

Technical Environment

Programming: Python, Scala, SQL
Big Data Tools: Spark, Hadoop, Hive
Cloud Platforms: Azure (ADF, Databricks, Synapse), AWS (S3, Glue, Lambda), GCP
Data Warehousing: Snowflake, Redshift, BigQuery
Databases: PostgreSQL, MySQL, MongoDB, Cassandra
Orchestration: Apache Airflow, Prefect, Luigi
Tools: Git, Docker, Azure DevOps, CI/CD pipelines

Soft Skills

Strong analytical thinking and problem-solving abilities.
Excellent verbal and written communication.
Collaborative team player with leadership qualities.
Self-motivated, organized, and able to manage multiple projects.

Education & Certifications

Bachelor's or Master's degree in Computer Science, IT, Engineering, or equivalent.
Cloud certifications (e.g., Microsoft Azure Data Engineer, AWS Big Data) are a plus.

Key Result Areas (KRAs)

Timely delivery of high-performance data pipelines.
Quality of data integration and governance compliance.
Business team satisfaction and data readiness.
Proactive optimization of data processing workloads.

Key Performance Indicators (KPIs)

Pipeline uptime and performance metrics.
Reduction in overall data latency.
Zero critical issues in production post-release.
Stakeholder satisfaction score.
Number of successful integrations and migrations.

Job Types: Full-time, Permanent
Pay: ₹1,559,694.89 - ₹2,441,151.11 per year
Benefits: Provident Fund
Schedule: Day shift
Supplemental Pay: Performance bonus

Application Question(s): What is your notice period in days?
Experience: Azure Data Factory, Azure Databricks, Synapse Analytics: 5 years (Required); Python, SQL, or Scala: 5 years (Required)
Work Location: In person

Posted 6 hours ago

Apply

0 years

0 Lacs

Bengaluru

On-site

GlassDoor logo

Ready to shape the future of work?

At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Principal Consultant - Power Platforms Application Engineer!

We are seeking an experienced and highly skilled Power Platforms Application Engineer to join our team. In this role, you will be responsible for developing, implementing, and optimizing Power Platform applications for various projects in Genpact, ensuring seamless integration, user experience, and overall performance.

Responsibilities

Develop applications using Microsoft Power Platforms to meet business needs.
Customize and extend Power Apps, Power Automate, and Power BI solutions.
Integrate applications with other Microsoft services and third-party systems.
Utilize Microsoft Power Virtual Agent (Copilot Studio) to create intelligent chatbots and virtual agents.
Collaborate with team members to ensure seamless delivery of solutions.
Collaborate with cross-functional teams, including data scientists, machine learning engineers, and product managers, to design and develop applications and custom software tailored to specific business needs.
Apply full-stack development skills, including front-end and back-end technologies, to design, develop, and optimize applications and custom software, ensuring seamless integration of machine learning models, user interfaces, and backend systems.
Implement best practices for software development, including version control, testing, and continuous integration/continuous deployment (CI/CD).
Develop and maintain technical documentation, including application design specifications, user guides, and support materials.

Required experience

Experience in developing with Microsoft Power Platforms.
Strong understanding of application lifecycle management in Power Apps.
Proficiency in scripting languages like JavaScript.
Familiarity with integrating Power Platforms with Azure services.
Experience with Microsoft Power Virtual Agent (Copilot Studio).
Excellent communication and teamwork skills.

Qualifications we seek in you:

Minimum qualifications

Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Excellent problem-solving, analytical, and critical thinking skills.
Strong communication and collaboration skills, with the ability to work effectively with cross-functional teams.

Preferred qualifications/skills

Knowledge of cloud computing platforms, such as AWS and Google Cloud Platform, is a plus.
Experience with big data technologies, such as Hadoop, Hive, or Presto, is a plus.
Familiarity with machine learning frameworks, such as TensorFlow or PyTorch, is a plus.
Strong analytical and problem-solving skills, as well as excellent communication and collaboration abilities.

Why join Genpact?

Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation.
Make an impact - drive change for global enterprises and solve business challenges that matter.
Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities.
Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation.

Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job: Principal Consultant
Primary Location: India-Bangalore
Schedule: Full-time
Education Level: Bachelor's / Graduation / Equivalent
Job Posting: Jun 22, 2025, 11:52:48 PM
Unposting Date: Ongoing
Master Skills List: Digital
Job Category: Full Time

Posted 6 hours ago

Apply

Exploring Hadoop Jobs in India

The demand for Hadoop professionals in India has been on the rise in recent years, with many companies leveraging big data technologies to drive business decisions. As a job seeker exploring opportunities in the Hadoop field, you should understand the job market, salary expectations, career progression, related skills, and common interview questions.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Pune
  4. Hyderabad
  5. Chennai

These cities are known for their thriving IT industry and have a high demand for Hadoop professionals.

Average Salary Range

The average salary range for Hadoop professionals in India varies based on experience levels. Entry-level Hadoop developers can expect to earn between INR 4-6 lakhs per annum, while experienced professionals with specialized skills can earn upwards of INR 15 lakhs per annum.

Career Path

In the Hadoop field, a typical career path progresses from Junior Developer to Senior Developer to Tech Lead, and eventually to roles such as Data Architect or Big Data Engineer.

Related Skills

In addition to Hadoop expertise, professionals in this field are often expected to have knowledge of related technologies such as Apache Spark, HBase, Hive, and Pig. Strong programming skills in languages like Java, Python, or Scala are also beneficial.
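
To make these skills concrete, here is a minimal, illustrative PySpark sketch of a task such roles commonly involve: reading raw data from HDFS, aggregating it, and writing the result as a partitioned Hive table. All paths, table names, and column names below are hypothetical placeholders, not taken from any specific posting.

# Minimal, illustrative PySpark job; paths and names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("hadoop-skills-demo")
    .enableHiveSupport()  # allows reading and writing Hive tables
    .getOrCreate()
)

# Read raw CSV event data from HDFS.
events = spark.read.csv("hdfs:///data/raw/events", header=True, inferSchema=True)

# Aggregate: number of events per user per day.
daily_counts = (
    events.groupBy("user_id", "event_date")
          .agg(F.count("*").alias("event_count"))
)

# Write as a Hive table partitioned by date, so queries that filter on
# event_date scan only the matching partitions.
(
    daily_counts.write
    .mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("analytics.daily_event_counts")
)

spark.stop()

Partitioning the output this way is the same idea probed by the Hive partitioning question in the interview list below.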

Interview Questions

  • What is Hadoop and how does it work? (basic)
  • Explain the difference between HDFS and MapReduce. (medium)
  • How do you handle data skew in Hadoop? (medium; see the salting sketch after this list)
  • What is YARN in Hadoop? (basic)
  • Describe the concept of NameNode and DataNode in HDFS. (medium)
  • What are the different types of join operations in Hive? (medium)
  • Explain the role of the ResourceManager in YARN. (medium)
  • What is the significance of the shuffle phase in MapReduce? (medium; see the word-count sketch after this list)
  • How does speculative execution work in Hadoop? (advanced)
  • What is the purpose of the Secondary NameNode in HDFS? (medium)
  • How do you optimize a MapReduce job in Hadoop? (medium)
  • Explain the concept of data locality in Hadoop. (basic)
  • What are the differences between Hadoop 1 and Hadoop 2? (medium)
  • How do you troubleshoot performance issues in a Hadoop cluster? (advanced)
  • Describe the advantages of using HBase over traditional RDBMS. (medium)
  • What is the role of the JobTracker in Hadoop? (medium)
  • How do you handle unstructured data in Hadoop? (medium)
  • Explain the concept of partitioning in Hive. (medium)
  • What is Apache ZooKeeper and how is it used in Hadoop? (advanced)
  • Describe the process of data serialization and deserialization in Hadoop. (medium)
  • How do you secure a Hadoop cluster? (advanced)
  • What is the CAP theorem and how does it relate to distributed systems like Hadoop? (advanced)
  • How do you monitor the health of a Hadoop cluster? (medium)
  • Explain the differences between Hadoop and traditional relational databases. (medium)
  • How do you handle data ingestion in Hadoop? (medium)
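
Several of the questions above, such as MapReduce basics and the shuffle phase, are easiest to answer through the classic word-count example. Below is a minimal sketch using Hadoop Streaming, which lets you write the mapper and reducer as plain Python scripts over stdin/stdout; between the two phases the framework groups and sorts mapper output by key, which is exactly the shuffle being asked about. This is an illustration, not production code.

#!/usr/bin/env python3
# mapper.py - emits one "word<TAB>1" line per word on stdin.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print(f"{word}\t1")

#!/usr/bin/env python3
# reducer.py - input arrives grouped and sorted by key (the shuffle/sort
# phase), so all counts for a word are adjacent and sum in one pass.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word != current_word:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, 0
    current_count += int(count)

if current_word is not None:
    print(f"{current_word}\t{current_count}")

A typical run submits both scripts via the hadoop-streaming JAR shipped with the distribution (the exact JAR path varies), and because this reducer is associative it can also be registered as the combiner, pre-aggregating on the map side to cut shuffle traffic, which is one standard answer to the job-optimization question above.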
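
For the data-skew question, a widely used answer is key salting: append a random suffix so a hot key is spread across several tasks, aggregate on the salted key, then drop the salt and aggregate again. A minimal PySpark sketch follows; the path and column names are hypothetical.

# Illustrative key salting for a skewed aggregation in PySpark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("skew-salting-demo").getOrCreate()

df = spark.read.parquet("hdfs:///data/transactions")  # hypothetical path

NUM_SALTS = 16  # spread each hot key across up to 16 tasks

# Phase 1: aggregate on (key, salt) so a hot key becomes NUM_SALTS
# partial aggregates instead of overloading a single task.
partial = (
    df.withColumn("salt", (F.rand() * NUM_SALTS).cast("int"))
      .groupBy("user_id", "salt")
      .agg(F.sum("amount").alias("partial_sum"))
)

# Phase 2: drop the salt and combine the partials per key.
totals = (
    partial.groupBy("user_id")
           .agg(F.sum("partial_sum").alias("total_amount"))
)

totals.show()
spark.stop()

The same two-phase idea extends to skewed joins, where only the hot keys are salted and the smaller side is replicated across the salt values.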

Closing Remark

As you navigate the Hadoop job market in India, remember to stay updated on the latest trends and technologies in the field. By honing your skills and preparing diligently for interviews, you can position yourself as a strong candidate for lucrative opportunities in the big data industry. Good luck on your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
