3.0 - 7.0 years
0 Lacs
Kochi, Kerala
On-site
The ideal candidate, ready to join immediately, can share their details via email for quick processing at nitin.patil@ust.com. Act swiftly for immediate attention!

With over 5 years of experience, the successful candidate will have the following roles and responsibilities:
- Designing, developing, and maintaining scalable data pipelines using Spark (PySpark or Spark with Scala); see the sketch below.
- Constructing data ingestion and transformation frameworks for both structured and unstructured data sources.
- Collaborating with data analysts, data scientists, and business stakeholders to comprehend requirements and deliver reliable data solutions.
- Handling large volumes of data while ensuring quality, integrity, and consistency.
- Optimizing data workflows for enhanced performance, scalability, and cost efficiency on cloud platforms such as AWS, Azure, or GCP.
- Implementing data quality checks and automation for ETL/ELT pipelines.
- Monitoring and troubleshooting data issues in production environments and conducting root cause analysis.
- Documenting technical processes, system designs, and operational procedures.

Key Skills Required:
- Minimum 3 years of experience as a Data Engineer or in a similar role.
- Proficiency with PySpark or Spark using Scala.
- Strong grasp of SQL for data querying and transformation purposes.
- Previous experience working with any cloud platform (AWS, Azure, or GCP).
- Sound understanding of data warehousing concepts and big data architecture.
- Familiarity with version control systems like Git.

Desired Skills:
- Exposure to data orchestration tools such as Apache Airflow, Databricks Workflows, or equivalent.
- Knowledge of Delta Lake, HDFS, or Kafka.
- Familiarity with containerization tools like Docker/Kubernetes.
- Experience with CI/CD practices and familiarity with DevOps principles.
- Understanding of data governance, security, and compliance standards.
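For illustration only, here is a minimal PySpark sketch of the kind of batch ingestion-and-transformation pipeline described in the listing above. The storage paths, column names, and join keys are hypothetical placeholders, not details taken from the role.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

# Ingest a structured source (CSV) and a semi-structured source (JSON); paths are hypothetical.
orders = spark.read.option("header", True).csv("s3://raw-bucket/orders/")
events = spark.read.json("s3://raw-bucket/clickstream/")

# Basic transformation: type casting, deduplication, and a derived timestamp column.
clean_orders = (
    orders
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropDuplicates(["order_id"])
)

# Join, aggregate for downstream analytics, and persist in a partitioned columnar format.
daily_revenue = (
    clean_orders.join(events, "customer_id", "left")
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"))
)
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet("s3://curated-bucket/daily_revenue/")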
Posted 2 days ago
8.0 - 12.0 years
0 Lacs
Pune, Maharashtra
On-site
We are seeking an experienced and forward-thinking Lead Data Engineer to spearhead the development of scalable, secure, and high-performance data solutions. You must possess in-depth technical knowledge of Python, Apache Spark, Delta Lake, and orchestration tools like Databricks Workflows or Azure Data Factory. Your expertise should also include a solid understanding of data governance, metadata management, and regulatory compliance within the insurance and financial services sectors. Proficiency in developing Python applications, Spark-based workflows, utilizing Delta Lake, and orchestrating jobs with Databricks Workflows or Azure Data Factory is essential. Furthermore, you should be able to incorporate retention metadata, business rules, and data governance policies into reusable pipelines (see the sketch below) and have a strong grasp of data privacy, security, and regulatory requirements in insurance and finance.

As the Lead Data Engineer, your responsibilities will include designing and architecting end-to-end data engineering solutions across cloud platforms, creating and managing robust data pipelines and ETL workflows using Python and Apache Spark, and implementing scalable Delta Lake solutions for structured and semi-structured data. You will also be tasked with orchestrating complex workflows using Databricks Workflows or Azure Data Factory, translating business rules and data governance policies into modular pipeline components, ensuring compliance with data privacy and security standards, and mentoring junior data engineers to promote coding best practices, testing, and deployment efficiency. Collaboration with cross-functional teams such as data architects, analysts, and business stakeholders to align data solutions with business objectives is crucial, as is driving performance optimization, cost efficiency, and innovation in data engineering practices.

Key Qualifications:
- Minimum of 8 years of experience in data engineering, with at least 2 years in a lead or architect position.
- Expertise in Python, Apache Spark, and Delta Lake at an advanced level.
- Strong familiarity with Databricks Workflows and/or Azure Data Factory.
- Deep comprehension of data governance, metadata management, and integration of business rules.
- Proven track record of implementing data privacy, security, and regulatory compliance in insurance or financial domains.
- Strong leadership, communication, and stakeholder management skills.
- Experience working with cloud platforms such as Azure, AWS, or GCP.

Preferred Qualifications:
- Background in CI/CD pipelines and DevOps practices within data engineering.
- Knowledge of data cataloging and data quality tools.
- Certifications in Azure Data Engineering or related technologies.
- Exposure to enterprise data architecture and modern data stack tools.
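As a hedged illustration of folding retention metadata and business rules into a reusable Delta Lake pipeline step (one of the responsibilities above), the sketch below uses hypothetical table names and a hypothetical retention rule; it assumes a Databricks or Delta-enabled Spark environment, not this employer's actual codebase.

from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("policy_upsert").getOrCreate()

# Hypothetical governance parameters a reusable pipeline component might accept.
RETENTION_YEARS = 7                              # e.g. a regulatory retention rule for insurance records
TARGET = "prod.claims.policy_snapshots"          # hypothetical governed Delta table

updates = spark.read.table("staging.claims.policy_updates")   # hypothetical staging table

# Upsert the latest records into the governed Delta table.
(DeltaTable.forName(spark, TARGET)
    .alias("t")
    .merge(updates.alias("s"), "t.policy_id = s.policy_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())

# Apply the retention rule as configuration rather than as an ad hoc script.
spark.sql(f"""
    DELETE FROM {TARGET}
    WHERE effective_date < date_sub(current_date(), {RETENTION_YEARS} * 365)
""")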
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Data Engineer, you will be responsible for designing, developing, and maintaining scalable data pipelines using Spark, specifically PySpark or Spark with Scala. Your role will involve building data ingestion and transformation frameworks for various structured and unstructured data sources. Collaboration with data analysts, data scientists, and business stakeholders is essential to understand requirements and deliver reliable data solutions.

Working with large volumes of data, you will ensure quality, integrity, and consistency while optimizing data workflows for performance, scalability, and cost efficiency on cloud platforms such as AWS, Azure, or GCP. Implementation of data quality checks and automation for ETL/ELT pipelines is a critical aspect of this role (see the sketch below). Monitoring and troubleshooting data issues in production, along with performing root cause analysis, will be part of your responsibilities. Additionally, documenting technical processes, system designs, and operational procedures will be necessary.

The ideal candidate for this position should have at least 3 years of experience as a Data Engineer or in a similar role. Hands-on experience with PySpark or Spark using Scala is required, along with strong knowledge of SQL for data querying and transformation. Experience working with any cloud platform (AWS, Azure, or GCP) and a solid understanding of data warehousing concepts and big data architecture are essential. Familiarity with version control systems like Git is also a must-have skill.

In addition to the must-have skills, it would be beneficial to have experience with data orchestration tools like Apache Airflow, Databricks Workflows, or similar. Knowledge of Delta Lake, HDFS, or Kafka, familiarity with containerization tools (Docker/Kubernetes), exposure to CI/CD practices and DevOps principles, and an understanding of data governance, security, and compliance standards are considered good-to-have skills.

If you meet the above requirements and are ready to join immediately, please share your details via email to nitin.patil@ust.com for quick processing. Act fast for immediate attention!
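The sketch below illustrates one possible way to implement the data quality checks mentioned above as a reusable PySpark helper that fails an ETL job early. The function name, keys, and columns are hypothetical examples, not requirements of the role.

from pyspark.sql import DataFrame, functions as F

def run_quality_checks(df: DataFrame, key_cols: list, not_null_cols: list) -> None:
    """Fail the pipeline early if basic quality expectations are violated."""
    # No duplicate business keys allowed.
    dup_count = df.groupBy(*key_cols).count().filter("count > 1").count()
    if dup_count > 0:
        raise ValueError(f"{dup_count} duplicate keys found for {key_cols}")

    # Mandatory columns must not contain nulls.
    for col in not_null_cols:
        nulls = df.filter(F.col(col).isNull()).count()
        if nulls > 0:
            raise ValueError(f"Column {col} has {nulls} null values")

# Example usage inside an ETL job (table and column names are hypothetical):
# run_quality_checks(customers_df, key_cols=["customer_id"], not_null_cols=["email", "country"])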
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
Kolkata, West Bengal
On-site
Candidates who are ready to join immediately can share their details via email for quick processing to nitin.patil@ust.com. Act fast for immediate attention!

With over 5 years of experience, the ideal candidate will be responsible for designing, developing, and maintaining scalable data pipelines using Spark, either PySpark or Spark with Scala. They will also be tasked with building data ingestion and transformation frameworks for structured and unstructured data sources. Collaboration with data analysts, data scientists, and business stakeholders to understand requirements and deliver reliable data solutions is a key aspect of the role. The candidate will work with large volumes of data to ensure quality, integrity, and consistency, optimizing data workflows for performance, scalability, and cost efficiency on cloud platforms such as AWS, Azure, or GCP. Implementation of data quality checks and automation for ETL/ELT pipelines, as well as monitoring and troubleshooting data issues in production, are also part of the responsibilities. Documentation of technical processes, system designs, and operational procedures will be essential.

Must-Have Skills:
- At least 3 years of experience as a Data Engineer or in a similar role.
- Hands-on experience with PySpark or Spark using Scala.
- Strong knowledge of SQL for data querying and transformation.
- Experience working with any cloud platform (AWS, Azure, or GCP).
- Solid understanding of data warehousing concepts and big data architecture.
- Experience with version control systems like Git.

Good-to-Have Skills:
- Experience with data orchestration tools like Apache Airflow, Databricks Workflows, or similar (see the sketch below).
- Knowledge of Delta Lake, HDFS, or Kafka.
- Familiarity with containerization tools (Docker/Kubernetes).
- Exposure to CI/CD practices and DevOps principles.
- Understanding of data governance, security, and compliance standards.
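As a minimal sketch of the orchestration skill listed under good-to-have, the Airflow DAG below chains three hypothetical spark-submit steps. The DAG id, schedule, and script paths are placeholders, and it assumes an Airflow 2.x deployment rather than any tooling specific to this employer.

from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical job names and script paths; only the structure matters here.
with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",   # run nightly at 02:00
    catchup=False,
) as dag:
    ingest = BashOperator(task_id="ingest", bash_command="spark-submit /jobs/ingest.py")
    transform = BashOperator(task_id="transform", bash_command="spark-submit /jobs/transform.py")
    publish = BashOperator(task_id="publish", bash_command="spark-submit /jobs/publish.py")

    # Linear dependency chain: ingest, then transform, then publish.
    ingest >> transform >> publish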
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
Thiruvananthapuram, Kerala
On-site
The ideal candidate, ready to join immediately, can share their details via email for quick processing at nitin.patil@ust.com. Act fast for immediate attention!

With over 5 years of experience, you will be responsible for designing, developing, and maintaining scalable data pipelines using Spark (PySpark or Spark with Scala). You will also build data ingestion and transformation frameworks for structured and unstructured data sources. Collaboration with data analysts, data scientists, and business stakeholders to understand requirements and deliver reliable data solutions will be a key aspect of the role. Working with large volumes of data, ensuring quality, integrity, and consistency, and optimizing data workflows for performance, scalability, and cost efficiency on cloud platforms (AWS, Azure, or GCP) are essential responsibilities. Additionally, implementing data quality checks and automation for ETL/ELT pipelines, monitoring and troubleshooting data issues in production, and performing root cause analysis will be part of your duties. You will also be expected to document technical processes, system designs, and operational procedures.

Must-Have Skills:
- Minimum 3 years of experience as a Data Engineer or in a similar role.
- Hands-on experience with PySpark or Spark using Scala.
- Strong knowledge of SQL for data querying and transformation.
- Experience working with any cloud platform (AWS, Azure, or GCP).
- Solid understanding of data warehousing concepts and big data architecture.
- Familiarity with version control systems like Git.

Good-to-Have Skills:
- Experience with data orchestration tools such as Apache Airflow, Databricks Workflows, or similar.
- Knowledge of Delta Lake, HDFS, or Kafka.
- Familiarity with containerization tools like Docker/Kubernetes.
- Exposure to CI/CD practices and DevOps principles.
- Understanding of data governance, security, and compliance standards.
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
The ideal candidate for this position should have at least 5 years of experience and must be ready to join immediately.

In this role, you will be responsible for designing, developing, and maintaining scalable data pipelines using Spark, specifically PySpark or Spark with Scala. You will also be tasked with building data ingestion and transformation frameworks for structured and unstructured data sources. Collaboration with data analysts, data scientists, and business stakeholders to understand requirements and deliver reliable data solutions is a key aspect of this role. Working with large volumes of data to ensure quality, integrity, and consistency is crucial. Additionally, optimizing data workflows for performance, scalability, and cost efficiency on cloud platforms such as AWS, Azure, or GCP is a significant part of the responsibilities (see the sketch below). Implementing data quality checks and automation for ETL/ELT pipelines, monitoring and troubleshooting data issues in production, and performing root cause analysis are also essential tasks. Documentation of technical processes, system designs, and operational procedures is expected.

The must-have skills for this role include at least 3 years of experience as a Data Engineer or in a similar role, hands-on experience with PySpark or Spark using Scala, strong knowledge of SQL for data querying and transformation, experience working with any cloud platform (AWS, Azure, or GCP), a solid understanding of data warehousing concepts and big data architecture, and experience with version control systems like Git.

Good-to-have skills for this position include experience with data orchestration tools like Apache Airflow, Databricks Workflows, or similar, knowledge of Delta Lake, HDFS, or Kafka, familiarity with containerization tools such as Docker/Kubernetes, exposure to CI/CD practices and DevOps principles, and an understanding of data governance, security, and compliance standards.

If you meet the qualifications and are interested in this exciting opportunity, please share your details via email at nitin.patil@ust.com for quick processing. Act fast for immediate attention!
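To make the performance and cost-efficiency point above concrete, here is a small, hypothetical PySpark sketch that relies on column pruning, partition filtering, and partitioned writes; the paths, columns, and date threshold are placeholders rather than details from the posting.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("optimize_workflow").getOrCreate()

# Read only the columns and partitions the job needs (column pruning + partition pruning).
txns = (spark.read.parquet("s3://lake/transactions/")            # hypothetical path
        .select("txn_id", "account_id", "amount", "txn_date")
        .filter(F.col("txn_date") >= "2024-01-01"))

# Repartition by the write key to avoid many small files, then write partitioned output
# so downstream queries can prune by date instead of scanning the full dataset.
(txns.repartition("txn_date")
     .write.mode("overwrite")
     .partitionBy("txn_date")
     .parquet("s3://lake/curated/transactions/"))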
Posted 6 days ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Sr. Data Analytics Engineer at Ajmera Infotech Private Limited (AIPL) in Bengaluru/Bangalore, you will play a crucial role in building planet-scale software for NYSE-listed clients, enabling decisions that are critical and must not fail. With 5-9 years of experience, you will join a 120-engineer team specializing in highly regulated domains such as HIPAA, FDA, and SOC 2. The team delivers production-grade systems that transform data into a strategic advantage.

You will have the opportunity to make end-to-end impact by building full-stack analytics solutions ranging from lakehouse pipelines to real-time dashboards. Fail-safe engineering practices such as TDD, CI/CD, DAX optimization, Unity Catalog, and cluster tuning will be part of your daily routine. You will work with a modern stack including Databricks, PySpark, Delta Lake, Power BI, and Airflow. As part of a mentorship culture, you will lead code reviews, share best practices, and grow as a domain expert. You will operate in a mission-critical context, helping enterprises migrate legacy analytics into cloud-native, governed platforms with a compliance-first mindset in HIPAA-aligned environments.

Your key responsibilities will include:
- Building scalable pipelines using SQL, PySpark, and Delta Live Tables on Databricks (see the sketch below).
- Orchestrating workflows with Databricks Workflows or Airflow.
- Designing dimensional models with Unity Catalog and Great Expectations validation.
- Delivering robust Power BI solutions and migrating legacy SSRS reports to Power BI.
- Optimizing compute and cost.
- Collaborating cross-functionally to convert product analytics needs into resilient BI assets.

To excel in this role, you must have 5+ years of experience in analytics engineering, with at least 3 years in production Databricks/Spark contexts. Advanced skills in SQL, PySpark, Delta Lake, Unity Catalog, and Power BI are essential. Experience in SSRS-to-Power BI migration, Git, CI/CD, and cloud platforms like Azure/AWS, along with strong communication skills, is also required. Nice-to-have skills include certifications such as Databricks Data Engineer Associate, experience with streaming pipelines, data quality frameworks like dbt and Great Expectations, familiarity with BI platforms like Tableau and Looker, and cost governance knowledge.

Ajmera offers competitive compensation, flexible hybrid schedules, and a deeply technical culture where engineers lead the narrative. If you are passionate about building reliable, audit-ready data products and want to take ownership of systems from raw ingestion to KPI dashboards, apply now and engineer insights that matter.
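As a hedged sketch of a Delta Live Tables pipeline with an expectation-based quality rule (one of the responsibilities above), the snippet below assumes it runs inside a Databricks DLT pipeline, where the spark session is provided by the runtime; the dataset names, landing path, and validation rule are hypothetical and not taken from the posting.

import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw device readings landed from cloud storage")
def raw_readings():
    # 'spark' is supplied by the Delta Live Tables runtime; the path is a placeholder.
    return spark.read.json("/mnt/landing/device_readings/")

@dlt.table(comment="Validated readings that feed the Power BI model")
@dlt.expect_or_drop("valid_reading", "reading_value IS NOT NULL AND reading_value >= 0")
def clean_readings():
    # Rows failing the expectation above are dropped and tracked in pipeline metrics.
    return dlt.read("raw_readings").withColumn("reading_date", F.to_date("event_ts"))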
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Data Engineer, you will be responsible for designing, developing, and maintaining scalable data pipelines using Spark (PySpark or Spark with Scala). Your role will involve building data ingestion and transformation frameworks for structured and unstructured data sources. Collaborating with data analysts, data scientists, and business stakeholders to understand requirements and deliver reliable data solutions will be a key aspect of your responsibilities. Additionally, you will work with large volumes of data to ensure quality, integrity, and consistency, optimizing data workflows for performance, scalability, and cost efficiency on cloud platforms such as AWS, Azure, or GCP. Implementing data quality checks and automation for ETL/ELT pipelines, monitoring and troubleshooting data issues in production, and documenting technical processes, system designs, and operational procedures are also part of your duties.

To excel in this role, you should have at least 3 years of experience as a Data Engineer or in a similar role. Hands-on experience with PySpark or Spark using Scala is essential, along with strong knowledge of SQL for data querying and transformation. You should also have experience working with any cloud platform (AWS, Azure, or GCP), a solid understanding of data warehousing concepts and big data architecture, and familiarity with version control systems like Git.

While not mandatory, it would be beneficial to have experience with data orchestration tools like Apache Airflow, Databricks Workflows, or similar, knowledge of Delta Lake, HDFS, or Kafka, familiarity with containerization tools such as Docker or Kubernetes, exposure to CI/CD practices and DevOps principles, and an understanding of data governance, security, and compliance standards.

If you are ready to join immediately and possess the required skills and experience, please share your details via email at nitin.patil@ust.com. Act fast for immediate attention!
Posted 6 days ago
0.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change; we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models onward, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us on our website and social channels.

Inviting applications for the role of Principal Consultant - Databricks Developer with experience in Unity Catalog, Python, Spark, and Kafka for ETL. In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements.

Responsibilities:
- Develop and maintain scalable ETL pipelines using Databricks, with a focus on Unity Catalog for data asset management.
- Implement data processing frameworks using Apache Spark for large-scale data transformation and aggregation.
- Integrate real-time data streams using Apache Kafka and Databricks to enable near real-time data processing (see the sketch below).
- Develop data workflows and orchestrate data pipelines using Databricks Workflows or other orchestration tools.
- Design and enforce data governance policies, access controls, and security protocols within Unity Catalog.
- Monitor data pipeline performance, troubleshoot issues, and implement optimizations for scalability and efficiency.
- Write efficient Python scripts for data extraction, transformation, and loading.
- Collaborate with data scientists and analysts to deliver data solutions that meet business requirements.
- Maintain data documentation, including data dictionaries, data lineage, and data governance frameworks.

Qualifications we seek in you!
Minimum qualifications:
- Bachelor's degree in Computer Science, Data Engineering, or a related field.
- Experience in data engineering with a focus on Databricks development.
- Proven expertise in Databricks, Unity Catalog, and data lake management.
- Strong programming skills in Python for data processing and automation.
- Experience with Apache Spark for distributed data processing and optimization.
- Hands-on experience with Apache Kafka for data streaming and event processing.
- Proficiency in SQL for data querying and transformation.
- Strong understanding of data governance, data security, and data quality frameworks.
- Excellent communication skills and the ability to work in a cross-functional environment.
- Must have experience in the Data Engineering domain.
- Must have implemented at least 2 end-to-end projects in Databricks.
- Must have experience with Databricks components including Delta Lake, dbConnect, db API 2.0, and Databricks Workflows orchestration.
- Must be well versed in the Databricks Lakehouse concept and its implementation in enterprise environments.
- Must have a good understanding of how to create complex data pipelines.
- Must have good knowledge of data structures and algorithms.
- Must be strong in SQL and Spark SQL.
- Must have strong performance optimization skills to improve efficiency and reduce cost.
- Must have worked on both batch and streaming data pipelines.
- Must have extensive knowledge of the Spark and Hive data processing frameworks.
- Must have worked on any cloud (Azure, AWS, GCP) and common services such as ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases.
- Must be strong in writing unit and integration test cases.
- Must have strong communication skills and have worked in teams of 5 or more.
- Must have a great attitude towards learning new skills and upskilling existing ones.

Preferred Qualifications:
- Good to have Unity Catalog and basic governance knowledge.
- Good to have an understanding of Databricks SQL Endpoints.
- Good to have CI/CD experience to build pipelines for Databricks jobs.
- Good to have worked on a migration project to build a unified data platform.
- Good to have knowledge of dbt.
- Good to have knowledge of Docker and Kubernetes.

Why join Genpact?
- Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation
- Make an impact - Drive change for global enterprises and solve business challenges that matter
- Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities
- Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day
- Thrive in a values-driven culture - Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
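As an illustrative sketch of the Kafka-to-Databricks streaming integration named in the responsibilities, the snippet below reads a hypothetical Kafka topic with Spark Structured Streaming and appends it to a Delta table. The broker address, topic, schema, and table names are placeholders, and it assumes the spark-sql-kafka connector and Delta Lake are available on the cluster.

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("kafka_to_delta").getOrCreate()

# Expected shape of the JSON payload on the topic (hypothetical).
schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_ts", TimestampType()),
])

# Read the Kafka topic as a stream and parse the JSON value column.
orders = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
          .option("subscribe", "orders")                      # hypothetical topic
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("o"))
          .select("o.*"))

# Append the stream to a Delta table with checkpointing for fault tolerance.
(orders.writeStream.format("delta")
       .option("checkpointLocation", "/mnt/checkpoints/orders")
       .outputMode("append")
       .toTable("bronze.orders"))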
Posted 3 weeks ago
12.0 - 14.0 years
20 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Please note: candidates with a high notice period (NP) will not be considered; only immediate joiners. We are seeking a skilled and motivated Azure Databricks Data Engineer to join our dynamic team. The ideal candidate will have strong experience with Python and Spark programming, and expertise in building and optimizing data pipelines in Azure Databricks. You will play a pivotal role in leveraging Databricks Workflows, Databricks Asset Bundles, and CI/CD pipelines using GitHub to deliver high-performance data solutions. A solid understanding of Data Warehousing and Data Mart architecture in Databricks is critical for success in this role. If you're passionate about data engineering, cloud technologies, and scalable data architecture, we'd love to hear from you.
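As a brief sketch of the kind of unit test a GitHub-based CI/CD pipeline for Databricks code might run, the snippet below tests a hypothetical PySpark transformation with pytest on a local Spark session; the function and column names are illustrative only, not part of this employer's codebase.

import pytest
from pyspark.sql import SparkSession, functions as F

def add_net_amount(df):
    """Example transformation under test: net = gross - discount."""
    return df.withColumn("net_amount", F.col("gross_amount") - F.col("discount"))

@pytest.fixture(scope="session")
def spark():
    # Local session so the test can run on any CI runner without a cluster.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_add_net_amount(spark):
    df = spark.createDataFrame([(100.0, 10.0)], ["gross_amount", "discount"])
    result = add_net_amount(df).collect()[0]
    assert result["net_amount"] == 90.0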
Posted 1 month ago
0.0 years
0 Lacs
Hyderabad / Secunderabad, Telangana, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change; we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models onward, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us on our website and social channels.

Inviting applications for the role of Consultant - Databricks Developer with experience in Unity Catalog, Python, Spark, and Kafka for ETL. In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements.

Responsibilities:
- Develop and maintain scalable ETL pipelines using Databricks, with a focus on Unity Catalog for data asset management.
- Implement data processing frameworks using Apache Spark for large-scale data transformation and aggregation.
- Integrate real-time data streams using Apache Kafka and Databricks to enable near real-time data processing.
- Develop data workflows and orchestrate data pipelines using Databricks Workflows or other orchestration tools.
- Design and enforce data governance policies, access controls, and security protocols within Unity Catalog.
- Monitor data pipeline performance, troubleshoot issues, and implement optimizations for scalability and efficiency.
- Write efficient Python scripts for data extraction, transformation, and loading.
- Collaborate with data scientists and analysts to deliver data solutions that meet business requirements.
- Maintain data documentation, including data dictionaries, data lineage, and data governance frameworks.

Qualifications we seek in you!
Minimum qualifications:
- Bachelor's degree in Computer Science, Data Engineering, or a related field.
- Experience in data engineering with a focus on Databricks development.
- Proven expertise in Databricks, Unity Catalog, and data lake management.
- Strong programming skills in Python for data processing and automation.
- Experience with Apache Spark for distributed data processing and optimization.
- Hands-on experience with Apache Kafka for data streaming and event processing.
- Proficiency in SQL for data querying and transformation.
- Strong understanding of data governance, data security, and data quality frameworks.
- Excellent communication skills and the ability to work in a cross-functional environment.
- Must have experience in the Data Engineering domain.
- Must have implemented at least 2 end-to-end projects in Databricks.
- Must have experience with Databricks components including Delta Lake, dbConnect, db API 2.0, and Databricks Workflows orchestration.
- Must be well versed in the Databricks Lakehouse concept and its implementation in enterprise environments.
- Must have a good understanding of how to create complex data pipelines.
- Must have good knowledge of data structures and algorithms.
- Must be strong in SQL and Spark SQL.
- Must have strong performance optimization skills to improve efficiency and reduce cost.
- Must have worked on both batch and streaming data pipelines.
- Must have extensive knowledge of the Spark and Hive data processing frameworks.
- Must have worked on any cloud (Azure, AWS, GCP) and common services such as ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases.
- Must be strong in writing unit and integration test cases.
- Must have strong communication skills and have worked in teams of 5 or more.
- Must have a great attitude towards learning new skills and upskilling existing ones.

Preferred Qualifications:
- Good to have Unity Catalog and basic governance knowledge.
- Good to have an understanding of Databricks SQL Endpoints.
- Good to have CI/CD experience to build pipelines for Databricks jobs.
- Good to have worked on a migration project to build a unified data platform.
- Good to have knowledge of dbt.
- Good to have knowledge of Docker and Kubernetes.

Why join Genpact?
- Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation
- Make an impact - Drive change for global enterprises and solve business challenges that matter
- Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities
- Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day
- Thrive in a values-driven culture - Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 1 month ago