
1038 Dataflow Jobs - Page 35

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

5.0 years

0 Lacs

Kochi, Kerala, India

On-site

Role Description

Key Responsibilities:
- Design, develop, and optimize ETL pipelines using PySpark on Google Cloud Platform (GCP).
- Work with BigQuery, Cloud Dataflow, Cloud Composer (Apache Airflow), and Cloud Storage for data transformation and orchestration.
- Develop and optimize Spark-based ETL processes for large-scale data processing.
- Implement best practices for data governance, security, and monitoring in a cloud environment.
- Collaborate with data engineers, analysts, and business stakeholders to understand data requirements.
- Troubleshoot performance bottlenecks and optimize Spark jobs for efficient execution.
- Automate data workflows using Apache Airflow or Cloud Composer.
- Ensure data quality, validation, and consistency across pipelines.

Requirements:
- 5+ years of experience in ETL development with a focus on PySpark.
- Strong hands-on experience with Google Cloud Platform (GCP) services, including BigQuery, Cloud Dataflow / Apache Beam, Cloud Composer (Apache Airflow), and Cloud Storage.
- Proficiency in Python and PySpark for big data processing.
- Experience with data lake architectures and data warehousing concepts.
- Knowledge of SQL for data querying and transformation.
- Experience with CI/CD pipelines for data pipeline automation.
- Strong debugging and problem-solving skills.
- Experience with Kafka or Pub/Sub for real-time data processing.
- Knowledge of Terraform for infrastructure automation on GCP.
- Experience with containerization (Docker, Kubernetes).
- Familiarity with DevOps and monitoring tools like Prometheus, Stackdriver, or Datadog.

Skills: GCP, PySpark, ETL
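For orientation, a minimal sketch of the kind of PySpark-on-GCP ETL step this posting describes. It assumes a Dataproc environment with the Cloud Storage and spark-bigquery connectors available; the bucket, project, dataset, table, and column names are hypothetical placeholders, not anything specified by the listing.

```python
# Minimal PySpark ETL sketch for GCP (illustrative only).
# Assumes a Dataproc cluster with the GCS and BigQuery connectors available;
# bucket, dataset, and table names below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-etl").getOrCreate()

# Extract: read raw CSV files landed in a Cloud Storage bucket.
raw = (spark.read
       .option("header", True)
       .csv("gs://example-raw-bucket/orders/2024-01-01/*.csv"))

# Transform: basic cleansing, typing, and a daily aggregate.
orders = (raw
          .withColumn("order_ts", F.to_timestamp("order_ts"))
          .withColumn("amount", F.col("amount").cast("double"))
          .dropDuplicates(["order_id"])
          .filter(F.col("amount") > 0))

daily = (orders
         .groupBy(F.to_date("order_ts").alias("order_date"), "country")
         .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders")))

# Load: write the aggregate to BigQuery via the spark-bigquery connector,
# staging intermediate files in a temporary GCS bucket.
(daily.write
 .format("bigquery")
 .option("table", "example_project.analytics.daily_revenue")
 .option("temporaryGcsBucket", "example-temp-bucket")
 .mode("overwrite")
 .save())

spark.stop()
```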

Posted 1 month ago

Apply

8.0 - 13.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We are looking for a skilled Lead Data Engineer to join our dynamic team. In this role, you will focus on designing, developing, and maintaining data integration solutions for our clients. Your leadership will guide a team of engineers in delivering scalable, high-quality, and efficient data integration solutions. This role is perfect for an experienced data integration expert who is passionate about technology and excels in a fast-paced, dynamic setting.

Responsibilities:
- Design, develop, and maintain data integration solutions for clients.
- Lead a team of engineers to ensure high-quality, scalable, and efficient delivery of data integration solutions.
- Collaborate with cross-functional teams to understand business requirements and design fitting data integration solutions.
- Ensure the security, reliability, and efficiency of data integration solutions.
- Develop and maintain documentation, including technical specifications, data flow diagrams, and data mappings.
- Continuously update knowledge of the latest data integration methods and tools.

Requirements:
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- 8-13 years of experience in data engineering, data integration, or a related field.
- Proficiency in cloud-native or Spark-based ETL tools such as AWS Glue, Azure Data Factory, or GCP Dataflow.
- Strong knowledge of SQL for querying and manipulating data.
- Competency in Snowflake for cloud data warehousing.
- Familiarity with at least one cloud platform such as AWS, Azure, or GCP.
- Experience leading a team of engineers on data integration projects.
- Good verbal and written communication skills in English at a B2 level.

Nice to have:
- Background in ETL using Python.

Posted 1 month ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Our organization is looking for a seasoned Senior Data Engineer to join our team. In this role, you will focus on data integration and ETL projects for cloud environments. Your primary duties will include designing and executing complex data solutions while maintaining data accuracy, reliability, and accessibility.

Responsibilities:
- Design and execute complex data solutions for cloud environments.
- Develop ETL processes using SQL, Python, and other applicable technologies.
- Maintain data accuracy, reliability, and accessibility for all stakeholders.
- Collaborate with cross-functional teams to understand data integration needs and specifications.
- Create and maintain documentation such as technical specifications, data flow diagrams, and data mappings.
- Optimize data integration processes for performance and efficiency while ensuring data accuracy and integrity.

Requirements:
- Bachelor's degree in Computer Science, Electrical Engineering, or a related field.
- 5-8 years of experience in data engineering.
- Proficiency in cloud-native or Spark-based ETL tools like AWS Glue, Azure Data Factory, or GCP Dataflow.
- Strong knowledge of SQL for data querying and manipulation.
- Familiarity with Snowflake for data warehousing.
- Background in cloud platforms such as AWS, GCP, or Azure for data storage and processing.
- Excellent problem-solving skills and meticulous attention to detail.
- Good verbal and written communication skills in English at a B2 level.

Nice to have:
- Background in ETL using Python.

Posted 1 month ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Role: Senior Data Analytics Engineer – GCP
Experience: 5+ years
Location: Noida & Bhubaneswar

We are seeking a highly skilled Senior Data Analytics Specialist with deep expertise in Google Cloud Platform (GCP) tools to join our dynamic team. The ideal candidate will have strong experience in data analytics, business intelligence, and advanced data processing on GCP, specifically with Firebase Analytics, GA4, and BigQuery.

Key Responsibilities:
- Lead and execute data analytics projects leveraging GCP services, primarily focusing on Firebase Analytics and Google Analytics 4 (GA4).
- Design, develop, and optimize complex SQL queries and scripts in BigQuery for large-scale data processing.
- Build and maintain interactive dashboards and data models using Looker or Looker Studio.
- Collaborate with cross-functional teams to implement custom event tracking and user journey analysis.
- Work with other GCP components such as Cloud Storage, Cloud Functions, Pub/Sub, and Dataflow to streamline data pipelines.
- Ensure compliance with data privacy and governance standards, including GDPR and CCPA.
- Analyze data to uncover trends, insights, and opportunities for business improvement.
- Communicate findings and recommendations clearly to stakeholders across technical and non-technical teams.

Required Qualifications:
- 5 to 7 years of experience in data analytics, business intelligence, or a related domain.
- Proven expertise with Firebase Analytics and GA4, including custom event configuration and user behavior tracking.
- Advanced proficiency in BigQuery with experience in SQL scripting, query optimization, partitioning, and clustering.
- Hands-on experience with Looker or Looker Studio for dashboard creation and data visualization.
- Familiarity with additional GCP services such as Cloud Storage, Cloud Functions, Pub/Sub, and Dataflow is preferred.
- Strong understanding of data privacy laws and governance frameworks like GDPR and CCPA.
- Excellent analytical, problem-solving, and detail-oriented skills.
- Strong verbal and written communication skills with the ability to work collaboratively across teams.

Preferred Qualifications:
- Google Cloud certifications such as Professional Data Engineer or Looker Business Analyst.
- Experience working with A/B testing frameworks and experimentation platforms.
- Background in product analytics or digital marketing analytics.
- Exceptional communication and stakeholder management skills.
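As an illustration of the GA4-in-BigQuery analysis this role centers on, here is a minimal sketch of querying the GA4 export from Python. The project ID and the analytics_123456789 dataset are hypothetical placeholders; it assumes the standard GA4 daily export tables (events_YYYYMMDD) and the google-cloud-bigquery client library.

```python
# Minimal sketch of querying the GA4 BigQuery export (illustrative only).
# Project and dataset IDs are hypothetical; GA4 exports land in datasets
# named analytics_<property_id> with daily events_YYYYMMDD tables.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

# Daily purchase events and distinct purchasers over the last 7 days.
query = """
SELECT
  event_date,
  COUNT(*) AS purchase_events,
  COUNT(DISTINCT user_pseudo_id) AS purchasers
FROM `example-project.analytics_123456789.events_*`
WHERE _TABLE_SUFFIX BETWEEN
      FORMAT_DATE('%Y%m%d', DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY))
  AND FORMAT_DATE('%Y%m%d', CURRENT_DATE())
  AND event_name = 'purchase'
GROUP BY event_date
ORDER BY event_date
"""

for row in client.query(query).result():
    print(row.event_date, row.purchase_events, row.purchasers)
```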

Posted 1 month ago

Apply

10.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Location: Trivandrum, Bangalore, Chennai, Kochi

Job Description / Responsibilities:
- Design and implement scalable, secure, and cost-effective data architectures using GCP.
- Lead the design and development of data pipelines with BigQuery, Dataflow, and Cloud Storage.
- Architect and implement data lakes, data warehouses, and real-time data processing solutions on GCP.
- Ensure data architecture aligns with business goals, governance, and compliance requirements.
- Collaborate with stakeholders to define data strategy and roadmap.
- Design and deploy BigQuery solutions for optimized performance and cost efficiency.
- Build and maintain ETL/ELT pipelines for large-scale data processing.
- Leverage Cloud Pub/Sub, Dataflow, and Cloud Functions for real-time data integration.

Requirements:
- 10 years of experience in data engineering, with at least 6 years in GCP.
- Expertise in BigQuery, Cloud Dataflow, Cloud Pub/Sub, Cloud Storage, and related GCP services.
- Strong experience in data warehousing, data lakes, and real-time data pipelines.
- Proficiency in SQL, Python, or other data processing languages.
- Experience with cloud security, data governance, and compliance frameworks.
- Strong problem-solving skills and ability to architect solutions for complex data environments.
- Google Cloud certification (Professional Data Engineer, Professional Cloud Architect) preferred.
- Leadership experience and ability to mentor technical teams.
- Excellent communication and collaboration skills.
- Immediate joiner (15 days notice period max).

Salary: up to 30 LPA (not negotiable)

Posted 1 month ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

About _VOIS:
_VOIS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group's partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, _VOIS has evolved into a global, multi-functional organisation, a Centre of Excellence for Intelligent Solutions focused on adding value and delivering business outcomes for Vodafone.

About _VOIS India:
In 2009, _VOIS started operating in India and now has established global delivery centres in Pune, Bangalore and Ahmedabad. With more than 14,500 employees, _VOIS India supports global markets and group functions of Vodafone, and delivers best-in-class customer experience through multi-functional services in the areas of Information Technology, Networks, Business Intelligence and Analytics, Digital Business Solutions (Robotics & AI), Commercial Operations (Consumer & Business), Intelligent Operations, Finance Operations, Supply Chain Operations, HR Operations and more.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL processes using GCP services such as BigQuery, Cloud Data Fusion, Dataflow, Pub/Sub, Cloud Storage, Composer, Cloud Functions, and Cloud Run.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality data solutions.
- Implement data integration solutions to ingest, process, and store large volumes of structured and unstructured data from various sources.
- Optimize and tune data pipelines for performance, reliability, and cost-efficiency.
- Ensure data quality and integrity through data validation, cleansing, and transformation processes.
- Develop and maintain data models, schemas, and metadata to support data analytics and reporting.
- Monitor and troubleshoot data pipeline issues, ensuring timely resolution and minimal disruption to data workflows.
- Stay up to date with the latest GCP technologies and best practices, and provide recommendations for continuous improvement.
- Mentor and guide junior data engineers, fostering a culture of knowledge sharing and collaboration.

Required Skills:
- Proficiency in GCP services such as BigQuery, Cloud Data Fusion, Dataflow, Pub/Sub, Cloud Storage, Composer, Cloud Functions, and Cloud Run.
- Strong programming skills in Python and PL/SQL.
- Experience with SQL and NoSQL databases.
- Knowledge of data warehousing concepts and best practices.
- Familiarity with data integration tools and frameworks.

Equal Opportunity Employer Commitment (India):
_VOIS is proud to be an Equal Employment Opportunity Employer. We celebrate differences and we welcome and value diverse people and insights. We believe that being authentically human and inclusive powers our employees' growth and enables them to create a positive impact on themselves and society. We do not discriminate based on age, colour, gender (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, race, religion, sexual orientation, status as an individual with a disability, or other applicable legally protected characteristics.

As a result of living and breathing our commitment, our employees have helped us get certified as a Great Place to Work in India for four years running. We have also been highlighted among the Top 5 Best Workplaces for Diversity, Equity, and Inclusion, Top 10 Best Workplaces for Women, Top 25 Best Workplaces in IT & IT-BPM, and 14th Overall Best Workplaces in India by the Great Place to Work Institute in 2023. These achievements position us among a select group of trustworthy and high-performing companies which put their employees at the heart of everything they do. By joining us, you are part of our commitment. We look forward to welcoming you into our family, which represents a variety of cultures, backgrounds, perspectives, and skills. Apply now, and we'll be in touch!

Posted 1 month ago

Apply

5.0 - 10.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Title: Data Architect / Delivery Lead

Job Summary: The Data Architect / Delivery Lead will provide technical expertise in the analysis, design, development, rollout, and maintenance of enterprise data models and solutions, utilizing both traditional and emerging technologies such as cloud, Hadoop, NoSQL, and real-time data processing. In addition to technical expertise, the role requires leadership in driving cross-functional teams, ensuring seamless project delivery, and fostering innovation within the team. The candidate must excel in managing data architecture projects while mentoring teams in data engineering practices, including PySpark, automation, and big data integration.

Essential Duties

Data Architecture Design and Development:
- Design and develop conceptual, logical, and physical data models for enterprise-scale data lakes and data warehouse solutions, ensuring optimal performance and scalability.
- Implement real-time and batch data integration solutions using modern tools and technologies such as PySpark, Hadoop, and cloud-based solutions (e.g., AWS, Azure, Google Cloud).
- Utilize PySpark for distributed data processing, transforming and analyzing large datasets for improved data-driven decision-making.
- Understand and apply modern data architecture philosophies such as Data Vault, Dimensional Modeling, and Data Lake design for building scalable and sustainable data solutions.

Leadership & Delivery Management:
- Provide leadership in data architecture and engineering projects, ensuring the integration of modern technologies and best practices in data management and transformation.
- Act as a trusted advisor, collaborating with business users, technical staff, and project managers to define requirements and deliver high-quality data solutions.
- Lead and mentor a team of data engineers, ensuring the effective application of PySpark for data engineering tasks, and supporting continuous learning and improvement within the team.
- Manage end-to-end delivery of data projects, including defining timelines, managing resources, and ensuring timely, high-quality delivery while adhering to project methodologies (e.g., Agile, Scrum).

Data Movement & Integration:
- Provide expertise in data integration processes, including batch and real-time data processing using tools such as PySpark, Informatica PowerCenter, SSIS, MuleSoft, and DataStage.
- Develop and optimize ETL/ELT pipelines, utilizing PySpark for efficient data processing and transformation at scale, particularly for big data environments (e.g., Hadoop ecosystems).
- Oversee data migration efforts, ensuring high-quality and consistent data delivery while managing data transformation and cleansing processes.

Documentation & Communication:
- Create comprehensive functional and technical documentation, including data integration architecture documentation, data models, data dictionaries, and testing plans.
- Collaborate with business stakeholders and technical teams to ensure alignment and provide technical guidance on data-related decisions.
- Prepare and present technical content and architectural decisions to senior management, ensuring clear communication of complex data concepts.

Skills and Experience

Data Engineering Skills:
- Extensive experience in PySpark for large-scale data processing, data transformation, and working with distributed systems.
- Proficient in modern data processing frameworks and technologies, including Hadoop, Spark, and Flink.
- Expertise in cloud-based data engineering technologies and platforms such as AWS Glue, Azure Data Factory, or Google Cloud Dataflow.
- Strong experience with data pipelines, ETL/ELT frameworks, and automation techniques using tools like Airflow, Apache NiFi, or dbt.
- Expertise in working with big data technologies and frameworks for both structured and unstructured data.

Data Architecture and Modeling:
- 5-10 years of experience in enterprise data modeling, including hands-on experience with ERwin, ER/Studio, PowerDesigner, or similar tools.
- Strong knowledge of relational databases (e.g., Oracle, SQL Server, Teradata) and NoSQL technologies (e.g., MongoDB, Cassandra).
- In-depth understanding of data warehousing and data integration best practices, including dimensional modeling and working with OLTP systems and OLAP cubes.
- Experience with real-time data architectures and cloud-based data lakes, leveraging AWS, Azure, or Google Cloud platforms.

Leadership & Delivery Skills:
- 3-5 years of management experience leading teams of data engineers and architects, ensuring alignment of team goals with organizational objectives.
- Strong leadership qualities such as innovation, critical thinking, communication, time management, and the ability to collaborate effectively across teams and stakeholders.
- Proven ability to act as a delivery lead for data projects, driving projects from concept to completion while managing resources, timelines, and deliverables.
- Ability to mentor and coach team members in both technical and professional growth, fostering a culture of knowledge sharing and continuous improvement.

Other Essential Skills:
- Strong knowledge of SQL, PL/SQL, and proficiency in scripting for data engineering tasks.
- Ability to translate business requirements into technical solutions, ensuring that the data solutions support business strategies and objectives.
- Hands-on experience with metadata management, data governance, and master data management (MDM) principles.
- Familiarity with modern agile methodologies, such as Scrum or Kanban, to ensure iterative and successful project delivery.

Preferred Skills & Experience:
- Cloud Technologies: Experience with cloud data platforms such as AWS Redshift, Google BigQuery, or Azure Synapse for building scalable data solutions.
- Leadership: Demonstrated ability to build and lead cross-functional teams, drive innovation, and solve complex data problems.
- Business Consulting: Consulting experience working with clients to deliver tailored data solutions, providing expert guidance on data architecture and data management practices.
- Data Profiling and Analysis: Hands-on experience with data profiling tools and techniques to assess and improve the quality of enterprise data.
- Real-Time Data Processing: Experience in real-time data integration and streaming technologies, such as Kafka and Kinesis.

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

We are looking for a talented Frontend Developer to contribute to our core product development. You will be involved in the full software development lifecycle, building scalable and user-friendly applications.

Responsibilities:
- Develop and maintain user-facing features using modern web technologies such as React.js.
- Collaborate with designers and product managers to translate design mockups and user stories into responsive and engaging web applications.
- Optimize application performance and ensure cross-browser compatibility.
- Implement best practices and coding standards to ensure high-quality and maintainable code.
- Participate in code reviews and provide constructive feedback to improve code quality.
- Stay up-to-date with the latest industry trends and technologies to drive innovation in frontend development.

Requirements:
- Strong proficiency in HTML, CSS, and JavaScript.
- Experience building web applications using React.js.
- Familiarity with RESTful APIs and integrating frontend applications with backend services.
- Understanding of responsive design principles and mobile-first development.
- Knowledge of version control systems, such as Git.
- Ability to work collaboratively in an Agile/Scrum development environment.
- Excellent problem-solving and communication skills.

This job was posted by Sharan Mithran from The DataFlow Group.

Posted 1 month ago

Apply

12.0 - 15.0 years

40 - 45 Lacs

Chennai

Work from Office

Skill & Experience:
- Strategic planning and direction; maintain architecture principles, guidelines, and standards.
- Project and program management.
- Data warehousing, big data, data analytics, and data science for solutioning.
- Expert in BigQuery, Dataproc, Data Fusion, Dataflow, Bigtable, Firestore, Cloud SQL, Cloud Spanner, Google Cloud Storage, Cloud Composer, Cloud Interconnect, etc.
- Strong experience in big data: data modelling, design, architecting, and solutioning.
- Understands programming languages such as SQL, Python, and R/Scala; good Python skills.
- Experience with data visualisation tools such as Google Data Studio or Power BI.
- Knowledge of A/B testing, statistics, Google Cloud Platform, Google BigQuery, agile development, DevOps, data engineering, and ETL data processing.
- Strong migration experience of production Hadoop clusters to Google Cloud.
- Experience in designing and implementing solutions in the areas mentioned, with strong knowledge of Google Cloud Platform data components: BigQuery, Bigtable, Cloud SQL, Dataproc, Dataflow, Data Fusion, etc.

Posted 1 month ago

Apply

15.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Job Description:
We are seeking an experienced Director to lead a team responsible for the development and maintenance of our Connected Vehicle Data. The ideal candidate will have a strong technical background in data and/or software engineering, along with proven leadership and management skills. This role requires the ability to design and code streaming solutions, prioritize team tasks, make timely decisions, and guide the team to deliver high-quality results. The leader must be knowledgeable in data governance, customer consent, and security standards.

Responsibilities:
- Lead and mentor a high-performing team of local and remote data engineers.
- Prioritize team workload, allocate tasks effectively, and ensure team members have the resources to succeed.
- Provide technical expertise and guidance to the team.
- Evaluate and mentor adherence to coding standards, best practices, and architectural guidelines.
- Oversee the design, development, maintenance, scalability, reliability, and performance of the connected vehicle data platform pipelines and architecture.
- Contribute to the long-term strategic direction of the Connected Vehicle Data Platform with a focus on enterprise use.
- Enforce and ensure data quality, data governance, and security standards.
- Collaborate with Data Program Management to prioritize and implement various business customers' requests and logic into data assets with optimized design and code development.
- Collaborate to identify and consolidate common tasks across teams to improve efficiency and reduce redundancy.
- Communicate decisions effectively and transparently to internal and external customers.
- Stay updated on industry trends and emerging technologies to inform technical decisions.

Qualifications (Required):
- Minimum: Bachelor's degree in Computer Science, Information Technology, Information Systems, or Data Analytics. Preferred: Master's degree in a highly technical field such as computer science, mathematics, or physics.
- 15+ years of experience in data engineering, cloud platforms, or enterprise-scale data management, with a minimum of 5 years in connected/streaming vehicle platforms.
- 5+ years of experience leading a software/data engineering team.
- Expertise in one of the following public cloud environments: Amazon Web Services, Google Cloud Platform, or Microsoft Azure.
- Expert knowledge and hands-on experience in DevOps and the SDLC.
- Ability to monitor and optimize cost and compute for processes in GCP technologies (e.g., BigQuery, Dataflow, Cloud Run, Dataproc).
- Ability to manage and scale serverless applications and clusters, optimizing resource utilization and implementing monitoring and logging strategies.
- Expertise in streaming technologies (Kafka, Pub/Sub) and OpenShift, managing high-throughput topics and message ordering, and ensuring data consistency and durability.

Why Join Ford?
- Be at the forefront of Ford's data and AI transformation, influencing how data drives business decisions.
- Work in a fast-paced, innovation-driven environment with cutting-edge technology and industry-leading experts.
- Enjoy a culture that values collaboration, inclusion, and career development.
- Competitive compensation, benefits, and opportunities for professional growth.

Join us in shaping the future of data at Ford!

Posted 1 month ago

Apply

5.0 years

0 Lacs

Noida

On-site

As a Data Engineer with a focus on migrating on-premises databases to Google Cloud SQL, you will play a critical role in solving complex problems and creating value for our business by ensuring reliable, scalable, and efficient data migration processes. You will be responsible for architecting, designing, and implementing custom pipelines on the GCP stack to facilitate seamless migration.

Required Skills:
- 5+ years of industry experience in data engineering, business intelligence, or a related field, with experience in manipulating, processing, and extracting value from datasets.
- Expertise in architecting, designing, building, and deploying internal applications to support technology life cycle management, service delivery management, data, and business intelligence.
- Experience developing modular code for versatile pipelines or complex ingestion frameworks aimed at loading data into Cloud SQL and managing data migration from multiple on-premises sources.
- Strong collaboration with analysts and business process owners to translate business requirements into technical solutions.
- Proficiency in coding with scripting languages (shell scripting, Python, SQL).
- Deep understanding and hands-on experience with Google Cloud Platform (GCP) technologies, especially in data migration and warehousing, including Database Migration Service (DMS), Cloud SQL, BigQuery, Dataflow, Data Catalog, Cloud Composer, Google Cloud Storage (GCS), IAM, Compute Engine, Cloud Data Fusion, and optionally Dataproc.
- Adherence to best development practices, including technical design, solution development, systems configuration, test documentation/execution, issue identification and resolution, and writing clean, modular, self-sustaining code.
- Familiarity with CI/CD processes using GitHub, Cloud Build, and the Google Cloud SDK.
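To give a flavor of the Cloud SQL ingestion work described above, here is a minimal sketch of inserting rows into a Cloud SQL for PostgreSQL instance from Python. The instance connection name, database, credentials, and table are hypothetical; it assumes the cloud-sql-python-connector, pg8000, and SQLAlchemy packages, not any tooling named in the listing.

```python
# Minimal Cloud SQL (PostgreSQL) load sketch, illustrative only.
# Instance, database, and credential values are hypothetical placeholders.
import sqlalchemy
from google.cloud.sql.connector import Connector

connector = Connector()

def getconn():
    # Connect through the Cloud SQL connector rather than a public IP.
    return connector.connect(
        "example-project:us-central1:example-instance",
        "pg8000",
        user="etl_user",
        password="example-password",  # in practice, fetch from Secret Manager
        db="analytics",
    )

engine = sqlalchemy.create_engine("postgresql+pg8000://", creator=getconn)

rows = [
    {"order_id": 1, "amount": 120.5},
    {"order_id": 2, "amount": 75.0},
]

with engine.begin() as conn:
    conn.execute(
        sqlalchemy.text(
            "INSERT INTO orders (order_id, amount) VALUES (:order_id, :amount)"
        ),
        rows,
    )

connector.close()
```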

Posted 1 month ago

Apply

0 years

0 Lacs

Noida

On-site

As a Data Engineer, you will design, develop, and support data pipelines and related data products and platforms. Your primary responsibilities include designing and building data extraction, loading, and transformation pipelines across on-prem and cloud platforms. You will perform application impact assessments, requirements reviews, and work estimates. Additionally, you will develop test strategies and site reliability engineering measures for data products and solutions, participate in agile development "scrums" and solution reviews, mentor junior Data Engineering Specialists, lead the resolution of critical operations issues, and perform technical data stewardship tasks, including metadata management, security, and privacy by design.

Required Skills:
- Design, develop, and support data pipelines and related data products and platforms.
- Design and build data extraction, loading, and transformation pipelines and data products across on-prem and cloud platforms.
- Perform application impact assessments, requirements reviews, and work estimates.
- Develop test strategies and site reliability engineering measures for data products and solutions.
- Participate in agile development "scrums" and solution reviews.
- Mentor junior Data Engineers.
- Lead the resolution of critical operations issues, including post-implementation reviews.
- Perform technical data stewardship tasks, including metadata management, security, and privacy by design.
- Design and build data extraction, loading, and transformation pipelines using Python and other GCP data technologies.
- Demonstrate SQL and database proficiency in various data engineering tasks.
- Automate data workflows by setting up DAGs in tools like Control-M, Apache Airflow, and Prefect.
- Develop Unix scripts to support various data operations.
- Model data to support business intelligence and analytics initiatives.
- Utilize infrastructure-as-code tools such as Terraform, Puppet, and Ansible for deployment automation.
- Expertise in GCP data warehousing technologies, including BigQuery, Cloud SQL, Dataflow, Data Catalog, Cloud Composer, Google Cloud Storage, IAM, Compute Engine, Cloud Data Fusion, and Dataproc (good to have).
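As a concrete illustration of the DAG-based workflow automation mentioned above, here is a minimal Airflow sketch. It assumes Airflow 2.x (as in Cloud Composer 2) with the Google provider installed; the bucket, dataset, table, and task names are hypothetical placeholders.

```python
# Minimal Airflow DAG sketch (illustrative only): load files from GCS into a
# BigQuery staging table, then build a reporting table with a SQL job.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_orders_load",
    schedule="0 2 * * *",          # run daily at 02:00
    start_date=datetime(2024, 1, 1),
    catchup=False,
    tags=["example"],
) as dag:

    # Load the day's raw files from Cloud Storage into a staging table.
    load_raw = GCSToBigQueryOperator(
        task_id="load_raw_orders",
        bucket="example-raw-bucket",
        source_objects=["orders/{{ ds }}/*.csv"],
        destination_project_dataset_table="example_project.staging.orders",
        source_format="CSV",
        skip_leading_rows=1,
        write_disposition="WRITE_TRUNCATE",
    )

    # Transform the staging data into a reporting table.
    build_report = BigQueryInsertJobOperator(
        task_id="build_daily_report",
        configuration={
            "query": {
                "query": """
                    CREATE OR REPLACE TABLE `example_project.reporting.daily_orders` AS
                    SELECT DATE(order_ts) AS order_date, COUNT(*) AS orders
                    FROM `example_project.staging.orders`
                    GROUP BY order_date
                """,
                "useLegacySql": False,
            }
        },
    )

    load_raw >> build_report
```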

Posted 1 month ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related discipline.
- Minimum of 5 years of practical experience in a data engineering or comparable position.
- Demonstrated expertise in SQL and Python (or similar languages such as Scala/Java).
- Extensive experience with data pipeline orchestration tools (e.g., Airflow, dbt).
- Proficiency in cloud data platforms, including AWS (Redshift, S3, Glue), GCP (BigQuery, Dataflow), or Azure (Data Factory, Synapse).
- Familiarity with big data technologies (e.g., Spark, Kafka, Hive) and other data tools.
- Solid grasp of data warehousing principles, data modeling techniques, and performance tuning (e.g., Erwin Data Modeler, MySQL Workbench).
- Exceptional problem-solving abilities coupled with a proactive and team-oriented approach.

Posted 1 month ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Our organization is seeking a skilled Senior Data Engineer to become an integral part of our team. In this role, you will focus on data integration and ETL projects on cloud-based platforms. You will take charge of creating and executing sophisticated data solutions, ensuring data accuracy, reliability, and accessibility.

Responsibilities:
- Create and execute sophisticated data solutions on cloud-based platforms.
- Build ETL processes using SQL, Python, and other applicable technologies.
- Maintain data accuracy, reliability, and accessibility for all stakeholders.
- Work with cross-functional teams to understand data integration needs and specifications.
- Produce and maintain documentation, including technical specifications, data flow diagrams, and data mappings.
- Enhance and tune data integration processes for optimal performance and efficiency, guaranteeing data accuracy and integrity.

Requirements:
- Bachelor's degree in Computer Science, Electrical Engineering, or a related field.
- 5-8 years of experience in data engineering.
- Proficiency in cloud-native or Spark-based ETL tools such as AWS Glue, Azure Data Factory, or GCP Dataflow.
- Strong knowledge of SQL for data querying and manipulation.
- Experience with Snowflake for data warehousing.
- Familiarity with cloud platforms like AWS, GCP, or Azure for data storage and processing.
- Excellent problem-solving skills and attention to detail.
- Good verbal and written communication skills in English at a B2 level.

Nice to have:
- Background in ETL using Python.

Posted 1 month ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

The Technical Lead / Technical Consultant is a core role and the focal point of the project team, responsible for the whole technical solution and managing the day-to-day delivery. The role focuses on the technical solution architecture, detailed technical design, coaching of the development/implementation team, and governance of the technical delivery. It carries technical ownership of the solution from bid inception through implementation to client delivery, followed by after-sales support and best-practice advice, and includes interactions with internal stakeholders and clients to explain technology solutions and build a clear understanding of the client's business requirements through which to guide optimal design to meet their needs.

Job Description:

Must-Have Skills:
- Database (one or more of MS SQL Server, Oracle, Cloud SQL, Cloud Spanner, etc.)
- Data warehouse (one or more of BigQuery, Snowflake, etc.)
- ETL tooling (two or more of Cloud Data Fusion, Dataflow, Dataproc, Pub/Sub, Composer, Cloud Functions, Cloud Run, etc.)
- Experience in cloud platforms: GCP
- Python, PySpark; project and resource management
- SVN, JIRA; automation workflow (Composer, Cloud Scheduler, Apache Airflow, Tidal, Tivoli, or similar)

Good-to-Have Skills:
- UNIX shell scripting, Snowflake, Redshift; familiarity with NoSQL databases such as MongoDB
- ETL tooling (Databricks, AWS Glue, AWS Lambda, Amazon Kinesis, Amazon Firehose, Azure Data Factory (ADF), DBT, Talend, Informatica, IICS (Informatica Cloud))
- Experience in cloud platforms: AWS / Azure
- Client-facing skills

Key Responsibilities:
- Design simple to medium data solutions for clients using GCP cloud architecture.
- Strong understanding of data warehouses, data marts, data modeling, data structures, databases, and data ingestion and transformation.
- Working knowledge of ETL as well as database skills.
- Strong understanding of relational and non-relational databases and when to use them.
- Leadership and communication skills to collaborate with local leadership as well as our global teams.
- Translate technical requirements into ETL/SQL application code.
- Document project architecture, explain the detailed design to the team, and create low-level to high-level designs.
- Create technical documents for ETL and SQL developments using Visio, PowerPoint, and other MS Office packages.
- Engage with Project Managers, Business Analysts, and Application DBAs to implement ETL solutions.
- Perform mid- to complex-level tasks independently.
- Support clients, data scientists, and analytical consultants working on marketing solutions.
- Work with cross-functional internal teams and external clients.
- Strong project management and organization skills; ability to lead 1-2 projects with a team size of 2-3 members.
- Code management systems, including code review, deployment, cod
- Work closely with the QA/Testing team to help identify and implement defect-reduction initiatives.
- Work closely with the Architecture team to make sure architecture standards and principles are followed during development.
- Perform proofs of concept on new platforms and validate proposed solutions.
- Work with the team to establish and reinforce disciplined software development processes, standards, and error-recovery procedures.
- Must understand software development methodologies, including waterfall and agile.
- Distribute and manage SQL development work across the team.
- The candidate must be willing to work during overlapping hours with US-based teams to ensure effective collaboration and communication, typically between [e.g., 6:00 PM to 11:00 PM IST], depending on project needs.

Qualifications:
- Bachelor's or Master's degree in Computer Science with >= 7 years of IT experience.

Location: Bangalore
Brand: Merkle
Time Type: Full time
Contract Type: Permanent

Posted 1 month ago

Apply

7.0 years

0 Lacs

New Delhi, Delhi, India

On-site

The Technical Lead / Technical Consultant is a core role and the focal point of the project team, responsible for the whole technical solution and managing the day-to-day delivery. The role focuses on the technical solution architecture, detailed technical design, coaching of the development/implementation team, and governance of the technical delivery. It carries technical ownership of the solution from bid inception through implementation to client delivery, followed by after-sales support and best-practice advice, and includes interactions with internal stakeholders and clients to explain technology solutions and build a clear understanding of the client's business requirements through which to guide optimal design to meet their needs.

Job Description:

Must-Have Skills:
- Database (one or more of MS SQL Server, Oracle, Cloud SQL, Cloud Spanner, etc.)
- Data warehouse (one or more of BigQuery, Snowflake, etc.)
- ETL tooling (two or more of Cloud Data Fusion, Dataflow, Dataproc, Pub/Sub, Composer, Cloud Functions, Cloud Run, etc.)
- Experience in cloud platforms: GCP
- Python, PySpark; project and resource management
- SVN, JIRA; automation workflow (Composer, Cloud Scheduler, Apache Airflow, Tidal, Tivoli, or similar)

Good-to-Have Skills:
- UNIX shell scripting, Snowflake, Redshift; familiarity with NoSQL databases such as MongoDB
- ETL tooling (Databricks, AWS Glue, AWS Lambda, Amazon Kinesis, Amazon Firehose, Azure Data Factory (ADF), DBT, Talend, Informatica, IICS (Informatica Cloud))
- Experience in cloud platforms: AWS / Azure
- Client-facing skills

Key Responsibilities:
- Design simple to medium data solutions for clients using GCP cloud architecture.
- Strong understanding of data warehouses, data marts, data modeling, data structures, databases, and data ingestion and transformation.
- Working knowledge of ETL as well as database skills.
- Strong understanding of relational and non-relational databases and when to use them.
- Leadership and communication skills to collaborate with local leadership as well as our global teams.
- Translate technical requirements into ETL/SQL application code.
- Document project architecture, explain the detailed design to the team, and create low-level to high-level designs.
- Create technical documents for ETL and SQL developments using Visio, PowerPoint, and other MS Office packages.
- Engage with Project Managers, Business Analysts, and Application DBAs to implement ETL solutions.
- Perform mid- to complex-level tasks independently.
- Support clients, data scientists, and analytical consultants working on marketing solutions.
- Work with cross-functional internal teams and external clients.
- Strong project management and organization skills; ability to lead 1-2 projects with a team size of 2-3 members.
- Code management systems, including code review, deployment, cod
- Work closely with the QA/Testing team to help identify and implement defect-reduction initiatives.
- Work closely with the Architecture team to make sure architecture standards and principles are followed during development.
- Perform proofs of concept on new platforms and validate proposed solutions.
- Work with the team to establish and reinforce disciplined software development processes, standards, and error-recovery procedures.
- Must understand software development methodologies, including waterfall and agile.
- Distribute and manage SQL development work across the team.
- The candidate must be willing to work during overlapping hours with US-based teams to ensure effective collaboration and communication, typically between [e.g., 6:00 PM to 11:00 PM IST], depending on project needs.

Qualifications:
- Bachelor's or Master's degree in Computer Science with >= 7 years of IT experience.

Location: Bangalore
Brand: Merkle
Time Type: Full time
Contract Type: Permanent

Posted 1 month ago

Apply

4.0 years

0 Lacs

India

Remote

Job Post: AI/ML Engineer
Experience: 4+ years
Location: Remote

Key Responsibilities:
- Design, build, and maintain ML infrastructure on GCP using tools such as Vertex AI, GKE, Dataflow, BigQuery, and Cloud Functions.
- Develop and automate ML pipelines for model training, validation, deployment, and monitoring using tools like Kubeflow Pipelines, TFX, or Vertex AI Pipelines.
- Work with data scientists to productionize ML models and support experimentation workflows.
- Implement model monitoring and alerting for drift, performance degradation, and data quality issues.
- Manage and scale containerized ML workloads using Kubernetes (GKE) and Docker.
- Set up CI/CD workflows for ML using tools like Cloud Build, Bitbucket, Jenkins, or similar.
- Ensure proper security, versioning, and compliance across the ML lifecycle.
- Maintain documentation, artifacts, and reusable templates for reproducibility and auditability.

A GCP Professional Machine Learning Engineer certification is a plus.
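To illustrate the ML pipeline automation this posting refers to, here is a minimal Vertex AI Pipelines sketch using the KFP v2 SDK. The project, region, bucket, and component logic are hypothetical placeholders; it assumes the kfp and google-cloud-aiplatform packages.

```python
# Minimal Vertex AI Pipelines sketch (illustrative only); placeholder components.
from kfp import dsl, compiler
from google.cloud import aiplatform


@dsl.component(base_image="python:3.10")
def train_model(learning_rate: float) -> str:
    """Placeholder training step; a real component would fit and export a model."""
    print(f"training with learning_rate={learning_rate}")
    return "gs://example-bucket/models/model-v1"


@dsl.component(base_image="python:3.10")
def evaluate_model(model_uri: str) -> float:
    """Placeholder evaluation step; returns a dummy metric."""
    print(f"evaluating {model_uri}")
    return 0.92


@dsl.pipeline(name="example-training-pipeline")
def training_pipeline(learning_rate: float = 0.01):
    train_task = train_model(learning_rate=learning_rate)
    evaluate_model(model_uri=train_task.output)


if __name__ == "__main__":
    # Compile the pipeline definition and submit it to Vertex AI Pipelines.
    compiler.Compiler().compile(training_pipeline, "pipeline.json")
    aiplatform.init(project="example-project", location="us-central1",
                    staging_bucket="gs://example-bucket")
    job = aiplatform.PipelineJob(
        display_name="example-training-pipeline",
        template_path="pipeline.json",
        parameter_values={"learning_rate": 0.01},
    )
    job.run()
```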

Posted 2 months ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

Remote

What would a typical day at your work be like?
- You will lead and manage the delivery of projects and be responsible for the delivery of project and team goals.
- Build and support data ingestion and processing pipelines. This will entail extracting, loading, and transforming data from a wide variety of sources using the latest data frameworks and technologies.
- Design, build, test, and maintain machine learning infrastructure and frameworks to empower data scientists to rapidly iterate on model development.
- Own and lead client engagement and communication on technical projects. Define project scopes and track project progress and delivery.
- Plan and execute project architecture and allocate work to the team.
- Keep up to date with advances in big data technologies and run pilots to design the data architecture to scale with increased data volume.
- Partner with software engineering teams to drive completion of multi-functional projects.

What do we expect?
- Minimum 6 years of overall experience in data engineering and 2+ years leading a team as team lead and doing project management.
- Experience working with a global team and remote clients.
- Hands-on experience in building data pipelines on various infrastructures.
- Knowledge of statistical and machine learning techniques.
- Hands-on experience in integrating machine learning into data pipelines.
- Ability to work hands-on with the data engineers in the team on the design and development of solutions using the relevant big data technologies and data warehouse concepts.
- Strong knowledge of advanced SQL, data warehousing concepts, and data mart design.
- Strong experience with modern data platform components such as Spark, Python, etc.
- Experience setting up and maintaining data warehouses (Google BigQuery, Redshift, Snowflake) and data lakes (GCS, AWS S3, etc.) for an organization.
- Experience building data pipelines with AWS Glue, Azure Data Factory, and Google Dataflow.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra/MongoDB.
- Strong problem-solving and communication skills.

Posted 2 months ago

Apply

0.0 - 5.0 years

0 Lacs

Gurugram, Haryana

On-site

Senior Data Engineer (GCP, Python)
Gurgaon, India | Information Technology | Job ID 314204

Job Description

About The Role: Grade Level (for internal use): 10

S&P Global Mobility – The Role: Senior Data Engineer

Department overview:
Automotive Insights at S&P Mobility leverages technology and data science to provide unique insights, forecasts, and advisory services spanning every major market and the entire automotive value chain, from product planning to marketing, sales, and the aftermarket. We provide the most comprehensive data spanning the entire automotive lifecycle: past, present, and future. With over 100 years of history, unmatched credentials, and a larger base of customers than any other provider, we are the industry benchmark for clients around the world, helping them make informed decisions to capitalize on opportunity and avoid risk. Our solutions are used by nearly every major OEM, 90% of the top 100 tier-one suppliers, media agencies, governments, insurance companies, and financial stakeholders to provide actionable insights that enable better decisions and better results.

Position summary:
S&P Global is seeking an experienced and driven Senior Data Engineer who is passionate about delivering high-value, high-impact solutions to the world's most demanding, high-profile clients. The ideal candidate must have at least 5 years of experience in developing and deploying data pipelines on Google Cloud Platform (GCP) and should be passionate about building high-quality, reusable pipelines using cutting-edge technologies. This role involves designing, building, and maintaining scalable data pipelines, optimizing workflows, and ensuring data integrity across multiple systems. The candidate will collaborate with data scientists, analysts, and software engineers to develop robust and efficient data solutions.

Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines.
- Optimize and automate data ingestion, transformation, and storage processes.
- Work with structured and unstructured data sources, ensuring data quality and consistency.
- Develop and maintain data models, warehouses, and databases.
- Collaborate with cross-functional teams to support data-driven decision-making.
- Ensure data security, privacy, and compliance with industry standards.
- Troubleshoot and resolve data-related issues in a timely manner.
- Monitor and improve system performance, reliability, and scalability.
- Stay up-to-date with emerging data technologies and recommend improvements to our data architecture and engineering practices.

What you will need:
- Strong programming skills using Python.
- 5+ years of experience in data engineering, ETL development, or a related role.
- Proficiency in SQL and experience with relational (PostgreSQL, MySQL, etc.) and NoSQL (DynamoDB, MongoDB, etc.) databases.
- Proficiency building data pipelines on Google Cloud Platform (GCP) using services like Dataflow, Cloud Batch, BigQuery, Bigtable, Cloud Functions, Cloud Workflows, Cloud Composer, etc.
- Strong understanding of data modeling, data warehousing, and data governance principles.
- Capable of mentoring junior data engineers and assisting them with technical challenges.
- Familiarity with orchestration tools like Apache Airflow.
- Familiarity with containerization and orchestration.
- Experience with version control systems (Git) and CI/CD pipelines.
- Excellent problem-solving skills and ability to work in a fast-paced environment.
- Excellent communication skills.
- Hands-on experience with Snowflake is a plus.
- Experience with big data technologies is a plus.
- Experience with AWS is a plus.
- Able to convert business queries into technical documentation.

Education and Experience:
- Bachelor's degree in Computer Science, Information Systems, Information Technology, or a similar major, or a Certified Development Program.
- 5+ years of experience building data pipelines using Python and GCP (Google Cloud Platform).

About Company Statement:
S&P Global delivers essential intelligence that powers decision making. We provide the world's leading organizations with the right data, connected technologies, and expertise they need to move ahead. As part of our team, you'll help solve complex challenges that equip businesses, governments, and individuals with the knowledge to adapt to a changing economic landscape. S&P Global Mobility turns invaluable insights captured from automotive data into help for our clients to understand today's market, reach more customers, and shape the future of automotive mobility.

About S&P Global Mobility:
At S&P Global Mobility, we provide invaluable insights derived from unmatched automotive data, enabling our customers to anticipate change and make decisions with conviction. Our expertise helps them to optimize their businesses, reach the right consumers, and shape the future of mobility. We open the door to automotive innovation, revealing the buying patterns of today and helping customers plan for the emerging technologies of tomorrow. For more information, visit www.spglobal.com/mobility.

What's In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all: from finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people; that's why we provide everything you and your career need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global:
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer:
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories - United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings (Strategic Workforce Planning)

Job ID: 314204
Posted On: 2025-05-30
Location: Gurgaon, Haryana, India

Posted 2 months ago

Apply

4.0 years

0 Lacs

India

Remote

Job Description KLDiscovery, a leading global provider of electronic discovery, information governance and data recovery services, is currently seeking a Senior Software Engineer for an exciting new opportunity. This person will develop core parts of our eDiscovery offerings, including software development, testing, and systems automation. They will collaborate with team members, product owners, designers, architects, and other development teams to research relevant technologies and build innovative solutions that enhance our offerings and exceed customer needs. If you like working in a creative, technology-driven, high energy, collaborative, casual environment, and you have strong software development abilities, this is the opportunity for you! Hybrid or remote, work from home opportunity. Responsibilities Create, Validate and Review program code per specifications. Develop automated unit and API tests. Support bug fixes and implement enhancements to applications in Production. Create, design and review SW documentation. Utilize, communicate, and enforce coding standards. Provide Technical Support to applications in Production within defined SLA. Adhere to development processes and workflows. Assist and mentor team demonstrating technical excellence. Detects problems and areas that need improvement early and raises issues. Qualifications Fluent English (C1) At least 4 years of commercial, hands-on software development experience in C#/.NET and C++ Experience with ASP.NET Core Blazor Experience with desktop applications (Winforms preferred) Experience with background jobs and workers (e.g. Hangfire) Experience with Angular is a plus Creating dataflow/sequence/C4 diagrams Good understanding of at least one of architectural/design patterns: MVC/MVP/MVVM/Clean/Screaming/Hexagonal architectures .NET memory model and performance optimizations solutions Writing functional tests. Writing structure tests. Understanding modularity and vertical slices. Data privacy and securing desktop apps. Ability to design functionalities based on requirements Experience with Entity Framework Core Our Cultural Values Entrepreneurs At Heart, We Are a Customer First Team Sharing One Goal And One Vision. We Seek Team Members Who Are Humble - No one is above another; we all work together to meet our clients’ needs and we acknowledge our own weaknesses Hungry - We all are driven internally to be successful and to continually expand our contribution and impact Smart - We use emotional intelligence when working with one another and with clients Our culture shapes our actions, our products, and the relationships we forge with our customers. Who We Are KLDiscovery provides technology-enabled services and software to help law firms, corporations, government agencies and consumers solve complex data challenges. The company, with offices in 26 locations across 17 countries, is a global leader in delivering best-in-class eDiscovery, information governance and data recovery solutions to support the litigation, regulatory compliance, internal investigation and data recovery and management needs of our clients. Serving clients for over 30 years, KLDiscovery offers data collection and forensic investigation, early case assessment, electronic discovery and data processing, application software and data hosting for web-based document reviews, and managed document review services. 
In addition, through its global Ontrack Data Recovery business, KLDiscovery delivers world-class data recovery, email extraction and restoration, data destruction and tape management. KLDiscovery has been recognized as one of the fastest growing companies in North America by both Inc. Magazine (Inc. 5000) and Deloitte (Deloitte’s Technology Fast 500) and CEO Chris Weiler has been honored as a past Ernst & Young Entrepreneur of the Year™. Additionally, KLDiscovery is an Orange-level Relativity Best in Service Partner, a Relativity Premium Hosting Partner and maintains ISO/IEC 27001 Certified data centers. KLDiscovery is an Equal Opportunity Employer.

Posted 2 months ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

What you’ll do: Design, develop, and operate high-scale applications across the full engineering stack. Design, develop, test, deploy, maintain, and improve software. Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.) Work across teams to integrate our systems with existing internal systems, Data Fabric, CSA Toolset. Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality. Participate in a tight-knit, globally distributed engineering team. Triage product or system issues and debug/track/resolve by analyzing the sources of issues and the impact on network or service operations and quality. Research, create, and develop software applications to extend and improve on Equifax Solutions. Manage your own project priorities, deadlines, and deliverables. Collaborate on scalability issues involving access to data and information. Actively participate in Sprint planning, Sprint Retrospectives, and other team activities. What experience you need: Bachelor's degree or equivalent experience 5+ years of software engineering experience 5+ years experience writing, debugging, and troubleshooting code in mainstream Java, SpringBoot, TypeScript/JavaScript, HTML, CSS 5+ years experience with Cloud technology: GCP, AWS, or Azure 5+ years experience designing and developing cloud-native solutions 5+ years experience designing and developing microservices using Java, SpringBoot, GCP SDKs, GKE/Kubernetes 5+ years experience deploying and releasing software using Jenkins CI/CD pipelines, understanding infrastructure-as-code concepts, Helm Charts, and Terraform constructs What could set you apart: Self-starter that identifies/responds to priority shifts with minimal supervision. Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, PubSub, GCS, Composer/Airflow, and others UI development (e.g. HTML, JavaScript, Angular and Bootstrap) Experience with backend technologies such as JAVA/J2EE, SpringBoot, SOA and Microservices Source code control management systems (e.g. SVN/Git, GitHub) and build tools like Maven & Gradle. Agile environments (e.g. Scrum, XP) Relational databases (e.g. SQL Server, MySQL) Atlassian tooling (e.g. JIRA, Confluence, and GitHub) Developing with modern JDK (v1.7+) Automated Testing: JUnit, Selenium, LoadRunner, SoapUI Cloud Certification strongly preferred
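For orientation only (this sketch is not part of the listing): the "Dataflow/Apache Beam" item above typically refers to pipelines along the lines of the minimal Python example below. The project, topic, table, and field names are hypothetical.

```python
# Hypothetical Apache Beam pipeline of the kind run on Dataflow: read JSON
# events from Pub/Sub, count them per user in 60-second windows, and append
# the aggregates to BigQuery. All resource names are made up.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms.window import FixedWindows


def run():
    options = PipelineOptions(
        streaming=True,
        runner="DataflowRunner",
        project="example-project",          # hypothetical GCP project
        region="us-central1",
        temp_location="gs://example-bucket/tmp",
    )
    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "ReadEvents" >> beam.io.ReadFromPubSub(
                topic="projects/example-project/topics/events")
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "KeyByUser" >> beam.Map(lambda event: (event["user_id"], 1))
            | "Window" >> beam.WindowInto(FixedWindows(60))
            | "CountPerUser" >> beam.CombinePerKey(sum)
            | "ToRow" >> beam.Map(lambda kv: {"user_id": kv[0], "event_count": kv[1]})
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "example-project:analytics.user_event_counts",
                schema="user_id:STRING,event_count:INTEGER",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )


if __name__ == "__main__":
    run()
```

A batch variant of the same pipeline would drop the streaming flag and read from Cloud Storage instead of Pub/Sub, which is the usual split between Composer-scheduled batch jobs and always-on streaming jobs.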

Posted 2 months ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

We are looking for an experienced Integration Technical Lead with over 10 years of in-depth experience in Oracle Fusion Middleware technologies such as SOA Suite, Oracle Service Bus (OSB), and Oracle Data Integrator (ODI). The candidate will be responsible for leading integration initiatives including custom development, platform customization, and day-to-day operational support. A strong interest in Google Cloud Platform (GCP) is highly desirable, with clear opportunities for training and skill development. ShyftLabs is a growing data product company founded in early 2020 and works primarily with Fortune 500 companies. We deliver digital solutions built to help accelerate the growth of businesses in various industries by focusing on creating value through innovation. Job Responsibilities: 1. Integration Leadership & Development: Lead end-to-end integration design and development across on-premise and cloud systems using Oracle SOA, OSB, and ODI. Drive new integration projects, from requirements gathering through to deployment and support. Develop, customize, and maintain reusable integration components and templates. Translate complex business processes into scalable, secure, and performant integration solutions. 2. Platform Customization & Optimization: Customize Oracle Fusion Middleware components to meet specific business needs and performance objectives. Evaluate existing integrations and enhance them for greater efficiency and lower latency. Implement best practices in integration design, error handling, and performance tuning. 3. Operational Excellence & Support: Own the operational stability of integration platforms, including monitoring, incident resolution, and root cause analysis. Manage daily operations such as deployments, patches, backups, and performance reviews. Collaborate with IT support teams to maintain integration SLAs, uptime, and reliability. 4. Cloud Integration & GCP Adoption: Contribute to the design of hybrid and cloud-native integration architectures using GCP. Learn and eventually implement integration patterns using tools like Apigee, Pub/Sub, Cloud Functions, and Dataflow. Participate in the GCP migration initiative for legacy integration assets. Basic Qualifications: 10+ years of hands-on experience with Oracle SOA Suite, OSB, and ODI in enterprise environments. Expertise in web services (REST/SOAP), XML, XSD, XSLT, XPath, and service orchestration. Strong skills in platform customization, new integration development, integration monitoring, alerting, troubleshooting processes, and long-term system maintenance. Experience with performance optimization, fault tolerance, and secure integrations. Excellent communication and team leadership skills. Preferred Qualifications: Exposure to Google Cloud Platform (GCP) or strong interest and ability to learn. Familiarity with GCP services for integration (Pub/Sub, Cloud Storage, Cloud Functions). Understanding of containerized deployments using Docker and Kubernetes. Experience with DevOps tools and CI/CD pipelines for integration delivery. We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees, offering extensive learning and development resources.
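For illustration only (not part of the listing): one common GCP integration pattern of the kind mentioned above is publishing an event to Pub/Sub so that downstream services such as Cloud Functions or Dataflow can consume it. This is a minimal Python sketch; the project id, topic id, and payload fields are invented.

```python
# Hypothetical example of publishing an integration event to Pub/Sub so that
# downstream GCP services (a Cloud Function trigger, a Dataflow pipeline) can
# react to it, decoupling them from the on-premise middleware.
import json

from google.cloud import pubsub_v1


def publish_order_event(project_id: str, topic_id: str, order: dict) -> str:
    """Publish a JSON-encoded order event and return the Pub/Sub message id."""
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(project_id, topic_id)
    payload = json.dumps(order).encode("utf-8")
    # Message attributes let subscribers filter without parsing the payload.
    future = publisher.publish(
        topic_path, payload, source="oracle-soa", event_type="order.created"
    )
    return future.result()  # blocks until Pub/Sub acknowledges the message


if __name__ == "__main__":
    message_id = publish_order_event(
        "example-project",                      # hypothetical project
        "integration-events",                   # hypothetical topic
        {"order_id": "SO-1001", "status": "BOOKED"},
    )
    print(f"published message {message_id}")
```

The same topic can fan out to several subscriptions, which is what makes this pattern useful for gradually migrating legacy SOA/OSB integrations toward cloud consumers.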

Posted 2 months ago

Apply

2.0 years

0 Lacs

India

On-site

Description The Position We are seeking a seasoned engineer with a passion for changing the way millions of people save energy. You’ll work within the Engineering team to build and improve our platforms to deliver flexible and creative solutions to our utility partners and end users, and help us achieve our ambitious goals for our business and the planet. We are seeking a skilled and passionate Data Engineer - Business Intelligence with expertise in Data Engineering and BI Reporting to join our development team. As a Data Engineer, you will play a crucial role in developing different components, harnessing the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and data processing, and identify the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. You will also work on creating BI reports as well as the development of a Business Intelligence platform that will enable users to create reports and dashboards based on their requirements. You will coordinate with the rest of the team working on different layers of the infrastructure. Therefore, a commitment to collaborative problem solving, sophisticated design, and a quality product is important. You will own the development and its quality independently and be responsible for high-quality deliverables. And you will work with a great team with excellent benefits. Responsibilities & Skills You should: Be excited to work with talented, committed people in a fast-paced environment. Have proven experience as a Data Engineer with a focus on BI reporting. Be designing, building, and maintaining high-performance solutions with reusable and reliable code. Use a rigorous approach for product improvement and customer satisfaction. Love developing great software as a seasoned product engineer. Be ready, able, and willing to jump onto a call with stakeholders to help solve problems. Be able to deliver against several initiatives simultaneously. Have a strong eye for detail and quality of code. Have an agile mindset. Have strong problem-solving skills and attention to detail. Required Skills (Data Engineer) You ideally have 2+ years of professional experience. Design, build, and maintain scalable data pipelines and ETL processes to support business analytics and reporting needs. Strong experience with SQL for querying and transforming large datasets, and optimizing query performance in relational databases. Proficiency in Python for building and automating data pipelines, ETL processes, and data integration workflows. Familiarity with big data frameworks such as Apache Spark or PySpark for distributed data processing. Strong understanding of data modeling principles for building scalable and efficient data architectures (e.g., star schema, snowflake schema). Good to have: experience with Databricks for managing and processing large datasets, implementing Delta Lake, and leveraging its collaborative environment. Knowledge of Google Cloud Platform (GCP) services like BigQuery, Dataflow, Pub/Sub, and Cloud Storage for end-to-end data engineering solutions. Familiarity with version control systems such as Git and CI/CD pipelines for managing code and deploying workflows. Awareness of data governance and security best practices, including access control, data masking, and compliance with industry standards.
Exposure to monitoring and logging tools like Datadog, Cloud Logging, or the ELK stack for maintaining pipeline reliability. Ability to understand business requirements and translate them into technical requirements. Inclination to design solutions for complex data problems. Ability to deliver against several initiatives simultaneously as a multiplier. Demonstrable experience with writing unit and functional tests. Required Skills (BI Reporting) Strong experience in developing Business Intelligence reports and dashboards via tools such as Tableau, Power BI, Sigma, etc. Ability to analyse and deeply understand the data, relate it to the business application, and derive meaningful insights from the data. Preferred The following experiences are not required, but you'll stand out from other applicants if you have any of the following, in our order of importance: You are an experienced developer - a minimum of 2 years of professional experience. Work experience and strong proficiency in Python, SQL, and BI Reporting and their associated frameworks (like Flask, FastAPI, etc.). Experience with cloud infrastructure like AWS, GCP, or another cloud service provider. CI/CD experience. You are a Git guru and revel in collaborative workflows. You work on the command line confidently and are familiar with all the goodies that the Linux toolkit can provide. Familiarity with Apache Spark and PySpark. Qualifications Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Uplight provides equal employment opportunities to all employees and applicants and prohibits discrimination and harassment of any type without regard to race (including hair texture and hairstyles), color, religion (including head coverings), age, sex, national origin, caste, disability status, genetics, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
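As an aside (not part of the posting): the pipeline work described above often looks roughly like the PySpark sketch below, which loads raw readings, cleans them, and aggregates them to a daily grain for BI dashboards. All bucket paths and column names are hypothetical.

```python
# Hypothetical PySpark ETL step: load raw meter readings, drop bad rows, and
# aggregate to one row per meter per day, the grain a BI-facing fact table
# might use. Paths and column names are invented for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-usage-etl").getOrCreate()

# Extract: raw readings landed as Parquet files.
readings = spark.read.parquet("gs://example-bucket/raw/meter_readings/")

# Transform: filter invalid readings and roll up to a daily grain.
daily_usage = (
    readings
    .filter(F.col("kwh").isNotNull() & (F.col("kwh") >= 0))
    .withColumn("reading_date", F.to_date("reading_ts"))
    .groupBy("meter_id", "reading_date")
    .agg(
        F.sum("kwh").alias("total_kwh"),
        F.count("*").alias("reading_count"),
    )
)

# Load: write a date-partitioned table that BI tools (Tableau, Power BI, Sigma) can query.
(
    daily_usage.write
    .mode("overwrite")
    .partitionBy("reading_date")
    .parquet("gs://example-bucket/curated/fact_daily_usage/")
)
```

Output at this grain is roughly the kind of fact table the star-schema modeling bullet above has in mind: dashboards join it to meter and date dimensions rather than scanning raw readings.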

Posted 2 months ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. L&A Business Consultant Working as part of the Consulting team, you will take part in engagements related to a wide range of topics. Some examples of domains in which you will support our clients include the following: Proficient in Individual and Group Life Insurance concepts, different types of Annuity products, etc. Proficient in different insurance plans - Qualified/Non-Qualified Plans, IRA, Roth IRA, CRA, SEP Solid knowledge of the Policy Life cycle Illustrations/Quote/Rating New Business & Underwriting Policy Servicing and Administration Billing & Payment Claims Processing Disbursement (Systematic withdrawals, RMD, Surrenders) Regulatory Changes & Taxation Understanding of business rules of Pay-out Understanding of upstream and downstream interfaces for the policy lifecycle Experience in DXC Platforms – Vantage, wmA, nbA, CSA, Cyber-life, Life70, Life Asia, PerformancePlus Consulting Skills – Experience in creating business process maps for future state architecture, creating a WBS for the overall conversion strategy, and refining requirements in multi-vendor engagements. Requirements Gathering, Elicitation – writing BRDs, FSDs. Conducting JAD sessions and workshops to capture requirements and working closely with the Product Owner. Work with the client to define the most optimal future state operational process and related product configuration. Define scope by providing innovative solutions and challenging all new client requirements and change requests, while ensuring that the client gets the required business value. Elaborate and deliver clearly defined requirement documents with relevant dataflow and process flow diagrams. Work closely with the product design and development team to analyse and extract functional enhancements. Provide product consultancy and assist the client with acceptance criteria gathering and support throughout the project life cycle. Technology Skills - Experienced in data migration projects, ensuring seamless transfer of data between systems while maintaining data integrity and security. Skilled in data analytics, utilizing various tools and techniques to extract insights and drive informed decision-making. Strong understanding of data governance principles and best practices, ensuring data quality and compliance. Collaborative team player, able to work closely with stakeholders and technical teams to define requirements and implement effective solutions. Industry certifications (AAPA/LOMA) will be an added advantage. Experience on these COTS products is preferable: FAST ALIP OIPA wmA We expect you to work effectively as a team member and build good relationships with the client. You will have the opportunity to expand your domain knowledge and skills and will be able to collaborate frequently with other EY professionals with a wide variety of expertise. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 2 months ago

Apply

0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. L&A Business Consultant Working as part of the Consulting team, you will take part in engagements related to a wide range of topics. Some examples of domains in which you will support our clients include the following: Proficient in Individual and Group Life Insurance concepts, different types of Annuity products, etc. Proficient in different insurance plans - Qualified/Non-Qualified Plans, IRA, Roth IRA, CRA, SEP Solid knowledge of the Policy Life cycle Illustrations/Quote/Rating New Business & Underwriting Policy Servicing and Administration Billing & Payment Claims Processing Disbursement (Systematic withdrawals, RMD, Surrenders) Regulatory Changes & Taxation Understanding of business rules of Pay-out Understanding of upstream and downstream interfaces for the policy lifecycle Experience in DXC Platforms – Vantage, wmA, nbA, CSA, Cyber-life, Life70, Life Asia, PerformancePlus Consulting Skills – Experience in creating business process maps for future state architecture, creating a WBS for the overall conversion strategy, and refining requirements in multi-vendor engagements. Requirements Gathering, Elicitation – writing BRDs, FSDs. Conducting JAD sessions and workshops to capture requirements and working closely with the Product Owner. Work with the client to define the most optimal future state operational process and related product configuration. Define scope by providing innovative solutions and challenging all new client requirements and change requests, while ensuring that the client gets the required business value. Elaborate and deliver clearly defined requirement documents with relevant dataflow and process flow diagrams. Work closely with the product design and development team to analyse and extract functional enhancements. Provide product consultancy and assist the client with acceptance criteria gathering and support throughout the project life cycle. Technology Skills - Experienced in data migration projects, ensuring seamless transfer of data between systems while maintaining data integrity and security. Skilled in data analytics, utilizing various tools and techniques to extract insights and drive informed decision-making. Strong understanding of data governance principles and best practices, ensuring data quality and compliance. Collaborative team player, able to work closely with stakeholders and technical teams to define requirements and implement effective solutions. Industry certifications (AAPA/LOMA) will be an added advantage. Experience on these COTS products is preferable: FAST ALIP OIPA wmA We expect you to work effectively as a team member and build good relationships with the client. You will have the opportunity to expand your domain knowledge and skills and will be able to collaborate frequently with other EY professionals with a wide variety of expertise. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 2 months ago

Apply