3.0 - 5.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Educational Qualification: Bachelor of Engineering
Service Line: Data & Analytics Unit

Responsibilities:
- Base SAS Certified professional.
- Develop, implement, and optimize analytical models using SAS and SQL.
- Strong knowledge of SAS DI, SAS EG, and SAS BI tools.
- Analyze large datasets to derive actionable insights and support business decision-making.
- Design, implement, and maintain ETL workflows to extract, transform, and load data efficiently.
- Develop advanced SAS programs using SAS Macros for automation and data processing.
- Troubleshoot and optimize SAS code for performance improvements.
- Work on data warehousing projects to enable efficient data storage and retrieval.
- Basic knowledge of Unix scripts.

JL5B (8+ years of experience):
- Advanced SAS Certified with 5+ years of experience; has executed a minimum of 2 SAS migration projects.
- Strong knowledge of SAS DI, SAS EG, and SAS BI tools.
- SAS migration projects: converting SAS scripts to Python, PySpark, Databricks, ADF, Snowflake, etc.
- Good knowledge of the SAS Viya 3.4 and 4.0 platforms.
- Good knowledge of SAS SMC, LSF, and other schedulers; basic knowledge of SAS administration and SAS Grid.
- Good knowledge of Unix commands and scripts.
- Handle case studies and complex data scenarios, ensuring data quality and integrity.
- Develop advanced SAS programs using SAS Macros for automation and data processing.
- Troubleshoot and optimize SAS code for performance improvements.
- Collaborate with the data engineering team to build and manage robust data pipelines.
- Work on data warehousing projects to enable efficient data storage and retrieval.
- Present findings and insights clearly to both technical and non-technical stakeholders.
- Work closely with teams across departments to gather requirements and deliver solutions.

Technical and Professional Requirements:
- Minimum 5+ years in the analytics domain, with a strong portfolio of relevant projects.
- Proficiency in SAS, SAS Macros, and SQL.
- Hands-on experience in ETL processes and tools.
- Knowledge of data engineering concepts and data warehousing best practices.

Preferred Skills: Technology - Reporting Analytics & Visualization - SAS Enterprise Guide
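As a rough illustration of the SAS-to-PySpark migration work this posting describes, here is a minimal sketch of how a typical SAS DATA-step filter plus PROC SUMMARY aggregation might be re-expressed in PySpark. All table names, column names, and S3 paths are assumptions, not part of the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sas_migration_sketch").getOrCreate()

# Hypothetical source table; in SAS this might have been read via a LIBNAME/SET statement.
claims = spark.read.parquet("s3://example-bucket/claims/")  # path is an assumption

# Rough equivalent of: DATA out; SET claims; WHERE status = 'PAID'; followed by PROC SUMMARY by region.
paid_by_region = (
    claims
    .filter(F.col("status") == "PAID")
    .groupBy("region")
    .agg(
        F.sum("claim_amount").alias("total_paid"),
        F.count("*").alias("claim_count"),
    )
)

paid_by_region.write.mode("overwrite").parquet("s3://example-bucket/paid_by_region/")
```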
Posted 2 weeks ago
7.0 - 12.0 years
4 - 9 Lacs
Bengaluru
Work from Office
Educational Qualification: Degree or postgraduate in Computer Science or a related field (or equivalent industry experience).

Requirements and responsibilities:
- Minimum 4+ years of development and design experience in Informatica Big Data Management.
- Extensive knowledge of Oozie scheduling, HQL, Hive, HDFS (including usage of storage controllers), and data partitioning.
- Extensive experience working with SQL and NoSQL databases.
- Linux OS configuration and use, including shell scripting.
- Good hands-on experience with design patterns and their implementation.
- Well versed in Agile, DevOps, and CI/CD principles (GitHub, Jenkins, etc.), and actively involved in troubleshooting issues in a distributed services ecosystem.
- Familiar with distributed services resiliency and monitoring in a production environment.
- Experience in designing, building, testing, and implementing security systems, including identifying security design gaps in existing and proposed architectures and recommending changes or enhancements.
- Responsible for adhering to established policies, following best practices, developing an in-depth understanding of exploits and vulnerabilities, and resolving issues by taking appropriate corrective action.
- Knowledge of designing security controls for source and data transfers, including CRON, ETLs, and JDBC/ODBC scripts.
- Understanding of networking basics, including DNS, proxy, ACLs, policies, and troubleshooting.
- High-level knowledge of compliance and regulatory requirements for data, including but not limited to encryption, anonymization, data integrity, and policy control features in large-scale infrastructures.
- Understanding of data sensitivity in terms of logging, events, and in-memory data storage, such as keeping card numbers and personally identifiable data out of logs.
- Implements wrapper solutions for new/existing components with no or minimal security controls to ensure compliance with bank standards.
- Experience in Agile methodology.
- Ensure quality of technical and application architecture and design of systems across the organization.
- Effectively research and benchmark technology against other best-in-class technologies.
- Experience in banking, financial, and fintech enterprise environments is preferred.
- Able to influence multiple teams on technical considerations, increasing their productivity and effectiveness by sharing deep knowledge and experience.
- Self-motivated self-starter; able to own and drive tasks without supervision and work collaboratively with teams across the organization.
- Excellent soft and interpersonal skills to interact with the team and present ideas; the engineer should have good listening skills and speak clearly in front of the team, stakeholders, and management.
- Should always maintain a positive attitude towards work, establish effective team relations, and build a climate of trust within the team; should be enthusiastic and passionate and create a motivating environment for the team.
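To illustrate the Hive/HDFS partitioning work named in this posting, here is a minimal sketch of creating and loading a date-partitioned Hive table through Spark SQL. The database, table, columns, and HDFS staging path are all placeholders, and a reachable Hive metastore is assumed.

```python
from pyspark.sql import SparkSession

# enableHiveSupport assumes a Hive metastore is reachable from the cluster.
spark = (
    SparkSession.builder
    .appName("hive_partitioning_sketch")
    .enableHiveSupport()
    .getOrCreate()
)

# Allow dynamic partition inserts (standard Hive settings for this pattern).
spark.conf.set("hive.exec.dynamic.partition", "true")
spark.conf.set("hive.exec.dynamic.partition.mode", "nonstrict")

# Hypothetical table partitioned by load date.
spark.sql("""
    CREATE TABLE IF NOT EXISTS txn_db.payments (
        payment_id STRING,
        amount     DECIMAL(18,2),
        channel    STRING
    )
    PARTITIONED BY (load_dt STRING)
    STORED AS PARQUET
""")

# Staging path on HDFS is an assumption; the DataFrame's last column must be the partition column.
incoming = spark.read.parquet("/data/staging/payments/")
incoming.write.mode("append").insertInto("txn_db.payments")
```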
Posted 2 weeks ago
10.0 - 15.0 years
15 - 19 Lacs
Bengaluru
Work from Office
Experience:
- 8+ years of experience in data engineering, specifically in cloud environments like AWS.
- Proficiency in PySpark for distributed data processing and transformation.
- Solid experience with AWS Glue for ETL jobs and managing data workflows.
- Hands-on experience with AWS Data Pipeline (DPL) for workflow orchestration.
- Strong experience with AWS services such as S3, Lambda, Redshift, RDS, and EC2.

Technical Skills:
- Proficiency in Python and PySpark for data processing and transformation tasks.
- Deep understanding of ETL concepts and best practices.
- Familiarity with AWS Glue (ETL jobs, Data Catalog, and Crawlers).
- Experience building and maintaining data pipelines with AWS Data Pipeline or similar orchestration tools.
- Familiarity with AWS S3 for data storage and management, including file formats (CSV, Parquet, Avro).
- Strong knowledge of SQL for querying and manipulating relational and semi-structured data.
- Experience with data warehousing and big data technologies, specifically within AWS.

Additional Skills:
- Experience with AWS Lambda for serverless data processing and orchestration.
- Understanding of AWS Redshift for data warehousing and analytics.
- Familiarity with data lakes, Amazon EMR, and Kinesis for streaming data processing.
- Knowledge of data governance practices, including data lineage and auditing.
- Familiarity with CI/CD pipelines and Git for version control.
- Experience with Docker and containerization for building and deploying applications.

Responsibilities:
- Design and Build Data Pipelines: Design, implement, and optimize data pipelines on AWS using PySpark, AWS Glue, and AWS Data Pipeline to automate data integration, transformation, and storage processes.
- ETL Development: Develop and maintain Extract, Transform, and Load (ETL) processes using AWS Glue and PySpark to efficiently process large datasets.
- Data Workflow Automation: Build and manage automated data workflows using AWS Data Pipeline, ensuring seamless scheduling, monitoring, and management of data jobs.
- Data Integration: Work with different AWS data storage services (e.g., S3, Redshift, RDS) to ensure smooth integration and movement of data across platforms.
- Optimization and Scaling: Optimize and scale data pipelines for high performance and cost efficiency, utilizing AWS services like Lambda, S3, and EC2.
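For context on the Glue-based ETL development described above, here is a minimal AWS Glue PySpark job skeleton following the standard Glue job structure. The catalog database, table, and S3 bucket names are placeholders, not part of the posting.

```python
import sys

from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (database and table names are placeholders).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Simple transformation: drop rows with null keys, then persist as Parquet on S3.
orders_df = orders.toDF().dropna(subset=["order_id"])
orders_df.write.mode("overwrite").parquet("s3://example-curated-bucket/orders/")

job.commit()
```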
Posted 2 weeks ago
9.0 - 14.0 years
8 - 12 Lacs
Hyderabad
Work from Office
Responsibilities:
- Develop and maintain a metadata-driven generic ETL framework for automating ETL code.
- Design, build, and optimize ETL/ELT pipelines using Databricks (PySpark/SQL) on AWS.
- Ingest data from a variety of structured and unstructured sources (APIs, RDBMS, flat files, streaming).
- Develop and maintain robust data pipelines for batch and streaming data using Delta Lake and Spark Structured Streaming.
- Implement data quality checks, validations, and logging mechanisms.
- Optimize pipeline performance, cost, and reliability.
- Collaborate with data analysts, BI, and business teams to deliver fit-for-purpose datasets.
- Support data modeling efforts (star and snowflake schemas, denormalized table approaches) and assist with data warehousing initiatives.
- Work with orchestration tools such as Databricks Workflows to schedule and monitor pipelines.
- Follow best practices for version control, CI/CD, and collaborative development.

Skills:
- Hands-on experience in ETL/data engineering roles.
- Strong expertise in Databricks (PySpark, SQL, Delta Lake); Databricks Data Engineer certification preferred.
- Experience with Spark optimization, partitioning, caching, and handling large-scale datasets.
- Proficiency in SQL and scripting in Python or Scala.
- Solid understanding of data lakehouse/medallion architectures and modern data platforms.
- Experience working with cloud storage systems like AWS S3.
- Familiarity with DevOps practices (Git, CI/CD, Terraform, etc.).
- Strong debugging, troubleshooting, and performance-tuning skills.
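As a sketch of the Delta Lake and Spark Structured Streaming responsibilities listed above, the following shows a streaming ingest of JSON events into a Delta table with a basic quality filter. It assumes a Databricks-style environment with the Delta libraries available; the S3 paths and event schema are illustrative only.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks a SparkSession is provided; building one here keeps the sketch self-contained.
spark = SparkSession.builder.appName("delta_stream_sketch").getOrCreate()

# Hypothetical landing zone of JSON events on S3.
events = (
    spark.readStream
    .format("json")
    .schema("event_id STRING, event_ts TIMESTAMP, amount DOUBLE")
    .load("s3://example-landing/events/")
)

# Basic data-quality gate: drop records missing a key or carrying negative amounts.
clean = events.filter(F.col("event_id").isNotNull() & (F.col("amount") >= 0))

query = (
    clean.writeStream
    .format("delta")
    .option("checkpointLocation", "s3://example-curated/checkpoints/events/")
    .outputMode("append")
    .start("s3://example-curated/delta/events/")
)
query.awaitTermination()
```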
Posted 2 weeks ago
7.0 - 12.0 years
4 - 8 Lacs
Bengaluru
Work from Office
About the Role
We are seeking a highly skilled Data Engineer with deep expertise in PySpark and the Cloudera Data Platform (CDP) to join our data engineering team. As a Data Engineer, you will be responsible for designing, developing, and maintaining scalable data pipelines that ensure high data quality and availability across the organization. This role requires a strong background in big data ecosystems, cloud-native tools, and advanced data processing techniques. The ideal candidate has hands-on experience with data ingestion, transformation, and optimization on the Cloudera Data Platform, along with a proven track record of implementing data engineering best practices. You will work closely with other data engineers to build solutions that drive impactful business insights.

Responsibilities
- Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
- Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
- Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
- Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes.
- Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
- Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.

Education and Experience
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- 3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.

Technical Skills
- PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
- Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
- Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
- Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
- Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
- Scripting and Automation: Strong scripting skills in Linux.
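To make the data-quality-and-validation responsibility above concrete, here is a minimal sketch of a reusable PySpark check that reports row counts, duplicate keys, and null ratios on a key column. The Hive table name, key column, and thresholds are assumptions.

```python
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks_sketch").getOrCreate()


def run_basic_dq_checks(df: DataFrame, key_col: str, max_null_ratio: float = 0.01) -> dict:
    """Return a small report of row count, duplicate keys, and null ratio on the key column."""
    total = df.count()
    distinct_keys = df.select(key_col).distinct().count()
    nulls = df.filter(F.col(key_col).isNull()).count()
    report = {
        "rows": total,
        "duplicate_keys": total - distinct_keys,
        "null_ratio": (nulls / total) if total else 0.0,
    }
    report["passed"] = report["duplicate_keys"] == 0 and report["null_ratio"] <= max_null_ratio
    return report


# Hypothetical Hive table on the cluster.
customers = spark.table("warehouse_db.customers")
print(run_basic_dq_checks(customers, key_col="customer_id"))
```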
Posted 2 weeks ago
9.0 - 14.0 years
5 - 8 Lacs
Bengaluru
Work from Office
Kafka Data Engineer
Data Engineer to build and manage data pipelines that support batch and streaming data solutions. The role requires expertise in creating seamless data flows across platforms such as a Data Lake/Lakehouse on Cloudera, Azure Databricks, and Kafka, for both batch and stream data pipelines.

Responsibilities
- Strong experience in developing, testing, and maintaining data pipelines (batch and stream) using Cloudera, Spark, Kafka, and Azure services such as ADF, Cosmos DB, Databricks, and NoSQL/Mongo DB.
- Strong programming skills in Spark, Python or Scala, and SQL.
- Optimize data pipelines to improve speed, performance, and reliability, ensuring that data is available to data consumers as required.
- Create ETL pipelines for downstream consumers by transforming data as per business logic.
- Work closely with Data Architects and Data Analysts to align data solutions with business needs and ensure the accuracy and accessibility of data.
- Implement data validation checks and error-handling processes to maintain high data quality and consistency across data pipelines.
- Strong analytical and problem-solving skills, with a focus on optimizing data flows and addressing impacts in the data pipeline.

Qualifications
- 8+ years of IT experience with at least 5+ years in data engineering and cloud-based data platforms.
- Strong experience with Cloudera (or any data lake), Confluent/Apache Kafka, and Azure Data Services (ADF, Databricks, Cosmos DB).
- Deep knowledge of NoSQL databases (Cosmos DB, MongoDB) and data modeling for performance and scalability.
- Proven expertise in designing and implementing batch and streaming data pipelines using Databricks, Spark, or Kafka.
- Experience in creating scalable, reliable, and high-performance data solutions with robust data governance policies.
- Strong collaboration skills to work with stakeholders, mentor junior Data Engineers, and translate business needs into actionable solutions.
- Bachelor's or Master's degree in Computer Science, IT, or a related field.
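As an illustration of the stream side of this role, here is a minimal Spark Structured Streaming sketch that reads a Kafka topic and lands parsed records in a bronze layer. The broker address, topic, schema, and ADLS paths are placeholders, and the spark-sql-kafka connector is assumed to be on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka_stream_sketch").getOrCreate()

# Kafka broker and topic are placeholders.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "payments")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast the value and parse the JSON payload.
parsed = (
    raw.selectExpr("CAST(value AS STRING) AS json_value")
    .select(F.from_json("json_value", "payment_id STRING, amount DOUBLE, ts TIMESTAMP").alias("p"))
    .select("p.*")
)

(
    parsed.writeStream
    .format("parquet")
    .option("path", "abfss://lake@examplestorage.dfs.core.windows.net/bronze/payments/")
    .option("checkpointLocation", "abfss://lake@examplestorage.dfs.core.windows.net/_chk/payments/")
    .start()
    .awaitTermination()
)
```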
Posted 2 weeks ago
8.0 - 13.0 years
8 - 12 Lacs
Pune
Work from Office
- Must have 5+ years of experience in a data engineer role.
- Strong background in relational databases (Microsoft SQL Server) and strong ETL (Microsoft SSIS) experience.
- Strong hands-on T-SQL programming skills.
- Ability to develop reports using Microsoft Reporting Services (SSRS).
- Familiarity with C# is preferred.
- Strong analytical and logical reasoning skills.
- Should be able to build processes that support data transformation, workload management, data structures, dependency, and metadata.
- Should be able to develop data models to answer questions for the business users.
- Should be good at performing root cause analysis on internal/external data and processes to answer specific business data questions.
- Excellent communication skills to work with business users independently.
Posted 2 weeks ago
8.0 - 13.0 years
5 - 10 Lacs
Hyderabad
Work from Office
Requirements:
- 6+ years of experience with Java Spark.
- Strong understanding of distributed computing, big data principles, and batch/stream processing.
- Proficiency in working with AWS services such as S3, EMR, Glue, Lambda, and Athena.
- Experience with Data Lake architectures and handling large volumes of structured and unstructured data.
- Familiarity with various data formats.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities.

Responsibilities:
- Design, develop, and optimize large-scale data processing pipelines using Java Spark.
- Build scalable solutions to manage data ingestion, transformation, and storage in AWS-based Data Lake environments.
- Collaborate with data architects and analysts to implement data models and workflows aligned with business requirements.
- Ensure performance tuning, fault tolerance, and reliability of distributed data processing systems.
Posted 2 weeks ago
8.0 - 13.0 years
5 - 9 Lacs
Hyderabad
Work from Office
1. Data Engineer with Azure Data Services.
2. Data modelling: NoSQL and SQL.
3. Good understanding of Spark and Spark Streaming.
4. Hands-on with Python / Pandas / Data Factory / Cosmos DB / Databricks / Event Hubs / Stream Analytics.
5. Knowledge of medallion architecture, data vaults, data marts, etc.
6. Preferably certified in an Azure Data associate-level exam.
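To illustrate the medallion architecture mentioned in this posting, here is a minimal bronze-to-silver promotion step in PySpark on a Databricks-style Delta setup. The mount paths, table name, and columns are assumptions for the sketch only.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion_sketch").getOrCreate()

# Bronze layer: raw events as ingested (path stands in for an ADLS/Databricks mount).
bronze = spark.read.format("delta").load("/mnt/lake/bronze/orders")

# Silver layer: deduplicate, standardise types, and keep only well-formed records.
silver = (
    bronze
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("order_id").isNotNull())
)

silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/orders")
```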
Posted 2 weeks ago
8.0 - 13.0 years
5 - 10 Lacs
Mumbai
Work from Office
Senior Developer with 8 to 10 years of experience in Python and PySpark, along with hands-on experience with AWS data components such as AWS Glue and Athena. Good knowledge of data warehouse tools is needed to understand the existing system. The candidate should also have experience with data lakes, Teradata, and Snowflake, and should be good at Terraform.

- 8-10 years of experience in designing and developing Python and PySpark applications.
- Creating or maintaining data lake solutions using Snowflake, Teradata, and other data warehouse tools.
- Good knowledge of and hands-on experience with AWS Glue, Athena, etc.
- Sound knowledge of all data lake concepts and able to work on data migration projects.
- Providing ongoing support and maintenance for applications, including troubleshooting and resolving issues.
- Expertise in practices like Agile, peer reviews, and CI/CD pipelines.
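As a sketch of the Athena side of the Glue/Athena skills listed here, the following runs a query against a cataloged table using boto3 and prints the result rows. The region, database, table, and results bucket are placeholders.

```python
import time

import boto3

athena = boto3.client("athena", region_name="ap-south-1")  # region is an assumption

# Database, table, and output bucket are placeholders.
execution = athena.start_query_execution(
    QueryString="SELECT region, COUNT(*) AS orders FROM sales_db.orders GROUP BY region",
    QueryExecutionContext={"Database": "sales_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes (a simple loop; production code would add timeouts/backoff).
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```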
Posted 2 weeks ago
8.0 - 13.0 years
5 - 10 Lacs
Hyderabad
Work from Office
Senior Developer with 8 to 10 years of experience in Python and PySpark, along with hands-on experience with AWS data components such as AWS Glue and Athena. Good knowledge of data warehouse tools is needed to understand the existing system. The candidate should also have experience with data lakes, Teradata, and Snowflake, and should be good at Terraform.

- 8-10 years of experience in designing and developing Python and PySpark applications.
- Creating or maintaining data lake solutions using Snowflake, Teradata, and other data warehouse tools.
- Good knowledge of and hands-on experience with AWS Glue, Athena, etc.
- Sound knowledge of all data lake concepts and able to work on data migration projects.
- Providing ongoing support and maintenance for applications, including troubleshooting and resolving issues.
- Expertise in practices like Agile, peer reviews, and CI/CD pipelines.
Posted 2 weeks ago
8.0 - 13.0 years
5 - 9 Lacs
Pune
Work from Office
Responsibilities / Qualifications:
- Candidate must have 5-6 years of IT working experience; at least 3 years of experience in an AWS Cloud environment is preferred.
- Ability to understand the existing system architecture and work towards the target architecture.
- Experience with data profiling activities, discovering data quality challenges, and documenting them.
- Experience with development and implementation of large-scale data lake and data analytics platforms on the AWS Cloud platform.
- Develop and unit test data pipeline architecture for data ingestion processes using AWS native services.
- Experience with development on AWS Cloud using AWS data stores such as Redshift, RDS, S3, Glue Data Catalog, Lake Formation, Apache Airflow, Lambda, etc.
- Experience with development of a data governance framework, including the management of data, operating model, data policies, and standards.
- Experience with orchestration of workflows in an enterprise environment.
- Working experience with Agile methodology.
- Experience working with source code management tools such as AWS CodeCommit or GitHub.
- Experience working with Jenkins or any CI/CD pipelines using AWS services.
- Experience working in an onshore/offshore model and collaborating on deliverables.
- Good communication skills to interact with the onshore team.
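For the workflow-orchestration requirement above, here is a minimal Airflow DAG sketch showing the ingest-then-load pattern on AWS. Both task bodies are stubs, and the DAG id, schedule, bucket, and warehouse are assumptions rather than anything specified in the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_to_s3(**_):
    # Placeholder: in a real pipeline this would pull from the source system into S3.
    print("ingesting raw files to s3://example-raw-bucket/ ...")


def load_to_redshift(**_):
    # Placeholder: in a real pipeline this would COPY curated data into Redshift.
    print("loading curated data into Redshift ...")


with DAG(
    dag_id="example_lake_ingestion",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_to_s3", python_callable=ingest_to_s3)
    load = PythonOperator(task_id="load_to_redshift", python_callable=load_to_redshift)

    ingest >> load
```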
Posted 2 weeks ago
8.0 - 13.0 years
5 - 10 Lacs
Bengaluru
Work from Office
Requirements:
- 6+ years of experience with Java Spark.
- Strong understanding of distributed computing, big data principles, and batch/stream processing.
- Proficiency in working with AWS services such as S3, EMR, Glue, Lambda, and Athena.
- Experience with Data Lake architectures and handling large volumes of structured and unstructured data.
- Familiarity with various data formats.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities.

Responsibilities:
- Design, develop, and optimize large-scale data processing pipelines using Java Spark.
- Build scalable solutions to manage data ingestion, transformation, and storage in AWS-based Data Lake environments.
- Collaborate with data architects and analysts to implement data models and workflows aligned with business requirements.
- Ensure performance tuning, fault tolerance, and reliability of distributed data processing systems.
Posted 2 weeks ago
8.0 - 13.0 years
5 - 10 Lacs
Hyderabad
Work from Office
Senior Developer with 8 to 10 years of experience in PySpark and Python along with ETL tools (Talend, Ab Initio, Informatica, or similar). Should also have good exposure to ETL tools in order to understand existing flows, rewrite them in Python and PySpark, and execute the test plans.

- 8-10 years of experience in designing and developing PySpark applications and ETL jobs using ETL tools.
- 5+ years of sound knowledge of PySpark to implement ETL logic.
- Strong understanding of frontend technologies such as HTML, CSS, React, and JavaScript.
- Proficiency in data modeling and design, including PL/SQL development.
- Creating test plans to understand the current ETL flow and rewriting it in PySpark.
- Providing ongoing support and maintenance for ETL applications, including troubleshooting and resolving issues.
- Expertise in practices like Agile, peer reviews, and continuous integration.
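As a sketch of the ETL-tool-to-PySpark rewrites this role centres on, the following re-expresses a typical mapping (lookup join plus a derived column) in PySpark. The input paths, join key, and derived column are illustrative, not taken from any specific Talend/Informatica job.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl_rewrite_sketch").getOrCreate()

# Hypothetical inputs standing in for the ETL tool's source and lookup components.
orders = spark.read.parquet("/data/raw/orders/")
fx_rates = spark.read.parquet("/data/reference/fx_rates/")

# Lookup/join plus expression transform, the common shape of a graphical ETL mapping.
enriched = (
    orders
    .join(fx_rates, on="currency", how="left")
    .withColumn("amount_usd", F.round(F.col("amount") * F.col("usd_rate"), 2))
    .withColumn("load_dt", F.current_date())
)

enriched.write.mode("overwrite").partitionBy("load_dt").parquet("/data/curated/orders_usd/")
```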
Posted 2 weeks ago
8.0 - 13.0 years
8 - 12 Lacs
Hyderabad
Work from Office
Requirements:
- 10+ years of experience with Java Spark.
- Strong understanding of distributed computing, big data principles, and batch/stream processing.
- Proficiency in working with AWS services such as S3, EMR, Glue, Lambda, and Athena.
- Experience with Data Lake architectures and handling large volumes of structured and unstructured data.
- Familiarity with various data formats.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities.

Responsibilities:
- Design, develop, and optimize large-scale data processing pipelines using Java Spark.
- Build scalable solutions to manage data ingestion, transformation, and storage in AWS-based Data Lake environments.
- Collaborate with data architects and analysts to implement data models and workflows aligned with business requirements.
Posted 2 weeks ago
6.0 - 11.0 years
8 - 12 Lacs
Gurugram
Work from Office
- 6-8 years of experience, with at least 4 years of experience in test automation. Prior automation experience is a must.
- Familiarity with Python for test automation and scripting.
- Minimum 6 years of experience in QA/testing, with a focus on payments, ETL, and data engineering projects.
- Excellent communication skills to work effectively with cross-functional teams.

Good to Have:
- 6-8 years of experience in Payments/SWIFT/ISO/ETL testing.
- Strong SQL skills for querying, comparing, and validating large datasets.
- Experience in testing ETL pipelines and data transformations; prior experience testing ETL migrations from legacy systems like SAS DI to modern platforms is a plus.
- Hands-on experience with cloud platforms, particularly AWS services like S3, EMR, and PostgreSQL on AWS.
- Knowledge of SAS DI is highly desirable for understanding legacy pipelines.
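To illustrate the kind of ETL-migration validation this role involves, here is a minimal pytest-style reconciliation check comparing row counts between a legacy table and its migrated target. The use of SQLAlchemy, the connection strings, and the table names are all assumptions; the posting itself only calls for Python and SQL.

```python
import pytest
import sqlalchemy

# Connection strings are placeholders; in practice these would come from config or a vault.
LEGACY_URL = "postgresql+psycopg2://user:pass@legacy-host:5432/warehouse"
TARGET_URL = "postgresql+psycopg2://user:pass@aws-postgres-host:5432/warehouse"


def row_count(url: str, table: str) -> int:
    """Return the row count of a table on the given database."""
    engine = sqlalchemy.create_engine(url)
    with engine.connect() as conn:
        return conn.execute(sqlalchemy.text(f"SELECT COUNT(*) FROM {table}")).scalar_one()


@pytest.mark.parametrize("table", ["payments", "settlements"])
def test_migrated_counts_match(table):
    # A migrated pipeline should land the same number of rows as the legacy feed.
    assert row_count(LEGACY_URL, table) == row_count(TARGET_URL, table)
```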
Posted 2 weeks ago
8.0 - 13.0 years
37 - 40 Lacs
Pune
Work from Office
Job Title: Senior Data Engineer
Corporate Title: AVP
Location: Pune, India

Role Description
Technology Management is responsible for improving the technological aspects of operations to reduce infrastructure costs, improve functional performance, and help deliver Divisional business goals. To achieve this, the organization needs to be engineering focused. We are looking for technologists who demonstrate a passion to build the right thing in the right way. We are looking for an experienced SQL developer to help build our data integration layer utilizing the latest tools and technologies. In this critical role you will become part of a motivated and talented team operating within a creative environment. You should have a passion for writing and designing complex data models, stored procedures, and tuning queries that push the boundaries of what is possible and exists within the bank today.

What we'll offer you:
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities
- As part of a global team, forge strong relationships with geographically diverse teams, colleagues, and businesses to formulate and execute technology strategy.
- Production of code-based assets within the context of agile delivery (helping define and meet epics, stories, and acceptance criteria).
- Responsible for the design, development, and QA of those assets and outputs.
- Perform review of component integration testing, unit testing, and code review.
- Write high-performance, highly resilient queries in Oracle PL/SQL and Microsoft SQL Server T-SQL.
- Experience working with agile/continuous integration/test technologies such as git/stash, Jenkins, and Artifactory.
- Work in a fast-paced, high-energy team environment.
- Develop scalable applications using ETL technology like StreamSets, Pentaho, Informatica, etc.
- Design and develop dashboards for business and reporting using the preferred BI tools, e.g. Power BI or QlikView.
- Thorough understanding of relational databases and knowledge of different data models.
- Well versed in SQL and able to understand and debug database objects like stored procedures, functions, etc.
- Manage a data modelling tool like PowerDesigner, MySQL Workbench, etc.
- Agile (Scrum) based delivery practices, test-driven development, test automation, continuous delivery.
- Passion for learning new technologies.

Your skills and experience
- Education/Certification: Bachelor's degree from an accredited college or university with a concentration in Science, Engineering, or an IT-related discipline (or equivalent).
- Fluent English (written/verbal).
- Excellent communication and influencing skills.
- Ability to work in a fast-paced environment.
- Passion for sharing knowledge and best practice.
- Ability to work in virtual teams and in matrixed organizations.

How we'll support you

About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
Posted 2 weeks ago
4.0 - 9.0 years
3 - 7 Lacs
Hyderabad
Work from Office
- Minimum 6 years of hands-on experience in data engineering or big data development roles.
- Strong programming skills in Python and experience with Apache Spark (PySpark preferred).
- Proficient in writing and optimizing complex SQL queries.
- Hands-on experience with Apache Airflow for orchestration of data workflows.
- Deep understanding and practical experience with AWS services:
  - Data storage and processing: S3, Glue, EMR, Athena
  - Compute and execution: Lambda, Step Functions
  - Databases: RDS, DynamoDB
  - Monitoring: CloudWatch
- Experience with distributed data processing, parallel computing, and performance tuning.
- Strong analytical and problem-solving skills.
- Familiarity with CI/CD pipelines and DevOps practices is a plus.
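As a sketch of the Lambda-based execution layer mentioned above, the following handler reacts to an S3 object-created event and starts a Glue job for the new file. The Glue job name and argument key are placeholders.

```python
import boto3

glue = boto3.client("glue")


def lambda_handler(event, context):
    """Triggered by an S3 put event; starts a Glue job for the new object."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    run = glue.start_job_run(
        JobName="curate_orders",                          # placeholder Glue job name
        Arguments={"--source_path": f"s3://{bucket}/{key}"},
    )
    return {"glue_job_run_id": run["JobRunId"]}
```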
Posted 2 weeks ago
8.0 - 13.0 years
4 - 8 Lacs
Hyderabad
Work from Office
This role will be instrumental in building and maintaining robust, scalable, and reliable data pipelines using Confluent Kafka, ksqlDB, Kafka Connect, and Apache Flink. The ideal candidate will have a strong understanding of data streaming concepts, experience with real-time data processing, and a passion for building high-performance data solutions. This role requires excellent analytical skills, attention to detail, and the ability to work collaboratively in a fast-paced environment.

Essential Responsibilities
- Design and develop data pipelines for real-time and batch data ingestion and processing using Confluent Kafka, ksqlDB, Kafka Connect, and Apache Flink.
- Build and configure Kafka Connectors to ingest data from various sources (databases, APIs, message queues, etc.) into Kafka.
- Develop Flink applications for complex event processing, stream enrichment, and real-time analytics.
- Develop and optimize ksqlDB queries for real-time data transformations, aggregations, and filtering.
- Implement data quality checks and monitoring to ensure data accuracy and reliability throughout the pipeline.
- Monitor and troubleshoot data pipeline performance, identify bottlenecks, and implement optimizations.
- Automate data pipeline deployment, monitoring, and maintenance tasks.
- Stay up to date with the latest advancements in data streaming technologies and best practices.
- Contribute to the development of data engineering standards and best practices within the organization.
- Participate in code reviews and contribute to a collaborative and supportive team environment.
- Work closely with other architects and tech leads in India and the US, and create POCs and MVPs.
- Provide regular updates on tasks, status, and risks to the project manager.

The experience we are looking to add to our team

Required
- Bachelor's degree or higher from a reputed university.
- 8 to 10 years of total experience, with the majority of that experience related to ETL/ELT, big data, Kafka, etc.
- Proficiency in developing Flink applications for stream processing and real-time analytics.
- Strong understanding of data streaming concepts and architectures.
- Extensive experience with Confluent Kafka, including Kafka Brokers, Producers, Consumers, and Schema Registry.
- Hands-on experience with ksqlDB for real-time data transformations and stream processing.
- Experience with Kafka Connect and building custom connectors.
- Extensive experience in implementing large-scale data ingestion and curation solutions.
- Good hands-on experience with a big data technology stack on any cloud platform.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work independently and as part of a team.

Good to have
- Experience in Google Cloud.
- Healthcare industry experience.
- Experience in Agile.
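For a flavour of the consume-and-validate loop that sits alongside the ksqlDB and Flink components named above, here is a minimal sketch using the confluent-kafka Python client (not ksqlDB or Flink themselves). The broker address, consumer group, topic, and required field are all assumptions.

```python
import json

from confluent_kafka import Consumer

# Broker, group id, and topic are placeholders for the Confluent cluster described above.
consumer = Consumer({
    "bootstrap.servers": "broker1:9092",
    "group.id": "pipeline-monitor",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["patient-events"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        event = json.loads(msg.value())
        # Minimal data-quality gate before any downstream processing.
        if "event_id" not in event:
            print(f"dropping malformed event at offset {msg.offset()}")
            continue
        print(f"received event {event['event_id']}")
finally:
    consumer.close()
```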
Posted 2 weeks ago
8.0 - 13.0 years
3 - 7 Lacs
Hyderabad
Work from Office
We are seeking an experienced SQL Developer with a strong focus on SQL Server Integration Services (SSIS) to join our team. In this role, you will leverage your expertise in SSIS and SQL programming to design, develop, and optimize data integration solutions. You will collaborate closely with cross-functional teams, including data engineers, data analysts, and business stakeholders, to ensure data workflows are efficient and meet business requirements.
Posted 2 weeks ago
8.0 - 13.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Key Responsibilities
- Set up and maintain monitoring dashboards for ETL jobs using Datadog, including metrics, logs, and alerts.
- Monitor daily ETL workflows and proactively detect and resolve data pipeline failures or performance issues.
- Create Datadog monitors for job status (success/failure), job duration, resource utilization, and error trends.
- Work closely with Data Engineering teams to onboard new pipelines and ensure observability best practices.
- Integrate Datadog with tools.
- Conduct root cause analysis of ETL failures and performance bottlenecks.
- Tune thresholds, baselines, and anomaly detection settings in Datadog to reduce false positives.
- Document incident handling procedures and contribute to improving overall ETL monitoring maturity.
- Participate in on-call rotations or scheduled support windows to manage ETL health.

Required Skills & Qualifications
- 3+ years of experience in ETL/data pipeline monitoring, preferably in a cloud or hybrid environment.
- Proficiency in using Datadog for metrics, logging, alerting, and dashboards.
- Strong understanding of ETL concepts and tools (e.g., Airflow, Informatica, Talend, AWS Glue, or dbt).
- Familiarity with SQL and querying large datasets.
- Experience working with Python, shell scripting, or Bash for automation and log parsing.
- Understanding of cloud platforms (AWS/GCP/Azure) and services like S3, Redshift, BigQuery, etc.
- Knowledge of CI/CD and DevOps principles related to data infrastructure monitoring.

Preferred Qualifications
- Experience with distributed tracing and APM in Datadog.
- Prior experience monitoring Spark, Kafka, or streaming pipelines.
- Familiarity with ticketing tools (e.g., Jira, ServiceNow) and incident management workflows.
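To illustrate the job-status and duration metrics described above, here is a small sketch that wraps an ETL job and emits success/failure counters and a duration gauge via the datadog Python package's DogStatsD client. A local Datadog agent on the default port, and the metric names and tags, are assumptions.

```python
import time

from datadog import initialize, statsd

# Assumes a local Datadog agent with DogStatsD listening on the default port.
initialize(statsd_host="localhost", statsd_port=8125)


def run_job_with_metrics(job_name: str, job_fn) -> None:
    """Run an ETL callable and emit success/failure counts plus duration to Datadog."""
    start = time.time()
    try:
        job_fn()
        statsd.increment("etl.job.success", tags=[f"job:{job_name}"])
    except Exception:
        statsd.increment("etl.job.failure", tags=[f"job:{job_name}"])
        raise
    finally:
        statsd.gauge("etl.job.duration_seconds", time.time() - start, tags=[f"job:{job_name}"])


# Example usage with a stand-in job body.
run_job_with_metrics("daily_orders_load", lambda: time.sleep(1))
```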
Posted 2 weeks ago
6.0 - 11.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Sr Developer with special emphasis and experience of 8 to 10 years on Python and Pyspark along with hands on experience on AWS Data components like AWS Glue, Athena etc.,. Also have good knowledge on Data ware house tools to understand the existing system. Candidate should also have experience on Datalake, Teradata and Snowflake. Should be good at terraform. 8-10 years of experience in designing and developing Python and Pyspark applications Creating or maintaining data lake solutions using Snowflake,taradata and other dataware house tools. Should have good knowledge and hands on experience on AWS Glue , Athena etc., Sound Knowledge on all Data lake concepts and able to work on data migration projects. Providing ongoing support and maintenance for applications, including troubleshooting and resolving issues. Expertise in practices like Agile, Peer reviews and CICD Pipelines.
Posted 2 weeks ago
8.0 - 13.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Responsibilities
- Design and Develop Scalable Data Pipelines: Build and maintain robust data pipelines using Python to process, transform, and integrate large-scale data from diverse sources.
- Orchestration and Automation: Implement and manage workflows using orchestration tools such as Apache Airflow to ensure reliable and efficient data operations.
- Data Warehouse Management: Work extensively with Snowflake to design and optimize data models, schemas, and queries for analytics and reporting.
- Queueing Systems: Leverage message queues like Kafka, SQS, or similar tools to enable real-time or batch data processing in distributed environments.
- Collaboration: Partner with Data Science, Product, and Engineering teams to understand data requirements and deliver solutions that align with business objectives.
- Performance Optimization: Optimize the performance of data pipelines and queries to handle large scales of data efficiently.
- Data Governance and Security: Ensure compliance with data governance and security standards to maintain data integrity and privacy.
- Documentation: Create and maintain clear, detailed documentation for data solutions, pipelines, and workflows.

Qualifications

Required Skills:
- 5+ years of experience in data engineering roles with a focus on building scalable data solutions.
- Proficiency in Python for ETL, data manipulation, and scripting.
- Hands-on experience with Snowflake or equivalent cloud-based data warehouses.
- Strong knowledge of orchestration tools such as Apache Airflow or similar.
- Expertise in implementing and managing messaging queues like Kafka, AWS SQS, or similar.
- Demonstrated ability to build and optimize data pipelines at scale, processing terabytes of data.
- Experience in data modeling, data warehousing, and database design.
- Proficiency in working with cloud platforms like AWS, Azure, or GCP.
- Strong understanding of CI/CD pipelines for data engineering workflows.
- Experience working in an Agile development environment, collaborating with cross-functional teams.

Preferred Skills:
- Familiarity with other programming languages like Scala or Java for data engineering tasks.
- Knowledge of containerization and orchestration technologies (Docker, Kubernetes).
- Experience with stream processing frameworks like Apache Flink.
- Experience with Apache Iceberg for data lake optimization and management.
- Exposure to machine learning workflows and integration with data pipelines.

Soft Skills:
- Strong problem-solving skills with a passion for solving complex data challenges.
- Excellent communication and collaboration skills to work with cross-functional teams.
- Ability to thrive in a fast-paced, innovative environment.
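As a sketch of the queueing-systems responsibility above, the following polls an SQS queue with boto3 and collects a batch of parsed messages for a downstream warehouse load. The region and queue URL are placeholders.

```python
import json

import boto3

sqs = boto3.client("sqs", region_name="us-east-1")  # region is an assumption
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-events"  # placeholder


def drain_batch(max_messages: int = 10) -> list:
    """Pull up to one SQS batch, parse the JSON bodies, and delete the consumed messages."""
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=max_messages,
        WaitTimeSeconds=10,   # long polling
    )
    records = []
    for msg in resp.get("Messages", []):
        records.append(json.loads(msg["Body"]))
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
    return records


batch = drain_batch()
print(f"pulled {len(batch)} records for the warehouse load")
```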
Posted 2 weeks ago
4.0 - 9.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Skill: Data Engineer
Role: T3, T2

Key responsibility (Data Engineer): must have 5+ years of experience in the skills mentioned below.
- Must have: big data concepts, Python (core Python, able to write code), SQL, shell scripting, AWS S3.
- Good to have: event-driven/AWS SQS, microservices, API development, Kafka, Kubernetes, Argo, Amazon Redshift, Amazon Aurora.
Posted 2 weeks ago
4.0 - 9.0 years
6 - 11 Lacs
Hyderabad
Work from Office
Responsibilities:
- Design, develop, and maintain ETL processes using Talend.
- Manage and optimize data pipelines on Amazon Redshift.
- Implement data transformation workflows using dbt (Data Build Tool).
- Write efficient, reusable, and reliable code in PySpark.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver effective solutions.
- Ensure data quality and integrity through rigorous testing and validation.
- Stay updated with the latest industry trends and technologies in data engineering.

Requirements:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Data Engineer or in a similar role.
- High proficiency in Talend.
- Strong experience with Amazon Redshift.
- Expertise in dbt and PySpark.
- Experience with data modeling, ETL processes, and data warehousing.
- Familiarity with cloud platforms and services.
- Excellent problem-solving skills and attention to detail.
- Strong communication and teamwork abilities.

Preferred Qualifications:
- Experience with other data engineering tools and frameworks.
- Knowledge of machine learning frameworks and libraries.
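For the Redshift pipeline-management side of this role, here is a minimal sketch issuing a Redshift COPY of curated Parquet files from S3 via psycopg2, the load step that typically sits downstream of Talend/dbt/PySpark transformations. The cluster endpoint, credentials, IAM role, bucket, and table names are all placeholders.

```python
import psycopg2

# Connection details and IAM role are placeholders.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="***",
)

copy_sql = """
    COPY analytics.stg_orders
    FROM 's3://example-curated-bucket/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy'
    FORMAT AS PARQUET;
"""

with conn, conn.cursor() as cur:
    cur.execute(copy_sql)   # bulk-load the curated Parquet files into the staging table

conn.close()
print("COPY completed")
```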
Posted 2 weeks ago
6091 Jobs | Paris,France