Jobs
Interviews

479 Data Lake Jobs - Page 8

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply on the original job portal directly.

7.0 - 12.0 years

14 - 18 Lacs

Noida

Work from Office

Who We Are
Build a brighter future while learning and growing with a Siemens company at the intersection of technology, community and sustainability. Our global team of innovators is always looking to create meaningful solutions to some of the toughest challenges facing our world. Find out how far your passion can take you.

What you need
* BS in an Engineering or Science discipline, or equivalent experience
* 7+ years of software/data engineering experience using Java, Scala, and/or Python, with at least 5 years' experience in a data-focused role
* Experience in data integration (ETL/ELT) development using multiple languages (e.g., Java, Scala, Python, PySpark, SparkSQL)
* Experience building and maintaining data pipelines supporting a variety of integration patterns (batch, replication/CDC, event streaming) and data lake/warehouse in production environments (see the PySpark sketch below for a flavor of this work)
* Experience with AWS-based data services technologies (e.g., Kinesis, Glue, RDS, Athena, etc.) and Snowflake CDW
* Experience working on larger initiatives building and rationalizing large-scale data environments with a wide variety of data pipelines, possibly with internal and external partner integrations, is a plus
* Willingness to experiment and learn new approaches and technology applications
* Knowledge and experience with various relational databases and demonstrable proficiency in SQL, supporting analytics uses and users
* Knowledge of software engineering and agile development best practices
* Excellent written and verbal communication skills

The Brightly culture
We're guided by a vision of community that serves the ambitions and wellbeing of all people, and our professional communities are no exception. We model that ideal every day by being supportive, collaborative partners to one another, conscientiously making space for our colleagues to grow and thrive. Our passionate team is driven to create a future where smarter infrastructure protects the environments that shape and connect us all. That brighter future starts with us.
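As an illustration of the batch ETL/ELT work this role describes, here is a minimal PySpark sketch: read raw CSV from a lake landing zone, transform with the DataFrame API and SparkSQL, and write partitioned Parquet. All paths and column names are hypothetical, not taken from the listing.

```python
# Minimal PySpark batch ETL sketch (paths and columns are hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: raw CSV landed in the data lake.
raw = spark.read.option("header", True).csv("s3://raw-bucket/orders/")

# Transform: typing, cleansing, and derived columns.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)
orders.createOrReplaceTempView("orders")

# SparkSQL aggregation for the curated zone.
daily = spark.sql("""
    SELECT order_date, customer_id, SUM(amount) AS total_amount
    FROM orders
    GROUP BY order_date, customer_id
""")

# Load: partitioned Parquet in the curated zone.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://curated-bucket/daily_orders/"
)
```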

Posted 3 weeks ago

Apply

8.0 - 13.0 years

8 - 12 Lacs

Bengaluru

Work from Office

Hello Talented Techie! We provide support in Project Services and Transformation, Digital Solutions and Delivery Management. We offer joint operations and digitalization services for Global Business Services and work closely alongside the entire Shared Services organization. We make optimal use of the possibilities of new technologies such as Business Process Management (BPM) and Robotics as enablers for efficient and effective processes.

We are looking for a Sr. AWS Cloud Architect.

Architect and Design: Develop scalable and efficient data solutions using AWS services such as AWS Glue, Amazon Redshift, S3, Kinesis (Apache Kafka), DynamoDB, Lambda, AWS Glue (Streaming ETL) and EMR.
Integration: Integrate real-time data from various Siemens organizations into our data lake, ensuring seamless data flow and processing.
Data Lake Management: Design and manage a large-scale data lake using AWS services like S3, Glue, and Lake Formation.
Data Transformation: Apply various data transformations to prepare data for analysis and reporting, ensuring data quality and consistency.
Snowflake Integration: Implement and manage data pipelines to load data into Snowflake, utilizing Iceberg tables for optimal performance and flexibility (see the sketch below).
Performance Optimization: Optimize data processing pipelines for performance, scalability, and cost-efficiency.
Security and Compliance: Ensure that all solutions adhere to security best practices and compliance requirements.
Collaboration: Work closely with cross-functional teams, including data engineers, data scientists, and application developers, to deliver end-to-end solutions.
Monitoring and Troubleshooting: Implement monitoring solutions to ensure the reliability and performance of data pipelines; troubleshoot and resolve any issues that arise.

You'd describe yourself as:
Experience: 8+ years of experience in data engineering or cloud solutioning, with a focus on AWS services.
Technical Skills: Proficiency in AWS services such as AWS API, AWS Glue, Amazon Redshift, S3, Apache Kafka and Lake Formation. Experience with real-time data processing and streaming architectures.
Big Data Querying Tools: Strong knowledge of big data querying tools (e.g., Hive, PySpark).
Programming: Strong programming skills in languages such as Python, Java, or Scala for building and maintaining scalable systems.
Problem-Solving: Excellent problem-solving skills and the ability to troubleshoot complex issues.
Communication: Strong communication skills, with the ability to work effectively with both technical and non-technical stakeholders.
Certifications: AWS certifications are a plus.

Create a better #TomorrowWithUs! This role, based in Bangalore, is an individual contributor position. You may be required to visit other locations within India and internationally. In return, you'll have the opportunity to work with teams shaping the future. At Siemens, we are a collection of over 312,000 minds building the future, one day at a time, worldwide. Find out more about Siemens careers at
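To make the Iceberg-table integration concrete, here is a minimal sketch using Spark's DataFrameWriterV2 API, assuming a Spark 3+ session already configured with an Iceberg catalog. The catalog, database, table, and path names are hypothetical.

```python
# Sketch: load curated data into an Iceberg table (assumes the Spark session
# is configured with an Iceberg catalog named "lake"; all names hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("iceberg-load").getOrCreate()

events = (
    spark.read.parquet("s3://staging-bucket/events/")
         .withColumn("ingested_at", F.current_timestamp())
)

# DataFrameWriterV2: createOrReplace() builds the table from scratch;
# use .append() instead for incremental loads into an existing table.
events.writeTo("lake.analytics.events").createOrReplace()
```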

Posted 3 weeks ago

Apply

15.0 - 20.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: AWS Glue
Good-to-have skills: Microsoft SQL Server, Python (Programming Language), Data Engineering
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full-time education

Developing a customer insights platform that will provide an ID graph and digital customer view to help drive improvements in marketing decisions.

Responsibilities:
Design, build, and maintain data pipelines using AWS services (Glue, Neptune, S3); a short orchestration sketch follows this listing.
Participate in code reviews, testing, and optimization of data pipelines.
Collaborate with stakeholders to understand data requirements and translate them into technical solutions.

Requirements:
Proven experience as a Senior Data Engineer / Data Architect, or similar role.
Knowledge of data governance and security practices.
Extensive experience with data lake technologies (NiFi, Spark, Hive Metastore, Object Storage, Delta Lake Framework).
Extensive experience with AWS cloud services, including AWS Glue, Neptune, S3 and Lambda.
Experience with AWS Neptune or other graph database technologies.
Experience in data modelling and design.
Experience with event-driven architecture.
Experience with Python.
Experience with SQL.
Strong problem-solving skills and attention to detail.
Excellent communication and teamwork skills.

Nice to have:
Experience with observability solutions (Splunk, New Relic).
Experience with Infrastructure as Code (Terraform, CloudFormation).
Experience with CI/CD (Jenkins).
Experience with Kubernetes.
Familiarity with data visualization tools.

Support Engineer: Similar skills as the above, but with more of a support focus; able to troubleshoot, patch and upgrade, and deliver minor enhancements and fixes to the infrastructure and pipelines. Experience with observability, CloudWatch, New Relic and monitoring.

Qualification: 15 years full-time education
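A small sketch of the kind of pipeline orchestration this role involves, using boto3 to start an AWS Glue job and poll its status. The job name and arguments are hypothetical.

```python
# Sketch: trigger a Glue job and wait for completion (job name is hypothetical).
import time
import boto3

glue = boto3.client("glue", region_name="us-east-1")

run = glue.start_job_run(
    JobName="customer-insights-ingest",  # hypothetical job
    Arguments={"--target_bucket": "s3://insights-lake/raw/"},
)
run_id = run["JobRunId"]

# Poll until the run reaches a terminal state.
while True:
    state = glue.get_job_run(
        JobName="customer-insights-ingest", RunId=run_id
    )["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)

print(f"Glue run {run_id} finished with state {state}")
```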

Posted 3 weeks ago

Apply

7.0 - 12.0 years

4 - 8 Lacs

Bengaluru

Work from Office

About the Role
We are seeking a highly skilled Data Engineer with deep expertise in PySpark and the Cloudera Data Platform (CDP) to join our data engineering team. As a Data Engineer, you will be responsible for designing, developing, and maintaining scalable data pipelines that ensure high data quality and availability across the organization. This role requires a strong background in big data ecosystems, cloud-native tools, and advanced data processing techniques. The ideal candidate has hands-on experience with data ingestion, transformation, and optimization on the Cloudera Data Platform, along with a proven track record of implementing data engineering best practices. You will work closely with other data engineers to build solutions that drive impactful business insights.

Responsibilities
Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes.
Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline (see the sketch below).
Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.

Education and Experience
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field. 3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.

Technical Skills
PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
Scripting and Automation: Strong scripting skills in Linux.
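A minimal sketch of the data-quality gates the responsibilities describe: count null keys and duplicates in PySpark and fail the pipeline before bad data lands downstream. Paths, column names, and thresholds are hypothetical.

```python
# Sketch: simple data-quality gates for a PySpark pipeline
# (paths, columns, and thresholds are hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("/data/landing/transactions/")

total = df.count()
null_ids = df.filter(F.col("txn_id").isNull()).count()
dupes = total - df.dropDuplicates(["txn_id"]).count()

# Fail fast if quality thresholds are breached, so downstream
# consumers never see incomplete or duplicated records.
if null_ids > 0 or dupes > 0.01 * total:
    raise ValueError(
        f"DQ failure: {null_ids} null keys, {dupes} duplicates out of {total}"
    )

df.dropDuplicates(["txn_id"]).write.mode("overwrite").parquet(
    "/data/curated/transactions/"
)
```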

Posted 3 weeks ago

Apply

9.0 - 14.0 years

5 - 8 Lacs

Bengaluru

Work from Office

Kafka Data Engineer
We need a Data Engineer to build and manage data pipelines that support batch and streaming data solutions. The role requires expertise in creating seamless data flows across platforms like Data Lake/Lakehouse in Cloudera, Azure Databricks, and Kafka, for both batch and stream data pipelines.

Responsibilities
Develop, test, and maintain data pipelines (batch & stream) using Cloudera, Spark, Kafka and Azure services like ADF, Cosmos DB, Databricks, NoSQL DB/MongoDB etc. (a streaming sketch follows this listing).
Strong programming skills in Spark, Python or Scala, and SQL.
Optimize data pipelines to improve speed, performance, and reliability, ensuring that data is available for data consumers as required.
Create ETL pipelines for downstream consumers by transforming data as per business logic.
Work closely with Data Architects and Data Analysts to align data solutions with business needs and ensure the accuracy and accessibility of data.
Implement data validation checks and error-handling processes to maintain high data quality and consistency across data pipelines.
Strong analytical and problem-solving skills, with a focus on optimizing data flows and addressing impacts in the data pipeline.

Qualifications
8+ years of IT experience with at least 5+ years in data engineering and cloud-based data platforms.
Strong experience with Cloudera/any Data Lake, Confluent/Apache Kafka, and Azure Data Services (ADF, Databricks, Cosmos DB).
Deep knowledge of NoSQL databases (Cosmos DB, MongoDB) and data modeling for performance and scalability.
Proven expertise in designing and implementing batch and streaming data pipelines using Databricks, Spark, or Kafka.
Experience in creating scalable, reliable, and high-performance data solutions with robust data governance policies.
Strong collaboration skills to work with stakeholders, mentor junior Data Engineers, and translate business needs into actionable solutions.
Bachelor's or Master's degree in Computer Science, IT, or a related field.
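A minimal sketch of the Kafka-to-lake streaming pattern this role describes, using Spark Structured Streaming with checkpointing for restart safety. The broker address, topic, schema, and paths are hypothetical, and the spark-sql-kafka connector must be on the classpath.

```python
# Sketch: Kafka -> Spark Structured Streaming -> data lake
# (topic, schema, and paths are hypothetical; requires the
# spark-sql-kafka connector on the Spark classpath).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import (
    StructType, StructField, StringType, DoubleType, TimestampType,
)

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

schema = StructType([
    StructField("device_id", StringType()),
    StructField("reading", DoubleType()),
    StructField("event_ts", TimestampType()),
])

stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "telemetry")  # hypothetical topic
         .option("startingOffsets", "latest")
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

# Checkpointing gives at-least-once delivery and safe restarts.
query = (
    stream.writeStream.format("parquet")
          .option("path", "/lake/bronze/telemetry/")
          .option("checkpointLocation", "/lake/_checkpoints/telemetry/")
          .trigger(processingTime="1 minute")
          .start()
)
query.awaitTermination()
```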

Posted 3 weeks ago

Apply

8.0 - 13.0 years

5 - 10 Lacs

Hyderabad

Work from Office

6+ years of experience with Java Spark. Strong understanding of distributed computing, big data principles, and batch/stream processing. Proficiency in working with AWS services such as S3, EMR, Glue, Lambda, and Athena. Experience with Data Lake architectures and handling large volumes of structured and unstructured data. Familiarity with various data formats. Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. Design, develop, and optimize large-scale data processing pipelines using Java Spark. Build scalable solutions to manage data ingestion, transformation, and storage in AWS-based Data Lake environments. Collaborate with data architects and analysts to implement data models and workflows aligned with business requirements. Ensure performance tuning, fault tolerance, and reliability of distributed data processing systems.

Posted 3 weeks ago

Apply

8.0 - 13.0 years

5 - 10 Lacs

Mumbai

Work from Office

Senior Developer with 8 to 10 years of experience in Python and PySpark, along with hands-on experience with AWS data components like AWS Glue, Athena, etc. Good knowledge of data warehouse tools is needed to understand the existing system. The candidate should also have experience with Data Lake, Teradata and Snowflake, and should be good at Terraform. 8-10 years of experience in designing and developing Python and PySpark applications. Creating or maintaining data lake solutions using Snowflake, Teradata and other data warehouse tools. Should have good knowledge and hands-on experience with AWS Glue, Athena, etc. (an Athena sketch follows this listing). Sound knowledge of all data lake concepts and the ability to work on data migration projects. Providing ongoing support and maintenance for applications, including troubleshooting and resolving issues. Expertise in practices like Agile, peer reviews and CI/CD pipelines.
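For a flavor of the Athena work mentioned here, a minimal boto3 sketch: submit a query over a data-lake table, poll for completion, and fetch results. Database, table, and bucket names are hypothetical.

```python
# Sketch: run an Athena query over data-lake tables with boto3
# (database, table, and bucket names are hypothetical).
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

qid = athena.start_query_execution(
    QueryString=(
        "SELECT customer_id, SUM(amount) AS total "
        "FROM lake.orders GROUP BY customer_id"
    ),
    QueryExecutionContext={"Database": "lake"},
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/"},
)["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(
        QueryExecutionId=qid
    )["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    print(f"Fetched {len(rows) - 1} rows")  # first row is the header
```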

Posted 3 weeks ago

Apply

8.0 - 13.0 years

5 - 10 Lacs

Hyderabad

Work from Office

Senior Developer with 8 to 10 years of experience in Python and PySpark, along with hands-on experience with AWS data components like AWS Glue, Athena, etc. Good knowledge of data warehouse tools is needed to understand the existing system. The candidate should also have experience with Data Lake, Teradata and Snowflake, and should be good at Terraform. 8-10 years of experience in designing and developing Python and PySpark applications. Creating or maintaining data lake solutions using Snowflake, Teradata and other data warehouse tools. Should have good knowledge and hands-on experience with AWS Glue, Athena, etc. Sound knowledge of all data lake concepts and the ability to work on data migration projects. Providing ongoing support and maintenance for applications, including troubleshooting and resolving issues. Expertise in practices like Agile, peer reviews and CI/CD pipelines.

Posted 3 weeks ago

Apply

8.0 - 13.0 years

5 - 9 Lacs

Pune

Work from Office

Responsibilities / Qualifications:
Candidate must have 5-6 years of IT working experience; at least 3 years of experience in an AWS Cloud environment is preferred.
Ability to understand the existing system architecture and work towards the target architecture.
Experience with data profiling activities, discovering data quality challenges and documenting them.
Experience with development and implementation of a large-scale Data Lake and data analytics platform on the AWS Cloud platform. Develop and unit test data pipeline architecture for data ingestion processes using AWS native services.
Experience with development on AWS Cloud using AWS data stores such as Redshift, RDS, S3, Glue Data Catalog, Lake Formation, Apache Airflow, Lambda, etc. (see the Airflow sketch below).
Experience with development of a data governance framework, including the management of data, operating model, data policies and standards.
Experience with orchestration of workflows in an enterprise environment.
Working experience with Agile methodology.
Experience working with source code management tools such as AWS CodeCommit or GitHub.
Experience working with Jenkins or any CI/CD pipelines using AWS services.
Experience working with an on-shore/off-shore model and collaboratively working on deliverables. Good communication skills to interact with the onshore team.
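A minimal sketch of orchestrating an ingestion workflow with Apache Airflow, triggering a Glue job from a PythonOperator via boto3. The DAG id, job name, and schedule are hypothetical.

```python
# Sketch: Airflow DAG orchestrating a Glue ingestion job
# (DAG id, job name, and schedule are hypothetical).
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator


def run_glue_job():
    """Kick off the (hypothetical) daily ingestion Glue job."""
    glue = boto3.client("glue", region_name="us-east-1")
    run_id = glue.start_job_run(JobName="lake-ingest-daily")["JobRunId"]
    print(f"Started Glue run {run_id}")


with DAG(
    dag_id="data_lake_ingestion",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(
        task_id="run_glue_ingest",
        python_callable=run_glue_job,
    )
```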

Posted 3 weeks ago

Apply

12.0 - 17.0 years

13 - 18 Lacs

Hyderabad

Work from Office

1. Data Engineer - Azure Data Services
2. Data Modelling - NoSQL and SQL
3. Good understanding of Spark and Spark Streaming
4. Hands-on with Python Pandas / Data Factory / Cosmos DB / Databricks / Event Hubs / Stream Analytics
5. Knowledge of medallion architecture, data vaults, data marts, etc. (see the sketch below)
6. Preferably Azure Data associate exam certified.
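To illustrate the medallion architecture mentioned above, a minimal bronze-to-silver PySpark sketch, assuming a Databricks/Delta Lake environment; paths and columns are hypothetical.

```python
# Sketch: bronze -> silver step in a medallion architecture
# (assumes a Databricks/Delta Lake environment; paths and
# columns are hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-silver").getOrCreate()

bronze = spark.read.format("delta").load("/mnt/lake/bronze/events")

silver = (
    bronze.dropDuplicates(["event_id"])                     # de-dupe raw feed
          .filter(F.col("event_ts").isNotNull())            # basic quality gate
          .withColumn("event_date", F.to_date("event_ts"))  # conformed column
)

silver.write.format("delta").mode("overwrite").partitionBy("event_date").save(
    "/mnt/lake/silver/events"
)
```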

Posted 3 weeks ago

Apply

8.0 - 13.0 years

5 - 10 Lacs

Bengaluru

Work from Office

6+ years of experience with Java Spark. Strong understanding of distributed computing, big data principles, and batch/stream processing. Proficiency in working with AWS services such as S3, EMR, Glue, Lambda, and Athena. Experience with Data Lake architectures and handling large volumes of structured and unstructured data. Familiarity with various data formats. Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. Design, develop, and optimize large-scale data processing pipelines using Java Spark. Build scalable solutions to manage data ingestion, transformation, and storage in AWS-based Data Lake environments. Collaborate with data architects and analysts to implement data models and workflows aligned with business requirements. Ensure performance tuning, fault tolerance, and reliability of distributed data processing systems.

Posted 3 weeks ago

Apply

8.0 - 13.0 years

8 - 12 Lacs

Hyderabad

Work from Office

10+ years of experience with Java Spark. Strong understanding of distributed computing, big data principles, and batch/stream processing. Proficiency in working with AWS services such as S3, EMR, Glue, Lambda, and Athena. Experience with Data Lake architectures and handling large volumes of structured and unstructured data. Familiarity with various data formats. Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. Design, develop, and optimize large-scale data processing pipelines using Java Spark. Build scalable solutions to manage data ingestion, transformation, and storage in AWS-based Data Lake environments. Collaborate with data architects and analysts to implement data models and workflows aligned with business requirements.

Posted 3 weeks ago

Apply

6.0 - 11.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Senior Developer with 8 to 10 years of experience in Python and PySpark, along with hands-on experience with AWS data components like AWS Glue, Athena, etc. Good knowledge of data warehouse tools is needed to understand the existing system. The candidate should also have experience with Data Lake, Teradata and Snowflake, and should be good at Terraform. 8-10 years of experience in designing and developing Python and PySpark applications. Creating or maintaining data lake solutions using Snowflake, Teradata and other data warehouse tools. Should have good knowledge and hands-on experience with AWS Glue, Athena, etc. Sound knowledge of all data lake concepts and the ability to work on data migration projects. Providing ongoing support and maintenance for applications, including troubleshooting and resolving issues. Expertise in practices like Agile, peer reviews and CI/CD pipelines.

Posted 3 weeks ago

Apply

2.0 - 5.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Seeking a skilled Data Engineer to work on cloud-based data pipelines and analytics platforms. The ideal candidate will have hands-on experience in PySpark and AWS, with proficiency in designing Data Lakes and working with modern data orchestration tools.

Posted 3 weeks ago

Apply

8.0 - 13.0 years

4 - 8 Lacs

Mumbai

Work from Office

Senior Developer with 8 to 10 years of experience in Python and PySpark, along with hands-on experience with AWS data components like AWS Glue, Athena, etc. Good knowledge of data warehouse tools is needed to understand the existing system. The candidate should also have experience with Data Lake, Teradata and Snowflake, and should be good at Terraform. 8-10 years of experience in designing and developing Python and PySpark applications. Creating or maintaining data lake solutions using Snowflake, Teradata and other data warehouse tools. Should have good knowledge and hands-on experience with AWS Glue, Athena, etc. Sound knowledge of all data lake concepts and the ability to work on data migration projects. Providing ongoing support and maintenance for applications, including troubleshooting and resolving issues. Expertise in practices like Agile, peer reviews and CI/CD pipelines.

Posted 3 weeks ago

Apply

8.0 - 13.0 years

10 - 15 Lacs

Hyderabad

Work from Office

1. 6+ years of experience as a DevOps/Build & Release Engineer, covering application configurations, code compilation, packaging, building, managing and releasing code from one environment to another.
2. Proficient with container systems like Docker and container orchestration like EC2 Container Service and Kubernetes, including managed Docker orchestration and Docker containerization using Kubernetes.
3. Experienced working as a DevOps Engineer on various technologies/applications like SVN, GIT, Ant, Maven, Artifactory, Jenkins, OpenShift Containers (OCP), Chef, Docker, Kubernetes, Azure cloud, App Services, Function Apps, Storage accounts, Data Lake, Event Hubs, Event Grids, Azure DevOps, and AWS DevOps.
4. Experienced in version control tools like TFS, VSTS, SVN, Bitbucket and GitHub.
5. Extensive experience using Maven and Ant as build tools for building deployable artifacts from source code.
6. Experience with Groovy, Jenkins, Azure DevOps, AWS DevOps, build pipelines and release pipelines for continuous integration and end-to-end automation of all builds and deployments.

Version Control Systems: GIT, Subversion (SVN)
Operating Systems: RHEL Linux
CI/CD Tools: Jenkins, Artifactory, Nexus, Azure DevOps, Kafka
Containers: Docker, Kubernetes, Packer, OCP (OpenShift Container Platform)
Scripting Languages: JavaScript, Unix shell scripting, PowerShell
Web Servers: Tomcat, Apache, IIS, JBoss, Spring Boot
Cloud Platforms: Azure, AWS
Databases: Oracle, MongoDB, Couchbase
Project Management Tools: MS Office, MS Project
Bug Tracking Tools: JIRA

Posted 3 weeks ago

Apply

5.0 - 10.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Seeking a skilled Data Engineer to work on cloud-based data pipelines and analytics platforms. The ideal candidate will have hands-on experience in PySpark and AWS, with proficiency in designing Data Lakes and working with modern data orchestration tools.

Posted 3 weeks ago

Apply

8.0 - 12.0 years

10 - 20 Lacs

Mumbai, Pune

Hybrid

Role: Cloud Data Architect
Experience: 8 to 12 years
Location: Mumbai/Pune

As a Cloud Data Architect, you will design, implement, and evangelize scalable, secure data architectures in a cloud environment. In addition to driving technical excellence in client delivery, you will collaborate with our sales and pre-sales teams to develop reusable assets, accelerators, and artifacts that support RFP responses and help win new business.

Role Summary
• Architect, design, and deploy end-to-end cloud data solutions while also serving as a technical advisor in sales engagements.
• Create accelerators, solution blueprints, and artifacts that can be leveraged for client proposals and RFP responses.
• Collaborate across multiple teams to ensure seamless integration of technical solutions with client delivery and business goals.

Key Skills / Technologies
• Must-Have:
o Cloud Platforms (AWS, Azure, or Google Cloud)
o Data Warehousing & Data Lakes (Redshift, BigQuery, Snowflake, etc.)
o Big Data Technologies (Hadoop, Spark, Kafka)
o SQL & NoSQL databases
o ETL/ELT tools and pipelines
o Data Modeling & Architecture design
• Good-to-Have:
o Infrastructure-as-Code (Terraform, CloudFormation)
o Containerization & orchestration (Docker, Kubernetes)
o Programming languages (Python, Java, Scala)
o Data Governance and Security best practices

Responsibilities
• Technical Architecture & Delivery:
o Design and build cloud-based data platforms, including data lakes, data warehouses, and real-time data pipelines.
o Ensure data quality, consistency, and security across all systems.
o Work closely with cross-functional teams to integrate diverse data sources into a cohesive architecture.
• Sales & Pre-Sales Support:
o Develop technical assets, accelerators, and reference architectures that support RFP responses and sales proposals.
o Collaborate with sales teams to articulate technical solutions and demonstrate value to prospective clients.
o Present technical roadmaps and participate in client meetings to support business development efforts.
• Cross-Functional Collaboration:
o Serve as a liaison between technical delivery teams and sales teams, ensuring alignment of strategies and seamless client hand-offs.
o Mentor team members and lead technical discussions on architecture best practices.

Required Qualifications
• Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
• 5+ years of experience in data architecture/design within cloud environments, with a track record of supporting pre-sales initiatives.
• Proven experience with cloud data platforms and big data technologies.
• Strong analytical, problem-solving, and communication skills.

Why Join Us
• Influence both our technical roadmap and sales strategy by shaping cutting-edge, cloud-first data solutions.
• Work with a dynamic, multi-disciplinary team and gain exposure to high-profile client engagements.
• Enjoy a culture of innovation, professional growth, and collaboration with competitive compensation and benefits.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Mumbai

Work from Office

The candidate must possess knowledge relevant to the functional area, act as a subject matter expert in providing advice in the area of expertise, and focus on continuous improvement for maximum efficiency. It is vital to focus on a high standard of delivery excellence, provide top-notch service quality, and develop successful long-term business partnerships with internal/external customers by identifying and fulfilling customer needs. He/she should be able to break down complex problems into logical and manageable parts in a systematic way, generate and compare multiple options, and set priorities to resolve problems. The ideal candidate must be proactive and go beyond expectations to achieve job results and create new opportunities. He/she must positively influence the team, motivate high performance, promote a friendly climate, give constructive feedback, provide development opportunities, and manage the career aspirations of direct reports. Communication skills are key here, to explain organizational objectives, assignments, and the big picture to the team, and to articulate team vision and clear objectives.

Senior Process Manager roles and responsibilities:
Collaborate with stakeholders to gather and analyze business requirements.
Utilize data skills to extract, transform, and analyze data from various sources.
Interpret data to identify trends, patterns, and insights.
Generate comprehensive reports to present findings to stakeholders.
Document business processes, data flows, and requirements.
Assist in the development and implementation of data-driven solutions.
Conduct ad-hoc analysis as required to support business initiatives.

Technical and Functional Skills:
Bachelor's degree with 5+ years of experience, including 3+ years of hands-on experience as a Business Analyst or similar role.
Strong data skills with the ability to manipulate and analyze complex datasets.
Proficiency in interpreting data and translating findings into actionable insights.
Experience with report generation and data visualization tools.
Solid understanding of business processes and data flows.
Excellent communication and presentation skills.
Ability to work independently and collaboratively in a team environment.
Basic understanding of Google Cloud Platform (GCP), Tableau, SQL, and Python is a plus.
Certification in Business Analysis or a related field.
Familiarity with Google Cloud Platform (GCP) services and tools.
Experience with Tableau for data visualization.
Proficiency in SQL for data querying and manipulation.
Basic knowledge of Python for data analysis and automation.

Posted 3 weeks ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Telangana

Work from Office

Immediate openings: Azure Data Engineer, Hyderabad (Contract)
Experience: 10+ years
Skills: Azure Data Engineer
Location: Hyderabad
Notice Period: Immediate
Employment Type: Contract

10+ years of overall experience in support and development.
Primary Skills: Microsoft Azure Cloud Platform - Azure Admin (good to have), Azure Data Factory (ADF), Azure Databricks, Azure Synapse Analytics, Azure SQL, Azure DevOps, Python or PySpark (a Data Factory sketch follows this listing).
Secondary Skills: Data Lake, Azure Blob Storage, Azure Data Warehouse as a Service (DWaaS), Azure Log Analytics, Oracle, Postgres, Microsoft Storage Explorer, ServiceNow
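One way an ADF pipeline like those in this stack might be triggered programmatically, as a minimal sketch assuming the azure-identity and azure-mgmt-datafactory packages; the subscription, resource group, factory, pipeline, and parameter names are all hypothetical.

```python
# Sketch: trigger an Azure Data Factory pipeline run from Python
# (assumes azure-identity and azure-mgmt-datafactory are installed;
# all resource names and parameters are hypothetical).
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

credential = DefaultAzureCredential()
client = DataFactoryManagementClient(credential, "<subscription-id>")

run = client.pipelines.create_run(
    resource_group_name="rg-dataplatform",
    factory_name="adf-lake",
    pipeline_name="pl_copy_to_datalake",
    parameters={"load_date": "2024-01-01"},
)
print(f"Pipeline run id: {run.run_id}")
```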

Posted 3 weeks ago

Apply

8.0 - 10.0 years

8 - 12 Lacs

Chennai, Gurugram

Work from Office

Immediate openings: DevOps Azure, Gurgaon
Skill: DevOps Azure
Notice Period: Immediate
Employment Type: Contract
Job Posting Title: Software Development Specialist (DevOps)
Top 3 skills: Microsoft Azure / DevOps / .NET
Work location: Chennai / Bangalore
Shift timings: Involves multiple shifts
Experience: 7+ years

Qualifications:
DevOps skill set with roughly 8-10 years of experience; able to manage and provide technical guidance to the current team.
Skilled on Microsoft Azure cloud - ideally Azure Fundamentals certified, or a Computer Science/Information Systems Management degree.
Familiar with PaaS and IaaS - VMs, Storage, EventHub, Service Fabric Cluster (SFC), Azure Kubernetes Service (AKS), CosmosDB, SQL Server, IoT Hub, Databricks, KeyVault, Data Lake.
Understands the concepts of the Internet of Things (IoT) - telemetry, ingestion, processing, data storage, reporting.
Familiarity with tools - Octopus, Bamboo, Terraform, Azure DevOps, Jenkins, GitHub, Ansible (Chef and Puppet would be ok as well).
Familiarity with a container orchestration platform (e.g., Kubernetes).
Some experience with PowerShell, Bash, Python.
Understands the difference between NoSQL and SQL databases, and how to maintain them.
Understanding of monitoring and logging systems (ELK, Prometheus, Nagios, Zabbix, etc.).
Independent thinker - why does it break, what can I proactively do to fix it.
Strong English communication (written and oral) skills required.

Release Management:
Release management of new software via tools; understand the release management SOP: QA - Load Test - Stage Environment - PROD.
Create/manage monitoring and alerting systems as needed to meet SLAs.
Comfortable with both Linux and Windows administration.
Working in agile teams; build, test and maintain aspects of the CI/CD pipeline.
Evangelize with Engineering, Security, and cross functions on Ops best practices.
Firmware release - OTA (over the air).
Launch new mobile apps / release new versions of existing mobile apps - App Store / Play Store.
Participate in RCCAs when needed.
Maintain documentation & best practices (wiki & runbooks).
Work with teams to set up standard alerts that can be placed in ARMs & CI/CD.
Support product NPI onboarding.
Metric gathering on usage & distribution of that data.
Migration of a service from one platform or service provider to another.
Participate in early phases of NPI sprints when architecture/tech runways are defined.
Take part in tech bridges to support the troubleshooting effort when necessary.
Periodic audits to ensure there are no security issues and that only the required team members have access (user access cleanup).
Support continuous delivery of programs in which patches, new versions, and bug fixes are frequently deployed to end users without sacrificing stability or reliability.
Support on-call during off-hours crises.

Incident Management / Alerting / Monitoring:
Responsible for Tier 2 support, including end-to-end ownership of incidents from the time they enter the service line through closure for connected devices.
Responsible for 24x7 major incident management support.
Respond to, resolve, and escalate tickets in a timely manner.
Implement corrective actions needed to mitigate security risks.
Ensure all tickets requiring follow-up work and/or calls are resolved (end-to-end incident resolution support).
Ensure all components are within monitoring purview.

Posted 3 weeks ago

Apply

8.0 - 13.0 years

32 - 45 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Job Title: Data Architect
Location: Bangalore, Hyderabad, Chennai, Pune, Gurgaon - hybrid - 2/3 days WFO
Experience: 8+ years

Position Overview: We are seeking a highly skilled and strategic Data Architect to design, build, and maintain the organization's data architecture. The ideal candidate will be responsible for aligning data solutions with business needs, ensuring data integrity, and enabling scalable and efficient data flows across the enterprise. This role requires deep expertise in data modeling, data integration, cloud data platforms, and governance practices.

Key Responsibilities:
Architectural Design: Define and implement enterprise data architecture strategies, including data warehousing, data lakes, and real-time data systems.
Data Modeling: Develop and maintain logical, physical, and conceptual data models to support analytics, reporting, and operational systems.
Platform Management: Select and oversee implementation of cloud and on-premises data platforms (e.g., Snowflake, Redshift, BigQuery, Azure Synapse, Databricks).
Integration & ETL: Design robust ETL/ELT pipelines and data integration frameworks using tools such as Apache Airflow, Informatica, dbt, or native cloud services.
Data Governance: Collaborate with stakeholders to implement data quality, data lineage, metadata management, and security best practices.
Collaboration: Work closely with data engineers, analysts, software developers, and business teams to ensure seamless and secure data access.
Performance Optimization: Tune databases, queries, and storage strategies for performance, scalability, and cost-efficiency.
Documentation: Maintain comprehensive documentation for data structures, standards, and architectural decisions.

Required Qualifications:
Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
5+ years of experience in data architecture, data engineering, or database development.
Strong expertise in data modeling, relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB, Cassandra).
Experience with modern data platforms and cloud ecosystems (AWS, Azure, or GCP).
Hands-on experience with data warehousing solutions and tools (e.g., Snowflake, Redshift, BigQuery).
Proficiency in SQL and data scripting languages (e.g., Python, Scala).
Familiarity with data privacy regulations (e.g., GDPR, HIPAA) and security standards.

Tech Stack:
AWS Cloud - S3, EC2, EMR, Lambda, IAM; Snowflake DB
Databricks, Spark/PySpark, Python
Good knowledge of Bedrock and Mistral AI
RAG & NLP: LangChain and LangRAG
LLMs: Anthropic Claude, Mistral, LLaMA, etc.

Posted 3 weeks ago

Apply

8.0 - 13.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Employment Type: Contract
Skills: Azure Data Factory, SQL, Azure Blob, Azure Logic Apps

Posted 3 weeks ago

Apply

10.0 - 15.0 years

10 - 15 Lacs

Pune

Work from Office

Provides technical expertise, including addressing and resolving complex technical issues. Demonstrable experience assessing application workloads and the technology landscape for Cloud suitability, and developing the business case and Cloud adoption roadmap. Expertise in data ingestion, data loading, Data Lake, bulk processing and transformation using Azure services, and migrating on-premises services to various Azure environments. Good experience with a range of services from the Microsoft Azure Cloud Platform, including infrastructure and security related services such as Azure AD, IaaS, Containers, Storage, Networking and Azure Security. Good experience of enterprise solution shaping and Microsoft Azure Cloud architecture development, including excellent documentation skills. Good understanding of the Azure and AWS cloud service offerings covering Compute, Storage, Network, WebApp, Functions, Gateway, Clustering, Key Vault, AD. Design and develop high-performance, scalable and secure cloud-native applications on Microsoft Azure, following Azure best practices and recommendations. Design, implement and improve automations for cloud environments using native or third-party tools like Terraform, Salt, Chef, Puppet, Databricks, etc. Create business cases for transformation and modernization, including analysis of both total cost of ownership and potential cost and revenue impacts of the transformation. Advise and engage with customer executives on their Azure and AWS cloud strategy roadmap, improvements and alignment, bringing in industry best practices and trends, and work on further improvements with the required business case analysis and presentations. Provide Microsoft Azure architecture collaboration with other technical teams. Document solutions (e.g. architecture, configuration and setup). Work within a project management/agile delivery methodology in a leading role as part of a wider team. Provide effective knowledge transfer and upskilling to relevant customer personnel to ensure an appropriate level of future self-sufficiency. Assist in the transition of projects to Enterprise Services teams.

Skills Required:
Strong knowledge of Cloud security standards and principles, including Identity and Access Management in Azure.
Essential to have strong, in-depth and demonstrable hands-on experience with the following technologies:
Microsoft Azure and its relevant build, deployment, automation, networking and security technologies in cloud and hybrid environments.
Azure Stack Hub, Azure Stack HCI/Hyper-V clusters.
Microsoft Azure IaaS and Platform as a Service (PaaS) products such as Azure SQL, App Services, Logic Apps, Functions and other serverless services.
Understanding of Microsoft Identity and Access Management products, including Azure AD or AD B2C.
Microsoft Azure operational and monitoring tools, including Azure Monitor, App Insights and Log Analytics.
Microsoft Windows Server, System Center, Hyper-V and Storage Spaces.
Knowledge of PowerShell, Git, ARM templates and deployment automation (see the sketch below).
Hands-on experience with Azure and AWS cloud-native automation frameworks, along with experience in Python and Azure services like Databricks, Data Factory, Azure Functions, StreamSets etc.
Hands-on experience with IaC (Infrastructure as Code), Containers, Kubernetes (AKS), Ansible, Terraform, Docker, Linux sysadmin (RHEL/Ubuntu/Alpine), Jenkins, and building CI/CD pipelines in Azure DevOps.
Ability to define and design the technical architecture with the best-suited Azure components, ensuring a seamless end-to-end workflow from data source to Power BI/portal/dashboards/UI.

Skills Good to Have:
Experience in building big data solutions using Azure and AWS services like Analysis Services and DevOps; databases like SQL Server, Cosmos DB, DynamoDB, MongoDB; and web service integration.
Possession of either the Developing Microsoft Azure Solutions or Architecting Microsoft Azure certifications.
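As a small illustration of the ARM-template deployment automation mentioned above, a sketch using the Azure Python SDK (azure-identity and azure-mgmt-resource are assumed); the subscription, resource group, and template file are hypothetical.

```python
# Sketch: deploy an ARM template with the Azure Python SDK
# (assumes azure-identity and azure-mgmt-resource are installed;
# subscription, resource group, and template are hypothetical).
import json

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.resource.resources.models import (
    Deployment, DeploymentProperties, DeploymentMode,
)

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

with open("storage.json") as f:  # hypothetical ARM template
    template = json.load(f)

# Incremental mode adds/updates resources without deleting others in the group.
poller = client.deployments.begin_create_or_update(
    "rg-dataplatform",
    "storage-deployment",
    Deployment(
        properties=DeploymentProperties(
            mode=DeploymentMode.INCREMENTAL,
            template=template,
        )
    ),
)
print(poller.result().properties.provisioning_state)
```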

Posted 3 weeks ago

Apply

12.0 - 17.0 years

17 - 22 Lacs

Noida

Work from Office

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

As part of our strategic initiative to build a centralized capability around data and cloud engineering, we are establishing a dedicated Azure Cloud Data Engineering practice under the RMI - Optum Advisory umbrella. This team will be at the forefront of designing, developing, and deploying scalable data solutions on cloud, primarily using the Microsoft Azure platform. The practice will serve as a centralized team, driving innovation, standardization, and best practices across cloud-based data initiatives. New hires will play a pivotal role in shaping the future of our data landscape, collaborating with cross-functional teams, clients, and stakeholders to deliver impactful, end-to-end solutions.

Primary Responsibilities:
Design and implement secure, scalable, and cost-effective cloud data architectures using cloud services such as Azure Data Factory (ADF), Azure Databricks, Azure Storage, Key Vault, Snowflake, Synapse Analytics, MS Fabric/Power BI etc.
Define and lead data & cloud strategy, including migration plans, modernization of legacy systems, and adoption of new cloud capabilities.
Collaborate with clients to understand business requirements and translate them into optimal cloud architecture solutions, balancing performance, security, and cost.
Evaluate and compare cloud services (e.g., Databricks, Snowflake, Synapse Analytics) and recommend the best-fit solutions based on project needs and organizational goals.
Lead the full lifecycle of data platform and product implementations, from planning and design to deployment and support.
Drive cloud migration initiatives, ensuring smooth transition from on-premise systems while engaging and upskilling existing teams.
Lead and mentor a team of cloud and data engineers, fostering a culture of continuous learning and technical excellence.
Plan and guide the team in building Proofs of Concept (POCs), exploring new cloud capabilities, and validating emerging technologies.
Establish and maintain comprehensive documentation for cloud setup processes, architecture decisions, and operational procedures.
Work closely with internal and external stakeholders to gather requirements, present solutions, and ensure alignment with business objectives.
Ensure all cloud solutions adhere to security best practices, compliance standards, and governance policies.
Prepare case studies and share learnings from implementations to build organizational knowledge and improve future projects.
Build and analyze data engineering processes, acting as an SME to troubleshoot performance issues and suggest improvements.
Develop and maintain CI/CD processes using Jenkins, GitHub, GitHub Actions, Maven etc.
Build a test framework for Databricks notebook jobs for automated testing before code deployment (see the pytest sketch below).
Continuously explore new Azure services and capabilities, assessing their applicability to business needs.
Create detailed documentation for cloud processes, architecture, and implementation patterns.
Contribute to full lifecycle project implementations, from design and development to deployment and monitoring.
Identify solutions to non-standard requests and problems.
Mentor and support existing on-prem developers for the cloud environment.
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
Undergraduate degree or equivalent experience.
12+ years of overall experience in Data & Analytics engineering.
10+ years of solid experience working as an Architect designing data platforms using Azure, Databricks, Snowflake, ADF, Data Lake, Synapse Analytics, Power BI etc.
10+ years of experience working with data platforms or products using PySpark and Spark SQL.
In-depth experience designing complex Azure architectures for various business needs, and the ability to come up with efficient designs and solutions.
Solid experience with CI/CD tools such as Jenkins, GitHub, GitHub Actions, Maven etc.
Experience in leading teams and people management.
Highly proficient, hands-on experience with Azure services and Databricks/Snowflake development.
Excellent communication and stakeholder management skills.

Preferred Qualifications:
Snowflake and Airflow experience.
Power BI development experience.
Experience with or knowledge of health care concepts - E&I, M&R, C&S LOBs, Claims, Members, Provider, Payers, Underwriting.

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission. #NIC
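Illustrating the automated testing mentioned for Databricks notebook jobs, a minimal pytest sketch that exercises notebook-style transformation logic against a local SparkSession. The transformation function and column names are hypothetical stand-ins, not from the listing.

```python
# Sketch: pytest for notebook transformation logic on a local SparkSession
# (the transformation under test is a hypothetical stand-in).
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F


def add_claim_year(df):
    """Hypothetical notebook transformation: derive claim_year from claim_date."""
    return df.withColumn("claim_year", F.year("claim_date"))


@pytest.fixture(scope="session")
def spark():
    # Local session so the suite can run in CI before code is deployed.
    return SparkSession.builder.master("local[2]").appName("unit-tests").getOrCreate()


def test_add_claim_year(spark):
    df = spark.createDataFrame([("c1", "2024-03-15")], ["claim_id", "claim_date"])
    df = df.withColumn("claim_date", F.to_date("claim_date"))
    out = add_claim_year(df).collect()[0]
    assert out["claim_year"] == 2024
```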

Posted 3 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies