5.0 - 8.0 years
10 - 20 Lacs
Bengaluru
Hybrid
Role & responsibilities
- Strong hands-on experience with multi-cloud (AWS, Azure, GCP) services such as GCP BigQuery, Dataform, and AWS Redshift.
- Proficient in PySpark and SQL for building scalable data processing pipelines.
- Knowledge of serverless technologies such as AWS Lambda and Google Cloud Functions.
- Experience with orchestration frameworks such as Apache Airflow, Kubernetes, and Jenkins to manage and orchestrate data pipelines.
- Experience in developing and optimizing ETL/ELT pipelines and working on cloud data warehouse migration projects.
- Exposure to client-facing roles, with strong problem-solving and communication skills.
- Prior experience in consulting or working in a consulting environment is preferred.
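For illustration, a minimal sketch of the kind of PySpark batch transformation such a pipeline role involves; the storage paths, column names, and aggregation logic are hypothetical, not taken from the posting.

```python
# Minimal PySpark batch transformation sketch; all paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_orders_rollup").getOrCreate()

# Read raw data from cloud object storage (s3a://, gs://, or abfss:// URIs behave the
# same once the corresponding connector is on the classpath).
orders = spark.read.parquet("s3a://example-bucket/raw/orders/")

# Aggregate with the DataFrame API; the same logic could be written as Spark SQL.
daily_totals = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("total_amount"),
         F.countDistinct("order_id").alias("order_count"))
)

# Write partitioned output for downstream warehouse loads (Redshift, BigQuery, etc.).
daily_totals.write.mode("overwrite").partitionBy("order_date") \
    .parquet("s3a://example-bucket/curated/daily_orders/")

spark.stop()
```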
Posted 2 weeks ago
5.0 - 10.0 years
6 - 15 Lacs
Bengaluru
Work from Office
Urgent Hiring - Azure Data Engineer with a leading management consulting company @ Bangalore location.
- Strong expertise in Databricks and PySpark for both batch processing and live (streaming) data sources.
- 4+ relevant years of experience in Databricks and PySpark/Scala; 7+ total years of experience.
- Good at data modelling and design.
- Has worked on real data challenges and handled high volume, velocity, and variety of data.
- Excellent analytical and problem-solving skills, with willingness to take ownership and resolve technical challenges.
- Contributes to community-building initiatives such as CoE and CoP.
CTC: hike will be considered on current/last drawn pay. Apply: rohita.robert@adecco.com
Mandatory skills:
- Azure - Master
- ELT - Skill
- Data Modeling - Skill
- Data Integration & Ingestion - Skill
- Data Manipulation and Processing - Skill
- GitHub, GitHub Actions, Azure DevOps - Skill
- Data Factory, Databricks, SQL DB, Synapse, Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, PyTest - Skill
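As an illustration of the streaming side of this role, a hedged sketch of a PySpark Structured Streaming job of the sort run on Databricks; the Kafka broker, topic, schema, and paths are hypothetical, and `spark` is assumed to be an existing Databricks session.

```python
# Structured Streaming sketch: windowed aggregation of a Kafka topic into a Delta table.
# Broker, topic, schema, and paths are hypothetical; assumes an existing `spark` session.
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

schema = StructType([
    StructField("device_id", StringType()),
    StructField("reading", DoubleType()),
    StructField("event_ts", TimestampType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "telemetry")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# 5-minute tumbling-window average with a watermark so late data is bounded.
windowed = (
    events
    .withWatermark("event_ts", "10 minutes")
    .groupBy(F.window("event_ts", "5 minutes"), "device_id")
    .agg(F.avg("reading").alias("avg_reading"))
)

(windowed.writeStream
    .format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/mnt/checkpoints/telemetry_agg")
    .start("/mnt/curated/telemetry_agg"))
```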
Posted 2 weeks ago
7.0 - 12.0 years
20 - 35 Lacs
Bengaluru, Malaysia
Work from Office
Core Competences, Required and Desired Attributes:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proficiency in Azure Data Factory, Azure Databricks and Unity Catalog, Azure SQL Database, and other Azure data services.
- Strong programming skills in SQL, Python, and PySpark.
- Experience in the Asset Management domain is preferable.
- Strong proficiency in data analysis and data modelling, with the ability to extract insights from complex data sets.
- Hands-on experience in Power BI, including creating custom visuals, DAX expressions, and data modelling.
- Familiarity with Azure Analysis Services, data modelling techniques, and optimization.
- Experience with data quality and data governance frameworks, with the ability to debug, fine-tune, and optimise large-scale data processing jobs.
- Strong analytical and problem-solving skills, with a keen eye for detail.
- Excellent communication and interpersonal skills, with the ability to work collaboratively in a team environment.
- Proactive and self-motivated, with the ability to manage multiple tasks and deliver high-quality results within deadlines.
Posted 2 weeks ago
4.0 - 8.0 years
10 - 15 Lacs
Bengaluru
Work from Office
As a Software Developer you'll participate in many aspects of the software development lifecycle, such as design, code implementation, testing, and support. You will create software that enables your clients' hybrid-cloud and AI journeys. Your primary responsibilities include:
- Comprehensive feature development and issue resolution: work on end-to-end feature development and solve challenges faced during implementation.
- Stakeholder collaboration and issue resolution: collaborate with key stakeholders, internal and external, to understand problems and issues with the product and its features, and resolve them as per the defined SLAs.
- Continuous learning and technology integration: be eager to learn new technologies and apply them in feature development.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise:
- Proficient in .NET Core with React or Angular.
- Experience in Agile teams applying the best architectural, design, and unit-testing patterns and practices, with an eye for code quality and standards.
- Azure Function, Azure Service Bus, Azure Storage Account - mandatory.
- Azure Durable Functions.
- Azure Data Factory, Azure SQL or Cosmos DB (database) - required.
- Ability to write calculation rules and configurable consolidation rules.
Preferred technical and professional experience:
- Excellent written and verbal interpersonal skills for coordinating across teams.
- Should have at least two end-to-end implementation experiences.
- Ability to write and update the rules of historical overrides.
Posted 2 weeks ago
4.0 - 9.0 years
2 - 6 Lacs
Bengaluru
Work from Office
Roles and Responsibilities:
- 4+ years of experience as a data developer using Python.
- Knowledge of Spark/PySpark is preferable but not mandatory.
- Azure cloud experience preferred; alternate cloud experience is fine.
- Preferred experience with the Azure platform, including Azure Data Lake, Databricks, and Data Factory.
- Working knowledge of different file formats such as JSON, Parquet, and CSV.
- Familiarity with data encryption and data masking.
- Database experience in SQL Server is preferable; experience with NoSQL databases like MongoDB is preferred.
- Team player, reliable, self-motivated, and self-disciplined.
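A small, hedged sketch of the column-level data masking this posting mentions, using only standard PySpark functions; the dataset, column names, and paths are hypothetical, and a SparkSession named `spark` is assumed.

```python
# Column-level masking sketch; table, columns, and paths are hypothetical.
from pyspark.sql import functions as F

# JSON, Parquet, and CSV sources all read through the same DataFrame API.
customers = spark.read.json("/data/raw/customers.json")

masked = (
    customers
    # One-way hash for identifiers that must stay joinable but unreadable.
    .withColumn("email_hash", F.sha2(F.lower(F.col("email")), 256))
    # Partial redaction for display fields: keep only the last four digits.
    .withColumn("phone_masked", F.regexp_replace("phone", r"\d(?=\d{4})", "*"))
    .drop("email", "phone")
)

masked.write.mode("overwrite").parquet("/data/masked/customers/")
```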
Posted 2 weeks ago
4.0 - 6.0 years
5 - 9 Lacs
Hyderabad
Work from Office
- 4-6 years of experience building resilient, highly available, and scalable cloud-native platforms and solutions.
- Extensive experience with the .NET framework and related technologies: C#, Web API.
- Experience with a broad range of Azure services, mainly from the list below: Web Apps, WebJobs, Storage, Azure Key Vault, Blueprint Assignment, Azure Policy, Azure Service Bus.
- Expertise in the creation and usage of ARM templates is required.
- Usage and deployment knowledge of infrastructure as code using tools such as Terraform is required.
- Advanced knowledge of IaaS and PaaS services on Azure.
- Knowledge of monitoring tools (Application Insights) is required.
- Comprehensive understanding of the Azure platform and services.
- Knowledge of IAM (Identity and Access Management) is needed.
- App Insights, Azure SQL DB, Cosmos DB, Functions, Azure Bot Service, ExpressRoute, Azure VM, Azure VNet, Azure Active Directory, Azure AD B2C, Azure Analytics Services (Azure Analysis Services, SQL Data Warehouse, Data Factory, Databricks).
- Develop and maintain an Azure-based cloud solution, with an emphasis on best-practice cloud security.
- Automate tasks using Azure DevOps and CI/CD pipelines.
- Expertise in one of the languages such as PowerShell, Python, .NET, or C# is preferable.
- Strong knowledge of DevOps and tooling in Azure.
- Infrastructure and application monitoring across production and non-production platforms.
- Experience with DevOps orchestration, configuration, and continuous integration management technologies.
- Knowledge of hybrid public cloud design concepts.
- Good understanding of high availability and disaster recovery concepts for infrastructure.
- Problem solving: ability to analyze and resolve complex infrastructure resource and application deployment issues.
- Excellent communication skills, understanding of customer needs, negotiation skills, and vendor management skills.
Education (degree): Bachelor's degree in Computer Science, Business Information Systems, or relevant experience and accomplishments.
Technical Skills:
1. Cloud provisioning and management - Azure
2. Programming languages - C#, .NET Core, PowerShell
3. Web APIs
Posted 2 weeks ago
4.0 years
0 Lacs
India
On-site
Mandatory Skills: Azure Cloud Technologies, Azure Data Factory, Azure Databricks (advanced knowledge), PySpark, CI/CD Pipelines (Jenkins, GitLab CI/CD, or Azure DevOps), Data Ingestion, SQL
Seeking a skilled Data Engineer with expertise in Azure cloud technologies, data pipelines, and big data processing. The ideal candidate will be responsible for designing, developing, and optimizing scalable data solutions.
Responsibilities
- Azure Databricks and Azure Data Factory expertise: demonstrate proficiency in designing, implementing, and optimizing data workflows using Azure Databricks and Azure Data Factory. Provide expertise in configuring and managing data pipelines within the Azure cloud environment.
- PySpark proficiency: possess a strong command of PySpark for data processing and analysis. Develop and optimize PySpark code to ensure efficient and scalable data transformations.
- Big data & CI/CD experience: troubleshoot and optimize data processing tasks on large datasets. Design and implement automated CI/CD pipelines for data workflows, using tools like Jenkins, GitLab CI/CD, or Azure DevOps to automate the building, testing, and deployment of data pipelines.
- Data pipeline development & deployment: design, implement, and maintain end-to-end data pipelines for various data sources and destinations. This includes unit tests for individual components, integration tests to ensure that different components work together correctly, and end-to-end tests to verify the entire pipeline's functionality. Familiarity with GitHub/repos for code deployment. Ensure data quality, integrity, and reliability throughout the entire data pipeline.
- Extraction, ingestion, and consumption frameworks: develop frameworks for efficient data extraction, ingestion, and consumption. Implement best practices for data integration and ensure seamless data flow across the organization.
- Collaboration and communication: collaborate with cross-functional teams to understand data requirements and deliver scalable solutions. Communicate effectively with stakeholders to gather and clarify data-related requirements.
Requirements
- Bachelor's or master's degree in Computer Science, Data Engineering, or a related field.
- 4+ years of relevant hands-on experience in data engineering with Azure cloud services and advanced Databricks.
- Strong analytical and problem-solving skills in handling large-scale data pipelines.
- Experience in big data processing and working with structured and unstructured datasets.
- Expertise in designing and implementing data pipelines for ETL workflows.
- Strong proficiency in writing optimized queries and working with relational databases.
- Experience in developing data transformation scripts and managing big data processing using PySpark.
Skills: SQL, Azure, Azure Databricks (advanced knowledge), PySpark, data ingestion, Azure cloud technologies, Azure Data Factory, CI/CD pipelines (Jenkins, GitLab CI/CD, or Azure DevOps)
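Since the posting asks for unit tests on pipeline components, here is a hedged sketch of a pytest-based test for a small PySpark transformation; the function under test, its column names, and the expected values are all hypothetical.

```python
# Unit-testing sketch for a PySpark transformation with pytest.
# The transformation (add_net_amount) and its columns are hypothetical.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F


def add_net_amount(df):
    """Example transformation: net = gross - tax."""
    return df.withColumn("net_amount", F.col("gross_amount") - F.col("tax_amount"))


@pytest.fixture(scope="session")
def spark():
    # A small local session is enough for unit tests in CI.
    return SparkSession.builder.master("local[2]").appName("unit-tests").getOrCreate()


def test_add_net_amount(spark):
    df = spark.createDataFrame(
        [(100.0, 18.0), (50.0, 9.0)], ["gross_amount", "tax_amount"]
    )
    result = {row["net_amount"] for row in add_net_amount(df).collect()}
    assert result == {82.0, 41.0}
```

In a CI/CD pipeline (Jenkins, GitLab CI/CD, or Azure DevOps), a test like this would run on every commit before the pipeline code is deployed.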
Posted 2 weeks ago
8.0 - 12.0 years
25 - 30 Lacs
Chennai
Work from Office
Job description
Job Title: Manager, Data Engineer - Azure
Location: Chennai (On-site)
Experience: 8 - 12 years
Employment Type: Full-Time
About the Role
We are seeking a highly skilled Senior Azure Data Solutions Architect to design and implement scalable, secure, and efficient data solutions supporting enterprise-wide analytics and business intelligence initiatives. You will lead the architecture of modern data platforms, drive cloud migration, and collaborate with cross-functional teams to deliver robust Azure-based solutions.
Key Responsibilities
- Architect and implement end-to-end data solutions using Azure services (Data Factory, Databricks, Data Lake, Synapse, Cosmos DB).
- Design robust and scalable data models, including relational, dimensional, and NoSQL schemas.
- Develop and optimize ETL/ELT pipelines and data lakes using Azure Data Factory, Databricks, and open formats such as Delta and Iceberg.
- Integrate data governance, quality, and security best practices into all architecture designs.
- Support analytics and machine learning initiatives through structured data pipelines and platforms.
- Collaborate with data engineers, analysts, data scientists, and business stakeholders to align solutions with business needs.
- Drive CI/CD integration with Databricks using Azure DevOps and tools like dbt.
- Monitor system performance, troubleshoot issues, and optimize data infrastructure for efficiency and reliability.
- Stay current with Azure platform advancements and recommend improvements.
Required Skills & Experience
- Extensive hands-on experience with Azure services: Data Factory, Databricks, Data Lake, Azure SQL, Cosmos DB, Synapse.
- Expertise in data modeling and design (relational, dimensional, NoSQL).
- Proven experience with ETL/ELT processes, data lakes, and modern lakehouse architectures.
- Proficiency in Python, SQL, Scala, and/or Java.
- Strong knowledge of data governance, security, and compliance frameworks.
- Experience with CI/CD, Azure DevOps, and infrastructure as code (Terraform or ARM templates).
- Familiarity with BI and analytics tools such as Power BI or Tableau.
- Excellent communication, collaboration, and stakeholder management skills.
- Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.
Preferred Qualifications
- Experience in regulated industries (finance, healthcare, etc.).
- Familiarity with data cataloging, metadata management, and machine learning integration.
- Leadership experience guiding teams and presenting architectural strategies to leadership.
Why Join Us?
- Work on cutting-edge cloud data platforms in a collaborative, innovative environment.
- Lead strategic data initiatives that impact enterprise-wide decision-making.
- Competitive compensation and opportunities for professional growth.
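One common lakehouse pattern behind the Delta-format ETL/ELT work described above is an incremental upsert via Delta Lake's MERGE. A minimal sketch follows, assuming a Databricks or delta-spark environment with an existing `spark` session; the table paths and join key are hypothetical.

```python
# Incremental upsert (MERGE) into a Delta table; paths and key column are hypothetical.
from delta.tables import DeltaTable

# New or changed rows landed by an upstream extract.
updates = spark.read.parquet("/mnt/landing/customers_delta/")

target = DeltaTable.forPath(spark, "/mnt/lakehouse/silver/customers")

(target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()      # refresh existing rows
    .whenNotMatchedInsertAll()   # insert brand-new rows
    .execute())
```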
Posted 3 weeks ago
9.0 - 14.0 years
10 - 16 Lacs
Chennai
Work from Office
Azure Databricks, Data Factory, PySpark, SQL. If you are interested in this position, send your CV to muniswamyinfyjob@gmail.com.
Posted 3 weeks ago
6.0 - 9.0 years
4 - 8 Lacs
Pune
Work from Office
Your Role
As a senior software engineer with Capgemini, you will have 6+ years of experience in Azure technology with a strong project track record. In this role you will play a key role in:
- Strong customer orientation, decision making, problem solving, communication, and presentation skills.
- Very good judgement skills and the ability to shape compelling solutions and solve unstructured problems with assumptions.
- Very good collaboration skills and the ability to interact with multi-cultural and multi-functional teams spread across geographies.
- Strong executive presence and entrepreneurial spirit.
- Superb leadership and team-building skills, with the ability to build consensus and achieve goals through collaboration rather than direct line authority.
Your Profile
- Experience with Azure Databricks and Data Factory.
- Experience with Azure data components such as Azure SQL Database, Azure SQL Warehouse, and Synapse Analytics.
- Experience in Python/PySpark/Scala/Hive programming.
- Experience with Azure Databricks (ADB) is a must-have.
- Experience with building CI/CD pipelines in data environments.
Posted 3 weeks ago
7.0 - 12.0 years
10 - 15 Lacs
Bangalore Rural
Work from Office
A candidate with an understanding of distributed computing and experience with SQL, Spark, and ETL. Experience using databases such as MySQL, PostgreSQL, and Oracle. AWS- or Data Factory-based ETL on Azure is required.
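A hedged sketch of the kind of relational-source extraction this role implies: reading a table into Spark over JDBC and landing it in cloud storage. The connection URL, credentials handling, table name, and target path are hypothetical, and the matching JDBC driver is assumed to be on the Spark classpath.

```python
# JDBC extract sketch; connection details, table, and target path are hypothetical.
import os
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("jdbc_extract").getOrCreate()

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db-host:5432/sales")
    .option("dbtable", "public.orders")
    .option("user", os.environ["DB_USER"])
    .option("password", os.environ["DB_PASSWORD"])
    .option("fetchsize", "10000")   # larger fetch size reduces round trips
    .load()
)

# Land the extract as Parquet in the data lake for downstream ETL.
orders.write.mode("overwrite").parquet(
    "abfss://raw@examplelake.dfs.core.windows.net/orders/"
)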
Posted 3 weeks ago
6.0 - 10.0 years
8 - 12 Lacs
Pune, Gurugram, Bengaluru
Work from Office
Contractual hiring. Hiring manager profile: linkedin.com/in/yashsharma1608. Payroll company: https://www.nyxtech.in/
Role: Azure Data Engineer with Fabric (Lead Data Engineer, on payroll). Client: Brillio.
About the Role:
- Experience: 6 to 8 years
- Location: Bangalore, Hyderabad, Pune, Chennai, Gurgaon (Hyderabad is preferred)
- Notice period: 15 days / 30 days
- Budget: 15 LPA
- Azure Fabric experience is mandatory
- Skills: Azure OneLake, data pipelines, Apache Spark, ETL, Data Factory, Azure Fabric, SQL, Python/Scala
Key Responsibilities:
- Data pipeline development: lead the design, development, and deployment of data pipelines using Azure OneLake, Azure Data Factory, and Apache Spark, ensuring efficient, scalable, and secure data movement across systems.
- ETL architecture: architect and implement ETL (extract, transform, load) workflows, optimizing the process for data ingestion, transformation, and storage in the cloud.
- Data integration: build and manage data integration solutions that connect multiple data sources (structured and unstructured) into a cohesive data ecosystem. Use SQL, Python, Scala, and R to manipulate and process large datasets.
- Azure OneLake expertise: leverage Azure OneLake and Azure Synapse Analytics to design and implement scalable data storage and analytics solutions that support big data processing and analysis.
- Collaboration with teams: work closely with data scientists, data analysts, and BI engineers to ensure that the data infrastructure supports analytical needs and is optimized for performance and accuracy.
- Performance optimization: monitor, troubleshoot, and optimize data pipeline performance to ensure high availability, fast processing, and minimal downtime.
- Data governance & security: implement best practices for data governance, data security, and compliance within the Azure ecosystem, ensuring data privacy and protection.
- Leadership & mentorship: lead and mentor a team of data engineers, promoting a collaborative and high-performance team culture. Oversee code reviews, design decisions, and the implementation of new technologies.
- Automation & monitoring: automate data engineering workflows, job scheduling, and monitoring to ensure smooth operations. Use tools like Azure DevOps, Airflow, and other relevant platforms for automation and orchestration.
- Documentation & best practices: document data pipeline architecture, data models, and ETL processes, and contribute to the establishment of engineering best practices, standards, and guidelines.
- Innovation: stay current with industry trends and emerging technologies in data engineering, cloud computing, and big data analytics, driving innovation within the team.
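For the Airflow-based orchestration mentioned above, a minimal hedged sketch of a daily DAG chaining two tasks; the DAG id, schedule, and task callables are hypothetical, and in practice the heavy lifting would typically be delegated to Databricks or Data Factory operators rather than plain PythonOperators.

```python
# Minimal Airflow DAG sketch: ingest -> transform, scheduled daily.
# DAG id, schedule, and callables are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest(**context):
    print("pull raw files into the landing zone")


def transform(**context):
    print("run the Spark transformation job")


with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    # Dependency: transform only runs after ingest succeeds.
    ingest_task >> transform_task
```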
Posted 3 weeks ago
5.0 - 8.0 years
2 - 6 Lacs
Bengaluru
Work from Office
Job Information
- Job Opening ID: ZR_1628_JOB
- Date Opened: 09/12/2022
- Industry: Technology
- Job Type:
- Work Experience: 5-8 years
- Job Title: Data Engineer
- City: Bangalore
- Province: Karnataka
- Country: India
- Postal Code: 560001
- Number of Positions: 4
Roles and Responsibilities:
- 4+ years of experience as a data developer using Python.
- Knowledge of Spark/PySpark is preferable but not mandatory.
- Azure cloud experience preferred; alternate cloud experience is fine.
- Preferred experience with the Azure platform, including Azure Data Lake, Databricks, and Data Factory.
- Working knowledge of different file formats such as JSON, Parquet, and CSV.
- Familiarity with data encryption and data masking.
- Database experience in SQL Server is preferable; experience with NoSQL databases like MongoDB is preferred.
- Team player, reliable, self-motivated, and self-disciplined.
Posted 3 weeks ago
5.0 - 10.0 years
15 - 30 Lacs
Noida, Hyderabad, Delhi / NCR
Work from Office
Job Role: Azure Data Engineer
Location: Greater Noida & Hyderabad
Experience: 5 to 10 years
Notice Period: Immediate to 30 days
Job Description:
- Bachelor's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS/MIS), Engineering, or a related technical discipline, or equivalent experience/training.
- Many years of software solution development using agile and DevOps, operating in a product model that includes designing, developing, and implementing large-scale applications or data engineering solutions.
- 3 years of data engineering experience using SQL.
- 2 years of cloud development (preferably Microsoft Azure), including Azure EventHub, Azure Data Factory, Azure Databricks, Azure DevOps, Azure Blob Storage, Azure Power Apps, and Power BI.
- A combination of development, administration, and support experience in several of the following tools/platforms is required:
  a. Scripting: Python, PySpark, Unix, SQL
  b. Data platforms: Teradata, SQL Server
  c. Azure Data Explorer (administration skills are a plus)
  d. Azure cloud technologies: Azure Data Factory, Azure Databricks, Azure Blob Storage, Azure Power Apps, and Azure Functions
  e. CI/CD: GitHub, Azure DevOps, Terraform
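As an illustration of the Azure Event Hub work listed above, a minimal sketch of publishing a batch of events with the azure-eventhub Python SDK; the connection-string environment variable, hub name, and payload shape are hypothetical.

```python
# Event Hubs producer sketch; connection string env var, hub name, and payload are hypothetical.
import json
import os

from azure.eventhub import EventHubProducerClient, EventData

producer = EventHubProducerClient.from_connection_string(
    conn_str=os.environ["EVENTHUB_CONNECTION_STRING"],
    eventhub_name="telemetry",
)

with producer:
    batch = producer.create_batch()
    for i in range(10):
        # Each event carries a small JSON payload.
        batch.add(EventData(json.dumps({"device_id": f"dev-{i}", "reading": 21.5 + i})))
    producer.send_batch(batch)
```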
Posted 3 weeks ago
6.0 - 11.0 years
20 - 22 Lacs
Hyderabad, Bengaluru
Work from Office
Detailed job description - Skill Set:
- 6+ years of experience in database and data warehouse technologies (Azure Synapse/ADB, ADF, SQL Server, SAP HANA).
- Experience in ADF, ADB, and Azure Synapse.
- Solid knowledge of Python, Spark, and SQL.
- Experience with PowerShell scripting.
- Effectively analyzes heterogeneous source data and writes SQL scripts to integrate data from multiple data sources.
- Performs statistical analysis, assesses the results to generate actionable insights, and presents the findings to business users for informed decision-making.
- Performs data mining, which provides actionable data in response to changing business requirements.
- Adapts to changing business requirements and supports the development and implementation of best-known methods with respect to data analytics.
Mandatory Skills (top 3 technical skills):
- Experience in ADF, ADB, Azure Synapse
- Solid knowledge of Python, PySpark, SQL
- Experience with PowerShell scripting
Posted 3 weeks ago
12.0 - 15.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: SAP Master Data Management & Architecture
Good-to-have skills: NA
Minimum 12 year(s) of experience is required.
Educational Qualification: 15 years full-time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application requirements are met, overseeing the development process, and providing guidance to team members. You will also engage in problem-solving activities, ensuring that solutions are effectively implemented across multiple teams, while maintaining a focus on quality and efficiency in application delivery.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and ensure alignment with strategic goals.
Professional & Technical Skills:
- Must-have skills: proficiency in SAP Master Data Management & Architecture.
- Strong understanding of data governance principles and practices.
- Experience with data integration techniques and tools.
- Ability to design and implement data models that support business processes.
- Familiarity with data quality management and data lifecycle management.
Additional Information:
- The candidate should have a minimum of 12 years of experience in SAP Master Data Management & Architecture.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
Posted 3 weeks ago
5.0 - 10.0 years
15 - 30 Lacs
Pune, Gurugram, Bengaluru
Work from Office
Warm welcome from SP Staffing Services! Reaching out to you regarding a permanent opportunity.
Job Description:
- Experience: 5-10 years
- Location: Gurugram/Bangalore/Pune
- Skill: Azure Data Engineer
If interested, please share your resume to sangeetha.spstaffing@gmail.com with the details below:
- Full name as per PAN
- Mobile number
- Alternate/WhatsApp number
- Total experience
- Relevant experience in Databricks
- Relevant experience in PySpark
- Relevant experience in DWH
- Relevant experience in Python
- Current CTC
- Expected CTC
- Notice period (official)
- Notice period (negotiable)/reason
- Date of birth
- PAN number
- Reason for job change
- Offer in pipeline (current status)
- Availability for virtual interview on weekdays between 10 AM and 4 PM (please mention time)
- Current residential location
- Preferred job location
- Whether educational % in 10th std, 12th std, and UG is all above 50%
- Do you have any gaps in your education or career? If so, please mention the duration in months/years
Posted 4 weeks ago
3.0 - 6.0 years
1 - 6 Lacs
Gurugram
Work from Office
Role & responsibilities
- Design, develop, and maintain scalable Python applications for data processing and analytics.
- Build and manage ETL pipelines using Databricks on Azure/AWS cloud platforms.
- Collaborate with analysts and other developers to understand business requirements and implement data-driven solutions.
- Optimize and monitor existing data workflows to improve performance and scalability.
- Write clean, maintainable, and testable code following industry best practices.
- Participate in code reviews and provide constructive feedback.
- Maintain documentation and contribute to project planning and reporting.
Skills & Experience
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Prior experience as a Python Developer or in a similar role, with a strong portfolio showcasing past projects.
- 2-5 years of Python experience.
- Strong proficiency in Python programming.
- Hands-on experience with the Databricks platform (notebooks, Delta Lake, Spark jobs, cluster configuration, etc.).
- Good knowledge of Apache Spark and its Python API (PySpark).
- Experience with cloud platforms (preferably Azure or AWS) and working with Databricks on cloud.
- Familiarity with data pipeline orchestration tools (e.g., Airflow, Azure Data Factory, etc.).
Posted 4 weeks ago
4.0 - 8.0 years
6 - 10 Lacs
Bengaluru
Work from Office
At BCE Global Tech, immerse yourself in exciting projects that are shaping the future of both consumer and enterprise telecommunications. This involves building innovative mobile apps to enhance user experiences and enable seamless connectivity on the go. Thrive in diverse roles like Full Stack Developer, Backend Developer, UI/UX Designer, DevOps Engineer, Cloud Engineer, Data Science Engineer, and Scrum Master, at a workplace that encourages you to freely share your bold and different ideas. If you are passionate about technology and eager to make a difference, we want to hear from you! Apply now to join our dynamic team in Bengaluru.
We are seeking a talented Site Reliability Engineer (SRE) to join our team. The ideal candidate will have a strong background in software engineering and systems administration, with a passion for building scalable and reliable systems. As an SRE, you will collaborate with development and operations teams to ensure our services are reliable, performant, and highly available.
Key Responsibilities
- Ensure the 24/7 operations and reliability of data services in our production GCP and on-premise Hadoop environments.
- Collaborate with the data engineering development team to design, build, and maintain scalable, reliable, and secure data pipelines and systems.
- Develop and implement monitoring, alerting, and incident response strategies to proactively identify and resolve issues before they impact production.
- Drive the implementation of security and reliability best practices across the software development life cycle.
- Contribute to the development of tools and automation to streamline the management and operation of data services.
- Participate in the on-call rotation and respond to incidents in a timely and effective manner.
- Continuously evaluate and improve the reliability, scalability, and performance of data services.
Technology Skills
- 4+ years of experience in site reliability engineering or a similar role.
- Strong experience with Google Cloud Platform (GCP) services, including BigQuery, Dataflow, Pub/Sub, and Cloud Storage.
- Experience with on-premise Hadoop environments and related technologies (HDFS, Hive, Spark, etc.).
- Proficiency in at least one programming language (Python, Scala, Java, Go, etc.).
Required qualifications to be successful in this role
- Bachelor's degree in computer science, engineering, or a related field.
- 8-10 years of experience as an SRE.
- Proven experience as an SRE, DevOps engineer, or in a similar role.
- Strong problem-solving skills and the ability to work under pressure.
- Excellent communication and collaboration skills.
- Flexibility to work in EST time zones (9-5 EST).
Additional Information
- Job Type: Full Time
- Work Profile: Hybrid (Work from Office/Remote)
- Years of Experience: 8-10 years
- Location: Bangalore
What We Offer
- Competitive salaries and comprehensive health benefits.
- Flexible work hours and remote work options.
- Professional development and training opportunities.
- A supportive and inclusive work environment.
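A hedged sketch of the kind of data-freshness check an SRE in this role might run against BigQuery to catch stale pipelines; the project, dataset, table, watermark column, and two-hour threshold are all hypothetical, and credentials are assumed to come from the usual Application Default Credentials mechanism.

```python
# BigQuery freshness-check sketch; project, table, column, and threshold are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
    SELECT TIMESTAMP_DIFF(CURRENT_TIMESTAMP(), MAX(ingest_ts), MINUTE) AS minutes_stale
    FROM `example-project.analytics.events`
"""

row = next(iter(client.query(sql).result()))
if row.minutes_stale is None or row.minutes_stale > 120:
    # In production this would page through the alerting stack instead of printing.
    print(f"ALERT: events table is stale ({row.minutes_stale} minutes since last ingest)")
else:
    print(f"OK: last ingest {row.minutes_stale} minutes ago")
```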
Posted 4 weeks ago
6.0 - 10.0 years
20 - 27 Lacs
Bengaluru
Remote
Greetings!!!
Position: Azure Data Engineer
Budget: 28.00 LPA
Type: FTE/Lateral
Location: Pan India (WFH)
Experience: 6 - 10 years
Job Description: Azure Data Engineer
Must-have skills: Azure, Databricks, Data Factory, Python, PySpark
Posted 4 weeks ago
4.0 - 6.0 years
6 - 8 Lacs
Mumbai
Work from Office
Design and implement data solutions using Microsoft Azure and Databricks platforms. You will work with cloud-based tools for data engineering, analysis, and machine learning. Expertise in Azure, Databricks, and cloud data solutions is required.
Posted 4 weeks ago
4.0 - 8.0 years
9 - 14 Lacs
Hyderabad
Work from Office
Software Engineering Senior Analyst
ABOUT EVERNORTH: Evernorth exists to elevate health for all, because we believe health is the starting point for human potential and progress. As champions for affordable, predictable and simple health care, we solve the problems others don't, won't or can't.
Excited to grow your career? We value our talented employees, and whenever possible strive to help one of our associates grow professionally before recruiting new talent to our open positions. If you think the open position you see is right for you, we encourage you to apply! Our people make all the difference in our success.
We are looking for an engineer to develop, optimize, and fine-tune AI models for performance, scalability, and accuracy. In this role you will support the full software lifecycle of design, development, testing, and support for technical delivery. This role requires working with both onsite and offshore team members in properly defining testable scenarios based on requirements/acceptance criteria.
Responsibilities
- Participate in the daily stand-up meeting to verify the status of all ongoing tickets.
- Estimate data ingestion work in the data lake based on entity count and complexity.
- Work on designing suitable Azure cloud data management solutions to address business stakeholders' needs with regard to data ingestion, processing, and transmission to downstream systems.
- Participate in discussions with the team to understand requirements to ingest and transform data into the data lake and make processed data available to different target databases.
- Develop ingestion code to ingest data from different sources into the data lake.
- Export processed data to target databases so that it can be used in reporting.
- Optimize data ingestion and data transformation workflows; optimize long-running jobs.
- Develop the Azure migration flow and Azure Databricks jobs for the lakehouse.
- Keep track of jobs after deployment and identify performance bottlenecks, failures, and data growth.
- Track support tickets; triage, fix, and deploy.
- Responsible for monitoring and execution of the nightly ETL process that loads data into the Azure data warehouse system.
- Responsible for onboarding new clients for the Member Model, Remittance, and Paid Claims.
- Prepare Root Cause Analysis documents and suggest solutions to mitigate future recurrence of issues.
Qualifications
Required Skills:
- Minimum of 3-5 years of professional experience.
- Experience administering the following data warehouse architecture components:
- 2+ years with Azure technologies.
- 2+ years with Azure Data Factory (ADF), ADLS Gen2, Storage Account, Lakehouse Analytics, Synapse, SQL DB, Databricks.
- 2+ years with SQL Server, Python, Scala, SSIS, SSRS.
- Understanding of data access, data retention, and archiving.
- Good hands-on experience troubleshooting data errors and ETL jobs.
- Good understanding of the ETL process and agile framework.
- Good communication skills.
Required Experience & Education:
- Software engineer (with 3-5 years of overall experience) with at least 2 years in the key skills listed above.
- Bachelor's degree equivalent in Computer Science, or equivalent, preferred.
Location & Hours of Work: HIH - Hyderabad, general shift (11:30 AM - 8:30 PM IST)
Equal Opportunity Statement
Evernorth is an Equal Opportunity Employer actively encouraging and supporting organization-wide involvement of staff in diversity, equity, and inclusion efforts to educate, inform and advance both internal practices and external work with diverse client populations.
About Evernorth Health Services Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
Posted 4 weeks ago
5.0 - 8.0 years
11 - 16 Lacs
Hyderabad
Work from Office
Software Engineering Senior Analyst
ABOUT EVERNORTH: Evernorth exists to elevate health for all, because we believe health is the starting point for human potential and progress. As champions for affordable, predictable and simple health care, we solve the problems others don't, won't or can't. We are always looking upward. And that starts with finding the right talent to help us get there.
Position Overview
Responsibilities
- Participate in the daily stand-up meeting to verify the status of all ongoing tickets.
- Estimate data ingestion work in the data lake based on entity count and complexity.
- Work on designing suitable Azure cloud data management solutions to address business stakeholders' needs with regard to data ingestion, processing, and transmission to downstream systems.
- Participate in discussions and lead the team to understand requirements to ingest and transform data into the data lake and make processed data available to different target databases.
- Review developed ingestion code that ingests data from different sources into the data lake.
- Review and perform impact analysis on proposed solutions for optimizing long-running jobs.
- Keep track of jobs after deployment and identify performance bottlenecks, failures, and data growth.
- Track support tickets; triage, fix, and deploy.
- Review prepared Root Cause Analysis documents.
- Maintain a firm grasp of processes and standard operating procedures, and influence fellow team members to follow them.
- Engage in fostering and improving organizational culture.
Qualifications
Required Skills:
- Minimum of 5-8 years of professional experience.
- Experience administering the following data warehouse architecture components:
- 5+ years with Azure technologies.
- 5+ years with Azure Data Factory (ADF), ADLS Gen2, Storage Account, Lakehouse Analytics, Synapse, SQL DB, Databricks.
- 5+ years with SQL Server, Python, Scala, SSIS, SSRS.
- Understanding of data access, data retention, and archiving.
- Good hands-on experience troubleshooting data errors and ETL jobs.
- Good understanding of the ETL process and agile framework.
- Good communication skills.
Required Experience & Education:
- Software engineer (with 5-8 years of overall experience) with at least 5 years in the key skills listed above.
- Bachelor's degree equivalent in Information Technology, Computer Science, Technology Management, or a related field of study.
Location & Hours of Work: Hyderabad and hybrid (11:30 AM IST to 8:30 PM IST)
Equal Opportunity Statement
Evernorth is an Equal Opportunity Employer actively encouraging and supporting organization-wide involvement of staff in diversity, equity, and inclusion efforts to educate, inform and advance both internal practices and external work with diverse client populations.
About Evernorth Health Services
Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
Posted 4 weeks ago
2.0 - 5.0 years
4 - 8 Lacs
Kolkata
Work from Office
Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design, to help CxOs envision and build what's next for their businesses.
Your role
- Develop and maintain data pipelines tailored to Azure environments, ensuring security and compliance with client data standards.
- Collaborate with cross-functional teams to gather data requirements, translate them into technical specifications, and develop data models.
- Leverage Python libraries for data handling, enhancing processing efficiency and robustness.
- Ensure SQL workflows meet client performance standards and handle large data volumes effectively.
- Build and maintain reliable ETL pipelines, supporting full and incremental loads and ensuring data integrity and scalability in ETL processes.
- Implement CI/CD pipelines for automated deployment and testing of data solutions.
- Optimize and tune data workflows and processes to ensure high performance and reliability.
- Monitor, troubleshoot, and optimize data processes for performance and reliability.
- Document data infrastructure and workflows, and maintain industry knowledge in data engineering and cloud technologies.
Your Profile
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- 4+ years of data engineering experience with a strong focus on Azure data services for client-centric solutions.
- Extensive expertise in Azure Synapse, Data Lake Storage, Data Factory, Databricks, and Blob Storage, ensuring secure, compliant data handling for clients.
- Good interpersonal communication skills.
- Skilled in designing and maintaining scalable data pipelines tailored to client needs in Azure environments.
- Proficient in SQL and PL/SQL for complex data processing and client-specific analytics.
What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.
About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
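To illustrate the full/incremental load support mentioned in the role, a minimal sketch of a high-water-mark incremental load in PySpark; the lake paths, watermark column, and control table are hypothetical, and a SparkSession named `spark` is assumed.

```python
# High-water-mark incremental load sketch; paths and columns are hypothetical.
from pyspark.sql import functions as F

# 1. Look up the previous high-water mark (falls back to epoch on the first full load).
try:
    last_mark = (spark.read.parquet("/lake/_control/customers_watermark")
                 .agg(F.max("high_water_mark")).collect()[0][0])
except Exception:
    last_mark = "1970-01-01 00:00:00"

# 2. Pull only rows newer than the last successful load from the source extract.
source = spark.read.parquet("/lake/landing/customers/")
delta = source.filter(F.col("modified_ts") > F.lit(last_mark))

# 3. Append the delta and persist the new high-water mark for the next run.
if delta.limit(1).count() > 0:
    delta.write.mode("append").parquet("/lake/bronze/customers/")
    (delta.agg(F.max("modified_ts").alias("high_water_mark"))
          .write.mode("overwrite").parquet("/lake/_control/customers_watermark"))
```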
Posted 4 weeks ago
3.0 years
2 - 10 Lacs
Gurgaon
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities:
- Onboard clients via the components of our data engineering pipeline, which consists of UIs, Azure Databricks, Azure Service Bus, Apache Airflow, and various container-based services configured through UIs, SQL, PL/SQL, Python, YAML, Node, and shell, with code managed in GitHub, deployed through Jenkins, and monitored through Prometheus and Grafana.
- Work as part of our client implementation team to ensure the highest standards of product configuration that meet client requirements.
- Test and troubleshoot the data pipeline using sample and live client data. Utilize Jenkins, Python, Groovy scripts, and Java to automate these tests. Must be able to parse logs to determine next actions.
- Work with product teams to ensure the product is configured appropriately.
- Utilize dashboards for Kubernetes/OpenShift to diagnose high-level issues and ensure services are healthy.
- Support the implementation immediately after go-live; work with the O&M team to transition support to that team.
- Participate in daily Agile meetings.
- Estimate project deliverables.
- Configure and test REST APIs and utilize manual tools to interact with APIs.
- Work with data providers to clarify requirements and remove roadblocks.
- Drive automation into everyday activities.
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.
Required Qualifications:
- 3+ years of experience working with SQL (preferably Oracle PL/SQL and Spark SQL) and data at scale.
- 3+ years of ETL experience ensuring source-to-target data integrity.
- Familiarity with various file types (delimited text, fixed width, XML, JSON, Parquet).
- 1+ years of coding experience with one or more of the following languages: Java, C#, Python, Node.js, using Git, with practical experience working collaboratively through Git branching strategies.
- 1+ years of experience with Microsoft Azure cloud infrastructure, Databricks, Data Factory, Data Lake, Airflow, and Cosmos DB.
- 1+ years of experience in reading and configuring YAML.
- 1+ years of experience with Service Bus, setting up ingress and egress within a subscription, or relevant Azure cloud services administration experience.
- 1+ years of experience with unit testing, code quality tools, CI/CD technologies, security, and container technologies.
- 1+ years of Agile development experience and knowledge of Agile ceremonies and practices.
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
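Given the YAML configuration and REST API testing this role describes, a hedged sketch of a config-driven endpoint smoke check in Python; the config file name, its keys, and the endpoints are hypothetical.

```python
# YAML-driven REST API smoke check; config file, keys, and endpoints are hypothetical.
import requests
import yaml

with open("client_onboarding.yaml") as f:
    config = yaml.safe_load(f)

# Example config structure assumed here:
# endpoints:
#   - name: eligibility
#     url: https://api.example.com/eligibility/health
#   - name: claims
#     url: https://api.example.com/claims/health

for endpoint in config["endpoints"]:
    resp = requests.get(endpoint["url"], timeout=10)
    status = "OK" if resp.status_code == 200 else f"FAILED ({resp.status_code})"
    print(f'{endpoint["name"]}: {status}')
```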
Posted 4 weeks ago