2.0 - 6.0 years
5 - 8 Lacs
Pune
Work from Office
Supports, develops, and maintains a data and analytics platform. Effectively and efficiently processes, stores, and makes data available to analysts and other consumers. Works with Business and IT teams to understand requirements and best leverage technologies to enable agile data delivery at scale.

Note: Although the role category in the GPP is listed as Remote, the requirement is for a Hybrid work model.

Key Responsibilities:
- Oversee the development and deployment of end-to-end data ingestion pipelines using Azure Databricks, Apache Spark, and related technologies (see the sketch after this posting).
- Design high-performance, resilient, and scalable data architectures for data ingestion and processing.
- Provide technical guidance and mentorship to a team of data engineers.
- Collaborate with data scientists, business analysts, and stakeholders to integrate various data sources into the data lake/warehouse.
- Optimize data pipelines for speed, reliability, and cost efficiency in an Azure environment.
- Enforce and advocate for best practices in coding standards, version control, testing, and documentation.
- Work with Azure services such as Azure Data Lake Storage, Azure SQL Data Warehouse, Azure Synapse Analytics, and Azure Blob Storage.
- Implement data validation and data quality checks to ensure consistency, accuracy, and integrity.
- Identify and resolve complex technical issues proactively.
- Develop reliable, efficient, and scalable data pipelines with monitoring and alert mechanisms.
- Use agile development methodologies, including DevOps, Scrum, and Kanban.

External Qualifications and Competencies

Technical Skills:
- Expertise in Spark, including optimization, debugging, and troubleshooting.
- Proficiency in Azure Databricks for distributed data processing.
- Strong coding skills in Python and Scala for data processing.
- Experience with SQL for handling large datasets.
- Knowledge of data formats such as Iceberg, Parquet, ORC, and Delta Lake.
- Understanding of cloud infrastructure and architecture principles, especially within Azure.

Leadership & Soft Skills:
- Proven ability to lead and mentor a team of data engineers.
- Excellent communication and interpersonal skills.
- Strong organizational skills with the ability to manage multiple tasks and priorities.
- Ability to work in a fast-paced, constantly evolving environment.
- Strong problem-solving, analytical, and troubleshooting abilities.
- Ability to collaborate effectively with cross-functional teams.

Competencies:
- System Requirements Engineering: Uses appropriate methods to translate stakeholder needs into verifiable requirements.
- Collaborates: Builds partnerships and works collaboratively to meet shared objectives.
- Communicates Effectively: Delivers clear, multi-mode communications tailored to different audiences.
- Customer Focus: Builds strong customer relationships and delivers customer-centric solutions.
- Decision Quality: Makes good and timely decisions to keep the organization moving forward.
- Data Extraction: Performs ETL activities and transforms data for consumption by downstream applications.
- Programming: Writes and tests computer code, version control, and build automation.
- Quality Assurance Metrics: Uses measurement science to assess solution effectiveness.
- Solution Documentation: Documents information for improved productivity and knowledge transfer.
- Solution Validation Testing: Ensures solutions meet design and customer requirements.
- Data Quality: Identifies, understands, and corrects data flaws.
- Problem Solving: Uses systematic analysis to address and resolve issues.
- Values Differences: Recognizes the value that diverse perspectives bring to an organization.

Preferred Knowledge & Experience:
- Exposure to Big Data open-source technologies (Spark, Scala/Java, MapReduce, Hive, HBase, Kafka, etc.).
- Experience with SQL and working with large datasets.
- Clustered compute cloud-based implementation experience.
- Familiarity with developing applications requiring large file movement in a cloud-based environment.
- Exposure to Agile software development and analytical solutions.
- Exposure to IoT technology.

Additional Responsibilities Unique to this Position

Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.
- Experience: 3 to 5 years of experience in data engineering or a related field.
- Strong hands-on experience with Azure Databricks, Apache Spark, Python/Scala, CI/CD, Snowflake, and Qlik for data processing.
- Experience working with multiple file formats like Parquet, Delta, and Iceberg.
- Knowledge of Kafka or similar streaming technologies.
- Experience with data governance and data security in Azure.
- Proven track record of building large-scale data ingestion and ETL pipelines in cloud environments.
- Deep understanding of Azure Data Services.
- Experience with CI/CD pipelines, version control (Git), Jenkins, and agile methodologies.
- Familiarity with data lakes, data warehouses, and modern data architectures.
- Experience with Qlik Replicate (optional).
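A minimal sketch of the kind of Databricks ingestion pipeline this posting describes, assuming PySpark with a Delta Lake target; the storage path, column names, and target table are invented for illustration, not taken from the posting.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession already exists; getOrCreate() reuses it.
spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

# Hypothetical landing path in Azure Data Lake Storage.
raw_path = "abfss://landing@examplelake.dfs.core.windows.net/orders/"

raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(raw_path)
)

# Basic data quality check: drop rows missing the key, stamp ingest time.
# (order_id and order_date are assumed columns in this sketch.)
clean = (
    raw.dropna(subset=["order_id"])
       .withColumn("ingested_at", F.current_timestamp())
)

# Write as a Delta table, partitioned for downstream query pruning;
# assumes a "bronze" schema already exists.
(clean.write
      .format("delta")
      .mode("append")
      .partitionBy("order_date")
      .saveAsTable("bronze.orders"))
```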
Posted 2 weeks ago
8.0 - 12.0 years
0 - 1 Lacs
Hyderabad, Ahmedabad, Bengaluru
Hybrid
Contractual (Project-Based). Notice Period: Immediate to 15 days.
Fill this form: https://forms.office.com/Pages/ResponsePage.aspx?id=hLjynUM4c0C8vhY4bzh6ZJ5WkWrYFoFOu2ZF3Vr0DXVUQlpCTURUVlJNS0c1VUlPNEI3UVlZUFZMMC4u
Resume: shweta.soni@panthsoftech.com
Posted 2 weeks ago
5.0 - 10.0 years
7 - 14 Lacs
Pune
Work from Office
We are looking for a skilled Data Engineer with 5-10 years of experience to join our team in Pune. The ideal candidate will have a strong background in data engineering and excellent problem-solving skills.

Roles and Responsibilities:
- Design, develop, and implement data pipelines and architectures.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Develop and maintain large-scale data systems and databases.
- Ensure data quality, integrity, and security.
- Optimize data processing and analysis workflows.
- Participate in code reviews and contribute to improving overall code quality.

Job Requirements:
- Strong proficiency in programming languages such as Python or Java.
- Experience with big data technologies like Hadoop or Spark.
- Knowledge of database management systems like MySQL or NoSQL.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a team environment.
- Strong communication and interpersonal skills.

Notice period: Immediate joiners preferred.
Posted 2 weeks ago
5.0 - 8.0 years
6 - 10 Lacs
Hyderabad
Work from Office
We are looking for a skilled Senior Data Engineer with 5-8 years of experience to join our team at IDESLABS PRIVATE LIMITED. The ideal candidate will have a strong background in data engineering and excellent problem-solving skills.

Roles and Responsibilities:
- Design, develop, and implement large-scale data pipelines and architectures.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Develop and maintain complex data systems and databases.
- Ensure data quality, integrity, and security.
- Optimize data processing workflows for improved performance and efficiency.
- Troubleshoot and resolve technical issues related to data engineering.

Job Requirements:
- Strong knowledge of data engineering principles and practices.
- Experience with data modeling, database design, and data warehousing.
- Proficiency in programming languages such as Python, Java, or C++.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a team environment.
- Strong communication and interpersonal skills.
Posted 2 weeks ago
6.0 - 10.0 years
8 - 18 Lacs
Chennai
Work from Office
Role Overview:
Are you passionate about building scalable data systems and working on cutting-edge cloud technologies? We're looking for a Senior Data Engineer to join our team and play a key role in transforming raw data into powerful insights.

What You'll Do:
- Design, develop, and optimize scalable ETL/ELT pipelines and data integration workflows (see the sketch after this posting).
- Build and maintain data lakes, warehouses, and real-time streaming pipelines.
- Work with both structured and unstructured data, ensuring clean, usable datasets for analytics & ML.
- Collaborate with analytics, product, and engineering teams to implement robust data models.
- Ensure best practices around data quality, governance, lineage, and security.
- Code in Python, SQL, and PySpark, and work on Databricks.
- Operate in AWS environments using Redshift, Glue, and S3.
- Continuously monitor and optimize pipeline performance.
- Document workflows and contribute to engineering standards.

What We're Looking For:
- Strong hands-on experience in modern data engineering tools & platforms.
- Cloud-first mindset with expertise in the AWS data stack.
- Solid programming skills and a passion for building high-performance data systems.
- Excellent communication & collaboration skills.
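A minimal AWS Glue job skeleton in the spirit of the ETL work listed above; the bucket names, JSON layout, and transform are placeholders I chose for illustration, and the script assumes the Glue job runtime (awsglue libraries are provided by Glue, not pip).

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job bootstrap: resolve arguments and build contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw JSON events from a hypothetical landing bucket.
events = spark.read.json("s3://example-landing/events/")

# Keep clean records only and derive a date column for partitioning.
# (event_id and event_time are assumed fields in this sketch.)
curated = (
    events.dropna(subset=["event_id"])
          .withColumn("event_date", F.to_date("event_time"))
)

# Write curated Parquet back to S3, partitioned by date for Athena/Redshift.
(curated.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-curated/events/"))

job.commit()
```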
Posted 2 weeks ago
12.0 - 18.0 years
50 - 65 Lacs
Bengaluru
Work from Office
Oversee the delivery of data engagements across a portfolio of client accounts, understanding their specific needs, goals, and challenges. Provide mentorship and guidance to the Architects, Project Managers, and technical teams on data engagements.

Required Candidate Profile:
- 12+ years of experience; hands-on in Data Architecture and an expert in Databricks or Azure.
- Currently in a data engineering leadership or management role.
Posted 2 weeks ago
5.0 - 8.0 years
20 - 35 Lacs
Bengaluru
Work from Office
Key Responsibilities:
- Design, develop, and optimize data models within the Celonis Execution Management System (EMS).
- Extract, transform, and load (ETL) data from flat files and UDP into Celonis (a pre-load validation sketch follows this posting).
- Work closely with business stakeholders and data analysts to understand data requirements and ensure accurate representation of business processes.
- Develop and optimize PQL (Process Query Language) queries for process mining.
- Collaborate with group data engineers, architects, and analysts to ensure high-quality data pipelines and scalable solutions.
- Perform data validation, cleansing, and transformation to enhance data quality.
- Monitor and troubleshoot data integration pipelines, ensuring performance and reliability.
- Provide guidance and best practices for data modeling in Celonis.

Qualifications & Skills:
- 5+ years of experience in data engineering, data modeling, or related roles.
- Proficiency in SQL, ETL processes, and database management (e.g., PostgreSQL, Snowflake, BigQuery, or similar).
- Experience working with large-scale datasets and optimizing data models for performance.
- Data management experience that spans the data lifecycle and critical functions (e.g., data profiling, data modeling, data engineering, data consumption products and services).
- Strong problem-solving skills and ability to work in an agile, fast-paced environment.
- Excellent communication skills and demonstrated hands-on experience communicating technical topics with non-technical audiences.
- Ability to effectively collaborate and manage the timely completion of assigned activities while working in a highly virtual team environment.
- Excellent collaboration skills to work with cross-functional teams.
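The posting pairs flat-file ETL with data validation and cleansing. A small pandas sketch of that pre-load step, with the file name, columns, and rules invented for illustration; the actual Celonis push API is not shown.

```python
import pandas as pd

# Hypothetical flat-file extract of a process activity log.
df = pd.read_csv("p2p_activities.csv", parse_dates=["event_time"])

# Validation: every process-mining event needs a case id, activity, timestamp.
required = ["case_id", "activity", "event_time"]
bad = df[df[required].isna().any(axis=1)]
df = df.dropna(subset=required)

# Cleansing: normalize activity labels and drop exact duplicates.
df["activity"] = df["activity"].str.strip().str.title()
df = df.drop_duplicates(subset=["case_id", "activity", "event_time"])

# Report rejects so data quality issues stay visible upstream.
print(f"Loaded {len(df)} events; rejected {len(bad)} incomplete rows")

# Hand-off file for the Celonis data load (upload step not shown here).
df.to_csv("p2p_activities_clean.csv", index=False)
```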
Posted 2 weeks ago
11.0 - 13.0 years
35 - 50 Lacs
Bengaluru
Work from Office
Principal AWS Data Engineer

Location: Bangalore
Experience: 9 - 12 years

Job Summary:
In this key leadership role, you will lead the development of foundational components for a Lakehouse architecture on AWS and drive the migration of existing data processing workflows to the new Lakehouse solution. You will work across the Data Engineering organisation to design and implement scalable data infrastructure and processes using technologies such as Python, PySpark, EMR Serverless, Iceberg, Glue, and Glue Data Catalog (see the sketch after this posting). The main goal of this position is to ensure successful migration and establish robust data quality governance across the new platform, enabling reliable and efficient data processing. Success in this role requires deep technical expertise, exceptional problem-solving skills, and the ability to lead and mentor within an agile team.

Must Have Tech Skills:
- Prior Principal Engineer experience, leading team best practices in design, development, and implementation, mentoring team members, and fostering a culture of continuous learning and innovation.
- Extensive experience in software architecture and solution design, including microservices, distributed systems, and cloud-native architectures.
- Expert in Python and Spark, with a deep focus on ETL data processing and data engineering practices.
- Deep technical knowledge of AWS data services and engineering practices, with demonstrable experience of implementing data pipelines using tools like EMR, AWS Glue, AWS Lambda, AWS Step Functions, API Gateway, and Athena.
- Experience of delivering Lakehouse solutions/architectures.

Nice To Have Tech Skills:
- Knowledge of additional programming languages and development tools to provide flexibility and adaptability across varied data engineering projects.
- A master's degree or relevant certifications (e.g., AWS Certified Solutions Architect, Certified Data Analytics) is advantageous.

Key Accountabilities:
- Lead complex projects autonomously, fostering an inclusive and open culture within development teams.
- Mentor team members and lead technical discussions.
- Provide strategic guidance on best practices in design, development, and implementation.
- Lead the development of high-quality, efficient code and develop necessary tools and applications to address complex business needs.
- Collaborate closely with architects, Product Owners, and Dev team members to decompose solutions into Epics, leading the design and planning of these components.
- Drive the migration of existing data processing workflows to a Lakehouse architecture, leveraging Iceberg capabilities.
- Serve as an internal subject matter expert in software development, advising stakeholders on best practices in design, development, and implementation.

Key Skills:
- Deep technical knowledge of data engineering solutions and practices.
- Expert in AWS services and cloud solutions, particularly as they pertain to data engineering practices.
- Extensive experience in software architecture and solution design.
- Specialized expertise in Python and Spark.
- Ability to provide technical direction, set high standards for code quality, and optimize performance in data-intensive environments.
- Skilled in leveraging automation tools and Continuous Integration/Continuous Deployment (CI/CD) pipelines to streamline development, testing, and deployment.
- Exceptional communicator who can translate complex technical concepts for diverse stakeholders, including engineers, product managers, and senior executives.
- Provides thought leadership within the engineering team, setting high standards for quality, efficiency, and collaboration.
- Experienced in mentoring engineers, guiding them in advanced coding practices, architecture, and strategic problem-solving to enhance team capabilities.

Educational Background:
- Bachelor's degree in Computer Science, Software Engineering, or a related field is essential.

Bonus Skills:
- Financial Services expertise preferred, working with Equity and Fixed Income asset classes and a working knowledge of Indices.
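A minimal sketch of configuring PySpark to write an Iceberg table through the Glue Data Catalog, matching the stack named above. It assumes the Iceberg Spark runtime and AWS bundle are on the classpath; the catalog name, warehouse bucket, namespace, and table are placeholders.

```python
from pyspark.sql import SparkSession

# Configure an Iceberg catalog backed by AWS Glue (names are illustrative).
spark = (
    SparkSession.builder.appName("lakehouse-migration")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.catalog-impl",
            "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.lake.warehouse", "s3://example-warehouse/")
    .getOrCreate()
)

# Read an existing Parquet dataset slated for migration.
trades = spark.read.parquet("s3://example-raw/trades/")

# Iceberg gives atomic commits and schema evolution during the migration;
# assumes the "markets" namespace already exists in the catalog.
trades.writeTo("lake.markets.trades").using("iceberg").createOrReplace()

# Table history is queryable via Iceberg metadata tables.
spark.sql("SELECT * FROM lake.markets.trades.snapshots").show(truncate=False)
```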
Posted 2 weeks ago
5.0 - 10.0 years
7 - 17 Lacs
Bengaluru
Work from Office
About this role: Wells Fargo is seeking a Lead Software Engineer (Lead Data Engineer).

In this role, you will:
- Lead complex technology initiatives, including those that are companywide with broad impact.
- Act as a key participant in developing standards and companywide best practices for engineering complex and large-scale technology solutions for technology engineering disciplines.
- Design, code, test, debug, and document for projects and programs.
- Review and analyze complex, large-scale technology solutions for tactical and strategic business objectives, the enterprise technological environment, and technical challenges that require in-depth evaluation of multiple factors, including intangibles or unprecedented technical factors.
- Make decisions in developing standard and companywide best practices for engineering and technology solutions, requiring understanding of industry best practices and new technologies, influencing and leading the technology team to meet deliverables and drive new initiatives.
- Collaborate and consult with key technical experts, the senior technology team, and external industry groups to resolve complex technical issues and achieve goals.
- Lead projects and teams, or serve as a peer mentor.

Required Qualifications:
- 5+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education.

Desired Qualifications:
- 5+ years of experience in Data Engineering.
- 5+ years of overall experience in software development.
- 5+ years of Python development experience, including 3+ years in the Spark framework (see the sketch after this posting).
- 5+ years of Oracle or SQL Server experience in designing, coding, and delivering database applications.
- Expert knowledge and considerable development experience with at least two or more of the following: Kafka, ETL, Big Data, NoSQL databases, S3 or other object stores.
- Strong understanding of data flow design and how to implement your designs in Python.
- Experience in writing and debugging complex PL/SQL or T-SQL stored procedures.
- Excellent troubleshooting and debugging skills.
- Ability to analyze a feature story, design a robust solution for it, and create specs for complex business rules and calculations.
- Ability to understand business problems and articulate a corresponding solution.
- Excellent verbal, written, and interpersonal communication skills.

Job Expectations:
- Strong knowledge and understanding of the Dremio framework.
- Database query design and optimization.
- Strong experience using the development ecosystem of applications (JIRA, ALM, GitHub, uDeploy (Urban Code Deploy), Jenkins, Artifactory, SVN, etc.).
- Knowledge and understanding of multiple source code version control systems, working with branches, tags, and labels.
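Kafka and Spark both appear in the desired skills above. A hedged sketch of a Spark Structured Streaming read from Kafka landed as Parquet; the broker address, topic, and paths are placeholders, and the job assumes the spark-sql-kafka connector package is available.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("positions-stream").getOrCreate()

# Subscribe to a hypothetical topic (requires the spark-sql-kafka package).
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "positions")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka keys/values arrive as bytes; cast to strings before parsing further.
decoded = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string"),
    "timestamp",
)

# Land micro-batches as Parquet; the checkpoint makes the sink fault-tolerant.
query = (
    decoded.writeStream.format("parquet")
    .option("path", "s3://example-stream-landing/positions/")
    .option("checkpointLocation", "s3://example-checkpoints/positions/")
    .start()
)
query.awaitTermination()
```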
Posted 2 weeks ago
6.0 - 7.0 years
27 - 42 Lacs
Bengaluru
Work from Office
Job Summary:

Experience: 5 - 8 Years
Location: Bangalore

Contribute to building state-of-the-art data platforms in AWS, leveraging Python and Spark. Be part of a dynamic team, building data solutions in a supportive and hybrid work environment. This role is ideal for an experienced data engineer looking to step into a leadership position while remaining hands-on with cutting-edge technologies. You will design, implement, and optimize ETL workflows using Python and Spark, contributing to our robust data Lakehouse architecture on AWS. Success in this role requires technical expertise, strong problem-solving skills, and the ability to collaborate effectively within an agile team.

Must Have Tech Skills:
- Demonstrable experience as a senior data engineer.
- Expert in Python and Spark, with a deep focus on ETL data processing and data engineering practices.
- Experience of implementing data pipelines using tools like EMR, AWS Glue, AWS Lambda, AWS Step Functions, API Gateway, and Athena (an Athena query sketch follows this posting).
- Experience with data services in a Lakehouse architecture.
- Good background and proven experience of data modelling for data platforms.

Nice To Have Tech Skills:
- A master's degree or relevant certifications (e.g., AWS Certified Solutions Architect, Certified Data Analytics) is advantageous.

Key Accountabilities:
- Provides guidance on best practices in design, development, and implementation, ensuring solutions meet business requirements and technical standards.
- Works closely with architects, Product Owners, and Dev team members to decompose solutions into Epics, leading design and planning of these components.
- Drives the migration of existing data processing workflows to the Lakehouse architecture, leveraging Iceberg capabilities.
- Communicates complex technical information clearly, tailoring messages to the appropriate audience to ensure alignment.

Key Skills:
- Deep technical knowledge of data engineering solutions and practices.
- Implementation of data pipelines using AWS data services and Lakehouse capabilities.
- Highly proficient in Python and Spark, and familiar with a variety of development technologies.
- Skilled in decomposing solutions into components (Epics, stories) to streamline development.
- Proficient in creating clear, comprehensive documentation.
- Proficient in quality assurance practices, including code reviews, automated testing, and best practices for data validation.
- Previous Financial Services experience delivering data solutions against financial and market reference data.
- Solid grasp of Data Governance and Data Management concepts, including metadata management, master data management, and data quality.

Educational Background:
- Bachelor's degree in Computer Science, Software Engineering, or a related field is essential.

Bonus Skills:
- A working knowledge of Indices, Index construction, and Asset Management principles.
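Athena is one of the AWS services listed above. A small boto3 sketch of submitting a query and polling for a terminal state; the region, database, query, and results bucket are invented for illustration.

```python
import time

import boto3

athena = boto3.client("athena", region_name="eu-west-1")

# Hypothetical query against a curated Lakehouse table.
started = athena.start_query_execution(
    QueryString="SELECT instrument_id, COUNT(*) FROM prices GROUP BY 1",
    QueryExecutionContext={"Database": "curated"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = started["QueryExecutionId"]

# Poll until Athena reports a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

print(f"Query {query_id} finished with state {state}")
```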
Posted 2 weeks ago
8.0 - 10.0 years
27 - 42 Lacs
Bengaluru
Work from Office
Job Summary:

Experience: 4 - 8 years
Location: Bangalore

The Data Engineer will contribute to building state-of-the-art data Lakehouse platforms in AWS, leveraging Python and Spark. You will be part of a dynamic team, building innovative and scalable data solutions in a supportive and hybrid work environment. You will design, implement, and optimize workflows using Python and Spark, contributing to our robust data Lakehouse architecture on AWS. Success in this role requires previous experience of building data products using AWS services, familiarity with Python and Spark, problem-solving skills, and the ability to collaborate effectively within an agile team.

Must Have Tech Skills:
- Demonstrable previous experience as a data engineer.
- Technical knowledge of data engineering solutions and practices.
- Implementation of data pipelines using tools like EMR, AWS Glue, AWS Lambda, AWS Step Functions, API Gateway, and Athena (a Step Functions sketch follows this posting).
- Proficient in Python and Spark, with a focus on ETL data processing and data engineering practices.

Nice To Have Tech Skills:
- Familiar with data services in a Lakehouse architecture.
- Familiar with technical design practices, allowing for the creation of scalable, reliable data products that meet both technical and business requirements.
- A master's degree or relevant certifications (e.g., AWS Certified Solutions Architect, Certified Data Analytics) is advantageous.

Key Accountabilities:
- Writes high-quality code, ensuring solutions meet business requirements and technical standards.
- Works with architects, Product Owners, and Development leads to decompose solutions into Epics, assisting the design and planning of these components.
- Creates clear, comprehensive technical documentation that supports knowledge sharing and compliance.
- Experience in decomposing solutions into components (Epics, stories) to streamline development.
- Actively contributes to technical discussions, supporting a culture of continuous learning and innovation.

Key Skills:
- Proficient in Python and familiar with a variety of development technologies.
- Previous experience of implementing data pipelines, including use of ETL tools to streamline data ingestion, transformation, and loading.
- Solid understanding of AWS services and cloud solutions, particularly as they pertain to data engineering practices.
- Familiar with AWS solutions including IAM, Step Functions, Glue, Lambda, RDS, SQS, API Gateway, and Athena.
- Proficient in quality assurance practices, including code reviews, automated testing, and best practices for data validation.
- Experienced in Agile development, including sprint planning, reviews, and retrospectives.

Educational Background:
- Bachelor's degree in Computer Science, Software Engineering, or a related field is essential.

Bonus Skills:
- Financial Services expertise preferred, working with Equity and Fixed Income asset classes and a working knowledge of Indices.
- Familiar with implementing and optimizing CI/CD pipelines. Understands the processes that enable rapid, reliable releases, minimizing manual effort and supporting agile development cycles.
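Step Functions appears in the AWS toolset above. A brief boto3 sketch that starts a hypothetical ingestion state machine and checks its status; the ARN, region, and input payload are placeholders.

```python
import json

import boto3

sfn = boto3.client("stepfunctions", region_name="eu-west-1")

# ARN and payload are placeholders for an ingestion workflow.
execution = sfn.start_execution(
    stateMachineArn=(
        "arn:aws:states:eu-west-1:123456789012:stateMachine:ingest-daily"
    ),
    input=json.dumps({"run_date": "2024-01-31", "source": "prices"}),
)

# describe_execution reports RUNNING, SUCCEEDED, FAILED, and so on.
status = sfn.describe_execution(executionArn=execution["executionArn"])
print(status["status"], status["startDate"])
```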
Posted 2 weeks ago
5.0 - 10.0 years
25 - 35 Lacs
Bengaluru
Work from Office
We are one of Australia's leading integrated media companies, with major operations in broadcast television, publishing, and digital content. We own Channel 7, The West Australian newspaper, and 7plus, a streaming platform. Our portfolio includes partnerships with leading global media brands, reaching millions of Australians across various media channels.

Role: Senior Data Engineer

Responsibilities:
- Data Acquisition: Proactively design and implement processes for acquiring data from both internal systems and external data providers. Understand the various data types involved in the data lifecycle, including raw, curated, and lake data, to ensure effective data integration.
- SQL Development: Develop advanced SQL queries within database frameworks to produce semantic data layers that facilitate accurate reporting. This includes optimizing queries for performance and ensuring data quality.
- Linux Command Line: Utilize Linux command-line tools and functions, such as bash shell scripts, cron jobs, grep, and awk, to perform data processing tasks efficiently. This involves automating workflows and managing data pipelines.
- Data Protection: Ensure compliance with data protection and privacy requirements, including regulations like GDPR. This includes implementing best practices for data handling and maintaining the confidentiality of sensitive information.
- Documentation: Create and maintain clear documentation of designs and workflows using tools like Confluence and Visio. This ensures that stakeholders can easily communicate and understand technical specifications.
- API Integration and Data Formats: Collaborate with RESTful APIs and AWS services (such as S3, Glue, and Lambda) to facilitate seamless data integration and automation. Demonstrate proficiency in parsing and working with various data formats, including CSV and Parquet, to support diverse data processing needs (see the sketch after this posting).

Key Requirements:
- 5+ years of experience as a Data Engineer, focusing on ETL development.
- 3+ years of experience in SQL and writing complex queries for data retrieval and manipulation.
- 3+ years of experience in Linux command-line and bash scripting.
- Familiarity with data modelling in analytical databases.
- Strong understanding of backend data structures, with experience collaborating with data engineers (Teradata, Databricks, AWS S3 Parquet/CSV).
- Experience with RESTful APIs and AWS services like S3, Glue, and Lambda.
- Experience using Confluence for tracking documentation.
- Strong communication and collaboration skills, with the ability to interact effectively with stakeholders at all levels.
- Ability to work independently and manage multiple tasks and priorities in a dynamic environment.
- Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.

Good to Have:
- Experience with Spark and Databricks.
- Understanding of data visualization tools, particularly Tableau.
- Knowledge of data clean room techniques and integration methodologies.
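The role above calls out parsing CSV and Parquet and pushing data into S3. A small sketch using pandas (with pyarrow installed for Parquet support) and boto3; the file, column, and bucket names are invented for illustration.

```python
import boto3
import pandas as pd

# Hypothetical raw extract from an external data provider.
df = pd.read_csv("viewer_sessions.csv", parse_dates=["session_start"])

# Light curation: normalize column names and drop obviously bad rows.
df.columns = [c.strip().lower() for c in df.columns]
df = df.dropna(subset=["session_id"])

# Parquet is columnar and compressed: cheaper to store, faster to scan.
df.to_parquet("viewer_sessions.parquet", index=False)

# Push the curated file into the lake's raw zone.
s3 = boto3.client("s3")
s3.upload_file(
    "viewer_sessions.parquet",
    "example-media-lake",
    "raw/viewer_sessions/viewer_sessions.parquet",
)
```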
Posted 2 weeks ago
9.0 - 13.0 years
13 - 18 Lacs
Hyderabad
Work from Office
This role involves the development and application of engineering practice and knowledge in defining, configuring, and deploying industrial digital technologies, including but not limited to PLM and MES, for managing continuity of information across the engineering enterprise (design, industrialization, manufacturing, supply chain) and for managing manufacturing data.

Grade Specific: Focus on Digital Continuity Manufacturing. Fully competent in own area. Acts as a key contributor in a more complex, critical environment. Proactively acts to understand and anticipate client needs. Manages costs and profitability for a work area. Manages own agenda to meet agreed targets. Develops plans for projects in own area. Looks beyond the immediate problem to the wider implications. Acts as a facilitator and coach, and moves teams forward.
Posted 2 weeks ago
3.0 - 5.0 years
9 - 13 Lacs
Bengaluru
Work from Office
At Allstate, great things happen when our people work together to protect families and their belongings from life's uncertainties. And for more than 90 years our innovative drive has kept us a step ahead of our customers' evolving needs. From advocating for seat belts, air bags, and graduated driving laws, to being an industry leader in pricing sophistication, telematics, and, more recently, device and identity protection.

This role is responsible for executing multiple tracks of work to deliver Big Data solutions enabling advanced data science and analytics. This includes working with the team on new Big Data systems for analyzing data; the coding and development of advanced analytics solutions to make/optimize business decisions and processes; and integrating new tools to improve descriptive, predictive, and prescriptive analytics. This role contributes to the structured and unstructured Big Data / Data Science tools of Allstate, from traditional to emerging analytics technologies and methods. The role is responsible for assisting in the selection and development of other team members.

Key Responsibilities:
- Participate in the development of moderately complex and occasionally complex technical solutions using Big Data techniques in data & analytics processes.
- Develop innovative solutions within the team.
- Participate in the development of moderately complex and occasionally complex prototypes and department applications that integrate Big Data and advanced analytics to make business decisions.
- Use new areas of Big Data technologies (ingestion, processing, distribution) and research delivery methods that can solve business problems.
- Understand Big Data related problems and requirements to identify the correct technical approach.
- Take coaching from key team members to ensure efforts within owned tracks of work will meet their needs.
- Execute moderately complex and occasionally complex functional work tracks for the team.
- Partner with Allstate Technology teams on Big Data efforts.
- Partner closely with team members on Big Data solutions for our data science community and analytic users.
- Leverage and use Big Data best practices / lessons learned to develop technical solutions.

Education: 4-year Bachelor's Degree (Preferred)
Experience: 2 or more years of experience (Preferred)
Supervisory Responsibilities: This job does not have supervisory duties.
Education & Experience (in lieu): In lieu of the above education requirements, an equivalent combination of education and experience may be considered.
Primary Skills: Big Data Engineering, Big Data Systems, Big Data Technologies, Data Science, Influencing Others
Shift Time:
Recruiter Info: Annapurna Jhaajhat@allstate.com

About Allstate:
The Allstate Corporation is one of the largest publicly held insurance providers in the United States. Ranked No. 84 in the 2023 Fortune 500 list of the largest United States corporations by total revenue, The Allstate Corporation owns and operates 18 companies in the United States, Canada, Northern Ireland, and India. Allstate India Private Limited, also known as Allstate India, is a subsidiary of The Allstate Corporation. The India talent center was set up in 2012 and operates under the corporation's Good Hands promise. As it innovates operations and technology, Allstate India has evolved beyond its technology functions to become the critical strategic business services arm of the corporation. With offices in Bengaluru and Pune, the company offers expertise to the parent organization's business areas, including technology and innovation, accounting and imaging services, policy administration, transformation solution design and support services, transformation of property liability service design, global operations and integration, and training and transition. Learn more about Allstate India here.
Posted 2 weeks ago
4.0 - 7.0 years
10 - 15 Lacs
Bengaluru
Work from Office
At Allstate, great things happen when our people work together to protect families and their belongings from life's uncertainties. And for more than 90 years our innovative drive has kept us a step ahead of our customers' evolving needs. From advocating for seat belts, air bags, and graduated driving laws, to being an industry leader in pricing sophistication, telematics, and, more recently, device and identity protection.

This role is responsible for driving multiple complex tracks of work to deliver Big Data solutions enabling advanced data science and analytics. This includes working with the team on new Big Data systems for analyzing data; the coding and development of advanced analytics solutions to make/optimize business decisions and processes; integrating new tools to improve descriptive, predictive, and prescriptive analytics; and discovery of new technical challenges that can be solved with existing and emerging Big Data hardware and software solutions. This role contributes to the structured and unstructured Big Data / Data Science tools of Allstate, from traditional to emerging analytics technologies and methods. The role is responsible for assisting in the selection and development of other team members.

Skills:
- Primarily Scala & Spark: strong in functional programming and big data processing using Spark.
- Java: proficient in Java 8+, REST API development, multithreading, and OOP concepts. Good hands-on experience with MongoDB.
- CaaS: experience with Docker, Kubernetes, and deploying containerized apps.
- Tools: Git, CI/CD, JSON, SBT/Maven, Agile methodologies.

Key Responsibilities:
- Uses new areas of Big Data technologies (ingestion, processing, distribution) and research delivery methods that can solve business problems.
- Participates in the development of complex prototypes and department applications that integrate Big Data and advanced analytics to make business decisions.
- Supports innovation; regularly provides new ideas to help people, process, and technology that interact with the analytic ecosystem.
- Participates in the development of complex technical solutions using Big Data techniques in data & analytics processes.
- Influences the team on the effectiveness of Big Data systems to solve their business problems.
- Leverages and uses Big Data best practices / lessons learned to develop technical solutions used for descriptive analytics, ETL, predictive modeling, and prescriptive "real time decisions" analytics.
- Partners closely with team members on Big Data solutions for our data science community and analytic users.
- Partners with Allstate Technology teams on Big Data efforts.

Education: Master's Degree (Preferred)
Experience: 6 or more years of experience (Preferred)
Primary Skills: Apache Spark, Big Data, Big Data Engineering, Big Data Systems, Big Data Technologies, CasaXPS, CI/CD, Data Science, Docker (Software), Git, Influencing Others, Java, MongoDB, Multithreading, RESTful APIs, Scala (Programming Language), ScalaTest, Spring Boot
Shift Time:
Recruiter Info: rkotz@allstate.com

About Allstate:
The Allstate Corporation is one of the largest publicly held insurance providers in the United States. Ranked No. 84 in the 2023 Fortune 500 list of the largest United States corporations by total revenue, The Allstate Corporation owns and operates 18 companies in the United States, Canada, Northern Ireland, and India. Allstate India Private Limited, also known as Allstate India, is a subsidiary of The Allstate Corporation. The India talent center was set up in 2012 and operates under the corporation's Good Hands promise. As it innovates operations and technology, Allstate India has evolved beyond its technology functions to become the critical strategic business services arm of the corporation. With offices in Bengaluru and Pune, the company offers expertise to the parent organization's business areas, including technology and innovation, accounting and imaging services, policy administration, transformation solution design and support services, transformation of property liability service design, global operations and integration, and training and transition. Learn more about Allstate India here.
Posted 2 weeks ago
4.0 - 9.0 years
18 - 22 Lacs
Bengaluru
Work from Office
About us:
As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.

Overview about TII:
At Target, we have a timeless purpose and a proven strategy. And that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.

Team overview:
Target Global Supply Chain and Logistics (GSCL) is evolving at an incredible pace. We are constantly reimagining how we get the right product to the guest even better, faster, and more cost effectively than before. We are becoming more intelligent, automated, and algorithmic in our decision-making, so that no matter how guests shop, in stores or on Target.com, we deliver the convenience and immediate gratification they demand and deserve. Operational Intelligence Analytics, within Target's Supply Chain, is responsible for identifying data and empowering users with insight to improve operational performance. The skills mix is a blend of data engineering, data science, and diverse problem-solving capabilities: jack of several trades, master of none. The team currently uses a wide variety of analytics tools, including SQL, Python, R, and visualization tools, to work with small, sparse datasets as well as big data platforms like Hadoop.

Role overview:
This role will support Data & Analytics for Sales & Operational Planning (S&OP). As a Sr Product Manager, you will work in the product model and partner to develop a comprehensive product strategy, related roadmap, and key business objectives (OKRs) for your respective product. You will need to leverage knowledge of your product, as well as customer feedback, and establish other relevant data points to assess value, develop business cases, and prioritize the direction and desired outcomes for your product. You will lead a product and work in unison with data analysts, engineers, data scientists, and business partners to deliver a product. You will be the voice of the product to key stakeholders to ensure that their needs are met and that the product team is getting the direction and support that it needs to be successful. You will develop and actively understand the market, own a product roadmap, and maintain a backlog outlining the customer themes, epics, and stories, prioritizing the backlog to focus on the highest impact work for your team and stakeholders. You will encourage the open exchange of information and viewpoints, as well as inspire others to achieve challenging goals and high standards of performance while committing to the organization's direction. You will foster a sense of urgency to achieve goals and leverage resources to overcome unexpected obstacles, and partner with product teams across the organization to help them achieve their goals while pursuing and completing yours.

Core responsibilities of this job are described within this job description. Job duties may change at any time due to business needs.

Requirements:
- 4-year college degree (or equivalent experience).
- 8+ years total experience, with 6+ years of Product Management experience or experience within S&OP/Supply Chain.
- Strong communication skills, building trusted relationships with stakeholders, influencing teams across the organization, managing conflicts, and adapting to a fast-moving environment.
- Skilled in Excel, Greenfield, Smartsheet, Confluence, Jira, and Data@Target.
- Experience with analytics and the ability to facilitate communication between business and technical teams.
- Hands-on experience working in an agile environment and driving team operating model improvements (e.g., leading ceremonies, user stories, iterative development, scrum teams, sprints, personas).
- Experience working with global teams and openness to meetings in the evenings, post 8pm IST.
- Proven ability in leveraging problem-solving frameworks.
- Proven ability to lead a body of work with cross-functional partners, specifically Data Engineering, Data Science, Product, and Business Owners.
- Proven ability to manage a large list of priorities and provide transparency to stakeholders on trade-off decisions and expected time of completion.

Useful Links:
- Life at Target: https://india.target.com/
- Benefits: https://india.target.com/life-at-target/workplace/benefits
- Culture: https://india.target.com/life-at-target/belonging
Posted 2 weeks ago
3.0 - 8.0 years
4 - 8 Lacs
Kolkata
Work from Office
Project Role: Software Development Engineer
Project Role Description: Analyze, design, code, and test multiple components of application code across one or more clients. Perform maintenance, enhancements, and/or development work.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As a Software Development Engineer, you will analyze, design, code, and test multiple components of application code across one or more clients. You will perform maintenance, enhancements, and/or development work, contributing to the growth and success of the projects.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute in providing solutions to work-related problems.
- Collaborate with team members to analyze, design, and develop software solutions.
- Participate in code reviews and provide constructive feedback.
- Troubleshoot and debug software applications to ensure optimal performance.
- Research and implement new technologies to enhance existing software.
- Document software specifications, user manuals, and technical documentation.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data engineering concepts and best practices.
- Experience with cloud platforms such as AWS or Azure.
- Hands-on experience with big data technologies like Hadoop or Spark.
- Knowledge of programming languages such as Python, Java, or Scala.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Kolkata office.
- A 15 years full-time education is required.
Posted 2 weeks ago
3.0 - 6.0 years
7 - 11 Lacs
Bengaluru
Work from Office
About the Role:
Grade Level (for internal use): 09

Data Scientist
Location: Bengaluru

The Team:
The Automotive Insights - Supply Chain and Technology and IMR department at S&P Global is dedicated to delivering critical intelligence and comprehensive analysis of the automotive industry's supply chain and technology. Our team provides actionable insights and data-driven solutions that empower clients to navigate the complexities of the automotive ecosystem, from manufacturing and logistics to technological innovations and market dynamics. We collaborate closely with industry stakeholders to ensure our research supports strategic decision-making and drives growth within the automotive sector. Join us to be at the forefront of transforming the automotive landscape with cutting-edge insights and expertise.

Responsibilities:
- Develop and implement advanced analytical models to support S&P Global Mobility's SCT and IMR business strategies (see the sketch after this posting).
- Lead projects leveraging generative AI and intelligent agents to enhance predictive capabilities and automate complex processes.
- Collaborate with cross-functional teams to generate data-driven insights and solutions for the automotive sectors.
- Design and optimize data pipelines for efficient data processing and integration across various platforms.
- Mentor junior data scientists, providing guidance on best practices in data analysis and model development.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Analytics, Data Science, Statistics, or a related field.
- 3 to 6 years of experience in building Machine Learning or Generative AI applications.
- Proven experience with data analysis tools such as Python, R, or SQL.
- Strong understanding of machine learning algorithms and frameworks.
- Expertise in AWS cloud services, including data storage, processing, and machine learning solutions.
- Experience in deploying and managing machine learning models on AWS.
- Familiarity with MLOps practices and tools for model lifecycle management.
- Knowledge of generative AI techniques and applications.
- Understanding of data engineering and data pipeline optimization.

Soft Skills:
- Excellent communication skills for effective collaboration with cross-functional teams.
- Strong problem-solving abilities with a proactive approach to challenges.
- Ability to work independently and manage multiple projects simultaneously.
- Adaptability to rapidly changing environments and emerging technologies.

About S&P Global Mobility:
At S&P Global Mobility, we provide invaluable insights derived from unmatched automotive data, enabling our customers to anticipate change and make decisions with conviction. Our expertise helps them to optimize their businesses, reach the right consumers, and shape the future of mobility. We open the door to automotive innovation, revealing the buying patterns of today and helping customers plan for the emerging technologies of tomorrow. For more information, visit www.spglobal.com/mobility.

What's In It For You:

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People:
Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.
- Health & Wellness: Health care coverage designed for the mind and body.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country, visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global:
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert:
If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training, or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer:
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only:
The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
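The role above centres on building ML models and deploying them on AWS. A minimal scikit-learn sketch of the modelling step only, using a synthetic dataset as a stand-in for supply-chain features; the model choice and any eventual deployment target (e.g., SageMaker) are illustrative assumptions, not details from the posting.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for tabular supply-chain features and a numeric target.
X, y = make_regression(n_samples=2000, n_features=10, noise=0.3, random_state=7)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=7
)

model = GradientBoostingRegressor(random_state=7)
model.fit(X_train, y_train)

# Hold-out evaluation before any deployment step.
mae = mean_absolute_error(y_test, model.predict(X_test))
print(f"Hold-out MAE: {mae:.3f}")
```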
Posted 2 weeks ago
4.0 - 7.0 years
3 - 7 Lacs
Noida
Work from Office
R1 is a leading provider of technology-driven solutions that help hospitals and health systems manage their financial systems and improve patients' experience. We are the one company that combines the deep expertise of a global workforce of revenue cycle professionals with the industry's most advanced technology platform, encompassing sophisticated analytics, AI, intelligent automation, and workflow orchestration. R1 is a place where we think boldly to create opportunities for everyone to innovate and grow. A place where we partner with purpose through transparency and inclusion. We are a global community of engineers, front-line associates, healthcare operators, and RCM experts that work together to go beyond for all those we serve. Because we know that all this adds up to something more, a place where we're all together better.

R1 India is proud to be recognized amongst Top 25 Best Companies to Work For 2024 by the Great Place to Work Institute. This is our second consecutive recognition on this prestigious Best Workplaces list, building on the Top 50 recognition we achieved in 2023. Our focus on employee wellbeing, inclusion, and diversity is demonstrated through prestigious recognitions, with R1 India being ranked amongst Best in Healthcare and Top 100 Best Companies for Women by Avtar & Seramount, and amongst Top 10 Best Workplaces in Health & Wellness. We are committed to transforming the healthcare industry with our innovative revenue cycle management services. Our goal is to make healthcare work better for all by enabling efficiency for healthcare systems, hospitals, and physician practices. With over 30,000 employees globally, we are about 16,000+ strong in India, with presence in Delhi NCR, Hyderabad, Bangalore, and Chennai. Our inclusive culture ensures that every employee feels valued, respected, and appreciated with a robust set of employee benefits and engagement activities.

Description:
We are seeking a Software Engineer with 4-7 years of experience to join our ETL Development team. This role will report to the Manager of Data Engineering and be involved in the planning, design, and implementation of our centralized data warehouse solution for ETL, reporting, and analytics across all applications within the company.

Qualifications:
- Deep knowledge and experience working with SSIS, T-SQL, Azure Databricks, Azure Data Lake, and Azure Data Factory.
- Experienced in writing SQL objects: stored procedures, UDFs, views.
- Experienced in data modeling.
- Experience working with MS-SQL and NoSQL database systems, and with formats such as Apache Parquet.
- Experience in Scala, SparkSQL, and Airflow is preferred (a minimal DAG sketch follows this posting).
- Experience with acquiring and preparing data from primary and secondary disparate data sources.
- Experience working on large-scale data product implementation.
- Experience working with agile methodology preferred.
- Healthcare industry experience preferred.

Responsibilities:
- Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions.
- Work with teams with deep experience in ETL processes and data science domains to understand how to centralize their data.
- Share your passion for experimenting with and learning new technologies.
- Perform thorough data analysis, uncover opportunities, and address business problems.

Working in an evolving healthcare setting, we use our shared expertise to deliver innovative solutions. Our fast-growing team has opportunities to learn and grow through rewarding interactions, collaboration, and the freedom to explore professional interests. Our associates are given valuable opportunities to contribute, to innovate, and to create meaningful work that makes an impact in the communities we serve around the world. We also offer a culture of excellence that drives customer success and improves patient care. We believe in giving back to the community and offer a competitive benefits package. To learn more, visit r1rcm.com. Visit us on Facebook.
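Airflow is listed as preferred above. A minimal DAG sketch of the extract-then-load pattern the posting describes, with the task bodies stubbed out; the DAG id and schedule are placeholders (the `schedule` argument assumes Airflow 2.4+).

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Stub: pull from a source system (e.g., SQL Server via SSIS/ADF).
    print("extracting")


def load():
    # Stub: write curated data into the central warehouse.
    print("loading")


with DAG(
    dag_id="warehouse_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Load runs only after extract succeeds.
    extract_task >> load_task
```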
Posted 2 weeks ago
3.0 - 5.0 years
3 - 7 Lacs
Noida
Work from Office
R1 is a leading provider of technology-driven solutions that help hospitals and health systems to manage their financial systems and improve patients experience. We are the one company that combines the deep expertise of a global workforce of revenue cycle professionals with the industry's most advanced technology platform, encompassing sophisticated analytics, Al, intelligent automation and workflow orchestration. R1 is a place where we think boldly to create opportunities for everyone to innovate and grow. A place where we partner with purpose through transparency and inclusion. We are a global community of engineers, front-line associates, healthcare operators, and RCM experts that work together to go beyond for all those we serve. Because we know that all this adds up to something more, a place where we're all together better. R1 India is proud to be recognized amongst Top 25 Best Companies to Work For 2024, by the Great Place to Work Institute. This is our second consecutive recognition on this prestigious Best Workplaces list, building on the Top 50 recognition we achieved in 2023. Our focus on employee wellbeing and inclusion and diversity is demonstrated through prestigious recognitions with R1 India being ranked amongst Best in Healthcare, Top 100 Best Companies for Women by Avtar & Seramount, and amongst Top 10 Best Workplaces in Health & Wellness. We are committed to transform the healthcare industry with our innovative revenue cycle management services. Our goal is to make healthcare work better for all by enabling efficiency for healthcare systems, hospitals, and physician practices. With over 30,000 employees globally, we are about 16,000+ strong in India with presence in Delhi NCR, Hyderabad, Bangalore, and Chennai. Our inclusive culture ensures that every employee feels valued, respected, and appreciated with a robust set of employee benefits and engagement activities. R1 RCM Inc. is a leading provider of technology-enabled revenue cycle management services which transform and solve challenges across health systems, hospitals and physician practices. Headquartered in Chicago, R1 is a publicly -traded organization with employees throughout the US and multiple INDIA locations.Our mission is to be the one trusted partner to manage revenue, so providers and patients can focus on what matters most. Our priority is to always do what is best for our clients, patients and each other. With our proven and scalable operating model, we complement a healthcare organizations infrastructure, quickly driving sustainable improvements to net patient revenue and cash flows while reducing operating costs and enhancing the patient experience. Description: We are seeking a Data Engineer with 3-5 year of experience to join our Data Platform team. This role will report to the Manager of data engineering and be involved in the planning, design, and implementation of our centralized data warehouse solution for ETL, reporting and analytics across all applications within the company. Qualifications: Deep knowledge and experience working with Python/Scala and Spark Experienced in Azure data factory, Azure Data bricks, Azure Blob Storage, Azure Data Lake, Delta lake. Experience working on Unity Catalog, Apache Parquet Experience with Azure cloud environments Experience with acquiring and preparing data from primary and secondary disparate data sources Experience working on large scale data product implementation, responsible for technical delivery. 
Experience working with agile methodology preferred.
Healthcare industry experience preferred.
Responsibilities:
Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions.
Work with other teams with deep experience in ETL processes, distributed microservices, and data science domains to understand how to centralize their data.
Share your passion for staying current with, experimenting with, and learning new technologies.
Perform thorough data analysis, uncover opportunities, and address business problems.
Working in an evolving healthcare setting, we use our shared expertise to deliver innovative solutions. Our fast-growing team has opportunities to learn and grow through rewarding interactions, collaboration, and the freedom to explore professional interests. Our associates are given valuable opportunities to contribute, to innovate, and to create meaningful work that makes an impact in the communities we serve around the world. We also offer a culture of excellence that drives customer success and improves patient care. We believe in giving back to the community and offer a competitive benefits package. To learn more, visit r1rcm.com.
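To make the stack above concrete, here is a minimal, illustrative PySpark sketch of the kind of Parquet-to-Delta-Lake ingestion step this posting describes. All paths, column names, and table names are invented placeholders, not R1 code.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("parquet-to-delta-demo").getOrCreate()

# Read raw Parquet files landed in the lake (path is a made-up placeholder).
raw = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/events/")

# Minimal data-quality gate: drop rows missing the key, stamp the load date.
cleaned = (
    raw.dropna(subset=["event_id"])  # "event_id" is an assumed column name
       .withColumn("ingest_date", F.current_date())
)

# Append into a Delta Lake table registered in the metastore
# (on Databricks this could equally be a Unity Catalog table).
(cleaned.write
    .format("delta")
    .mode("append")
    .partitionBy("ingest_date")
    .saveAsTable("analytics.events_bronze"))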
Posted 2 weeks ago
10.0 - 15.0 years
7 - 11 Lacs
Noida
Work from Office
R1 RCM India is proud to be recognized amongst India's Top 50 Best Companies to Work For™ 2023 by the Great Place To Work Institute. We are committed to transforming the healthcare industry with our innovative revenue cycle management services. Our goal is to make healthcare simpler and enable efficiency for healthcare systems, hospitals, and physician practices. With over 30,000 employees globally, we are about 14,000 strong in India, with offices in Delhi NCR, Hyderabad, Bangalore, and Chennai. Our inclusive culture ensures that every employee feels valued, respected, and appreciated, with a robust set of employee benefits and engagement activities.
Description: We are seeking a highly skilled and motivated Data Cloud Architect to join our Product and Technology team. As a Data Cloud Architect, you will play a key role in designing and implementing our cloud-based data architecture, ensuring scalability, reliability, and optimal performance for our data-intensive applications. Your expertise in cloud technologies, data architecture, and data engineering will drive the success of our data initiatives.
Responsibilities:
Collaborate with cross-functional teams, including data engineers, data leads, product owners, and stakeholders, to understand business requirements and data needs.
Design and implement end-to-end data solutions on cloud platforms, ensuring high availability, scalability, and security.
Architect delta lakes, data lakes, data warehouses, and streaming data solutions in the cloud.
Evaluate and select appropriate cloud services and technologies to support data storage, processing, and analytics.
Develop and maintain cloud-based data architecture patterns and best practices.
Design and optimize data pipelines, ETL processes, and data integration workflows.
Implement data security and privacy measures in compliance with industry standards.
Collaborate with DevOps teams to deploy and manage data-related infrastructure on the cloud.
Stay up to date with emerging cloud technologies and trends to ensure the organization remains at the forefront of data capabilities.
Provide technical leadership and mentorship to data engineering teams.
Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
10 years of experience as a Data Architect, Cloud Architect, or in a similar role.
Expertise in cloud platforms such as Azure.
Strong understanding of data architecture concepts and best practices.
Proficiency in data modeling, ETL processes, and data integration techniques.
Experience with big data technologies and frameworks (e.g., Hadoop, Spark).
Knowledge of containerization technologies (e.g., Docker, Kubernetes).
Familiarity with data warehousing solutions (e.g., Redshift, Snowflake).
Strong knowledge of security practices for data in the cloud.
Excellent problem-solving and troubleshooting skills.
Effective communication and collaboration skills.
Ability to lead and mentor technical teams.
Additional Preferred Qualifications:
Bachelor's or Master's degree in Data Science, Computer Science, or a related field.
Relevant cloud certifications (e.g., Azure Solutions Architect) and data-related certifications.
Experience with real-time data streaming technologies (e.g., Apache Kafka).
Knowledge of machine learning and AI concepts in relation to cloud-based data solutions.
Working in an evolving healthcare setting, we use our shared expertise to deliver innovative solutions.
Our fast-growing team has opportunities to learn and grow through rewarding interactions, collaboration, and the freedom to explore professional interests. Our associates are given valuable opportunities to contribute, to innovate, and to create meaningful work that makes an impact in the communities we serve around the world. We also offer a culture of excellence that drives customer success and improves patient care. We believe in giving back to the community and offer a competitive benefits package. To learn more, visit r1rcm.com.
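As a hedged illustration of the streaming solutions this role architects, below is a minimal Spark Structured Streaming sketch that reads a Kafka topic into a Delta path. The broker, topic, and paths are invented placeholders, and running it would additionally require the spark-sql-kafka connector package on the cluster.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

# Subscribe to a Kafka topic; broker and topic names are placeholders.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "clinical-events")
    .load()
)

# Kafka delivers key/value as bytes; cast the payload to a string.
decoded = events.selectExpr("CAST(value AS STRING) AS payload")

# Stream into a Delta path, with checkpointing so the job can recover.
query = (
    decoded.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/clinical-events")
    .start("/tmp/delta/clinical_events")  # placeholder output path
)
query.awaitTermination()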
Posted 2 weeks ago
4.0 - 7.0 years
3 - 7 Lacs
Noida
Work from Office
Role Objective: R1 RCM Inc. is a leading provider of technology-enabled revenue cycle management services that transform and solve challenges across health systems, hospitals, and physician practices. Headquartered in Chicago, R1 is a publicly traded organization with employees throughout the US and multiple India locations. Our mission is to be the one trusted partner to manage revenue, so providers and patients can focus on what matters most. Our priority is to always do what is best for our clients, patients, and each other. With our proven and scalable operating model, we complement a healthcare organization's infrastructure, quickly driving sustainable improvements to net patient revenue and cash flows while reducing operating costs and enhancing the patient experience.
Description: We are seeking a Software Data Engineer with 4-7 years of experience to join our Data Platform team. This role will report to the Manager of Data Engineering and be involved in the planning, design, and implementation of our centralized data warehouse solution for ETL, reporting, and analytics across all applications within the company.
Qualifications:
Deep knowledge of and experience working with Python/Scala and Apache Spark.
Experienced in Azure Data Factory, Azure Databricks, Azure Blob Storage, Azure Data Lake, and Delta Lake.
Experienced with the orchestration tool Apache Airflow.
Experience working with SQL and NoSQL database systems such as MongoDB, and with data formats such as Apache Parquet.
Experience with Azure cloud environments.
Experience acquiring and preparing data from primary and secondary disparate data sources.
Experience working on large-scale data product implementations.
Experience working with agile methodology preferred.
Healthcare industry experience preferred.
Responsibilities:
Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions.
Work with other teams with deep experience in ETL processes and data science domains to understand how to centralize their data.
Share your passion for staying current with, experimenting with, and learning new technologies.
Perform thorough data analysis, uncover opportunities, and address business problems.
Working in an evolving healthcare setting, we use our shared expertise to deliver innovative solutions. Our fast-growing team has opportunities to learn and grow through rewarding interactions, collaboration, and the freedom to explore professional interests. Our associates are given valuable opportunities to contribute, to innovate, and to create meaningful work that makes an impact in the communities we serve around the world. We also offer a culture of excellence that drives customer success and improves patient care. We believe in giving back to the community and offer a competitive benefits package. To learn more, visit r1rcm.com.
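For orientation, here is a minimal Apache Airflow DAG sketch of the kind of orchestration this posting mentions. The DAG id and tasks are hypothetical placeholders, and the schedule argument assumes Airflow 2.4 or later.

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("extracting from source")      # stand-in for a real extract step

def load():
    print("loading into the warehouse")  # stand-in for a real load step

with DAG(
    dag_id="daily_warehouse_load",       # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                   # Airflow 2.4+ argument name
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task            # extract must finish before load

The >> operator simply declares the dependency: the load task is scheduled only after the extract task succeeds.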
Posted 2 weeks ago
4.0 - 6.0 years
3 - 7 Lacs
Noida
Work from Office
R1 RCM India is proud to be recognized amongst India's Top 50 Best Companies to Work For™ 2023 by the Great Place To Work Institute. We are committed to transforming the healthcare industry with our innovative revenue cycle management services. Our goal is to make healthcare simpler and enable efficiency for healthcare systems, hospitals, and physician practices. With over 30,000 employees globally, we are about 14,000 strong in India, with offices in Delhi NCR, Hyderabad, Bangalore, and Chennai. Our inclusive culture ensures that every employee feels valued, respected, and appreciated, with a robust set of employee benefits and engagement activities.
We are seeking a Data Engineer with 4-6 years of experience to join our Data Platform team. This role will report to the Manager of Data Engineering and be involved in the planning, design, and implementation of our centralized data warehouse solution for ETL, reporting, and analytics across all applications within the company.
Qualifications:
Deep knowledge of and experience working with Scala and Spark.
Experienced in Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure Data Lake.
Experience with full-stack development in .NET & Angular.
Experience working with SQL and NoSQL database systems such as MongoDB and Couchbase.
Experience in distributed system architecture design.
Experience with cloud environments (Azure preferred).
Experience acquiring and preparing data from primary and secondary disparate data sources (real-time preferred).
Experience working on large-scale data product implementations, responsible for technical delivery and for mentoring and managing peer engineers.
Experience working with Databricks is preferred.
Experience working with agile methodology is preferred.
Healthcare industry experience is preferred.
Job Responsibilities:
Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions.
Work with other teams with deep experience in ETL processes, distributed microservices, and data science domains to understand how to centralize their data.
Share your passion for staying current with, experimenting with, and learning new technologies.
Perform thorough data analysis, uncover opportunities, and address business problems.
Working in an evolving healthcare setting, we use our shared expertise to deliver innovative solutions. Our fast-growing team has opportunities to learn and grow through rewarding interactions, collaboration, and the freedom to explore professional interests. Our associates are given valuable opportunities to contribute, to innovate, and to create meaningful work that makes an impact in the communities we serve around the world. We also offer a culture of excellence that drives customer success and improves patient care. We believe in giving back to the community and offer a competitive benefits package. To learn more, visit r1rcm.com.
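As a small illustration of the NoSQL experience listed above, the following sketch uses the pymongo client to write and read a document. The connection string, database, and collection names are placeholders only.

from pymongo import MongoClient

# Connection string, database, and collection names are placeholders.
client = MongoClient("mongodb://localhost:27017")
db = client["analytics"]

# Insert one document, then read it back with a simple filter.
db.encounters.insert_one({"patient_id": 1, "status": "open"})
doc = db.encounters.find_one({"patient_id": 1})
print(doc)

client.close()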
Posted 2 weeks ago
9.0 - 12.0 years
11 - 14 Lacs
Noida
Work from Office
R1 RCM India is proud to be recognized amongst India's Top 50 Best Companies to Work For™ 2023 by the Great Place To Work Institute. We are committed to transforming the healthcare industry with our innovative revenue cycle management services. Our goal is to make healthcare simpler and enable efficiency for healthcare systems, hospitals, and physician practices. With over 30,000 employees globally, we are about 14,000 strong in India, with offices in Delhi NCR, Hyderabad, Bangalore, and Chennai. Our inclusive culture ensures that every employee feels valued, respected, and appreciated, with a robust set of employee benefits and engagement activities.
JD: Lead Data Engineer
R1 RCM Inc. is a leading provider of technology-enabled revenue cycle management services that transform and solve challenges across health systems, hospitals, and physician practices. Headquartered in Chicago, R1 is a publicly traded organization with employees throughout the US and multiple India locations. Our mission is to be the one trusted partner to manage revenue, so providers and patients can focus on what matters most. Our priority is to always do what is best for our clients, patients, and each other. With our proven and scalable operating model, we complement a healthcare organization's infrastructure, quickly driving sustainable improvements to net patient revenue and cash flows while reducing operating costs and enhancing the patient experience.
Description: We are seeking a Lead Data Engineer with 9-12 years of experience to join our Data Platform team. This role will report to the Manager of Data Engineering and be involved in the planning, design, and implementation of our centralized data warehouse solution for ETL, reporting, and analytics across all applications within the company.
Qualifications:
Deep knowledge of and experience working with Python/Scala and Spark.
Experienced in Azure Data Factory, Azure Databricks, Azure Data Lake, Blob Storage, Delta Lake, and Airflow.
Experience working with SQL and NoSQL database systems such as MongoDB, and with data formats such as Apache Parquet.
Experience in distributed system architecture design.
Experience with Azure cloud environments.
Experience acquiring and preparing data from primary and secondary disparate data sources.
Experience working on large-scale data product implementations, responsible for technical delivery and for mentoring peer engineers.
Knowledge of Azure DevOps and version control systems, ideally Git.
Experience working with agile methodology preferred.
Excellent understanding of OOP and design patterns.
Healthcare industry experience preferred.
Excellent communication and presentation skills.
Responsibilities:
Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions.
Work with other teams with deep experience in ETL processes, distributed microservices, and data science domains to understand how to centralize their data.
Share your passion for staying current with, experimenting with, and learning new technologies.
Perform thorough data analysis, uncover opportunities, and address business problems.
Working in an evolving healthcare setting, we use our shared expertise to deliver innovative solutions. Our fast-growing team has opportunities to learn and grow through rewarding interactions, collaboration, and the freedom to explore professional interests. Our associates are given valuable opportunities to contribute, to innovate, and to create meaningful work that makes an impact in the communities we serve around the world.
We also offer a culture of excellence that drives customer success and improves patient care. We believe in giving back to the community and offer a competitive benefits package. To learn more, visit r1rcm.com.
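Since this posting asks for a strong grasp of OOP and design patterns, here is a brief, hypothetical sketch of a template-method pipeline abstraction of the sort a lead engineer might establish for a team. All class and field names are invented for illustration.

from abc import ABC, abstractmethod

class Pipeline(ABC):
    """Template method: every pipeline extracts, transforms, then loads."""

    def run(self) -> None:
        rows = self.extract()
        rows = self.transform(rows)
        self.load(rows)

    @abstractmethod
    def extract(self) -> list[dict]: ...

    @abstractmethod
    def transform(self, rows: list[dict]) -> list[dict]: ...

    @abstractmethod
    def load(self, rows: list[dict]) -> None: ...

class ChargesPipeline(Pipeline):
    """Hypothetical concrete pipeline, for demonstration only."""

    def extract(self) -> list[dict]:
        return [{"id": 1, "amount": "42.50"}]   # stand-in for a source read

    def transform(self, rows: list[dict]) -> list[dict]:
        return [{**r, "amount": float(r["amount"])} for r in rows]

    def load(self, rows: list[dict]) -> None:
        print(f"loaded {len(rows)} rows")       # stand-in for a warehouse write

ChargesPipeline().run()

The base class fixes the extract-transform-load sequence once, so each new source only supplies the three steps; this is the kind of pattern-driven consistency the role would enforce across peer engineers.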
Posted 2 weeks ago
3.0 - 7.0 years
5 - 9 Lacs
Hyderabad, Pune
Work from Office
Snowflake Data Engineer
For this role, we're looking for a candidate with a strong background in data technologies such as SQL Server, Snowflake, and similar platforms. In addition, they should bring experience in at least one other programming language, with proficiency in Python being a key requirement. The ideal candidate should also have:
Exposure to DevOps pipelines within a data engineering context.
At least a high-level understanding of AWS services and how they fit into modern data architectures.
A proactive mindset: someone who is motivated to take initiative and contribute beyond assigned tasks.
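To illustrate the Python-plus-Snowflake combination this role calls for, below is a minimal connectivity sketch using the snowflake-connector-python package. The account, credentials, and warehouse/database names are placeholders only.

import snowflake.connector

# Account, credentials, and object names below are placeholders only.
conn = snowflake.connector.connect(
    account="xy12345",
    user="etl_user",
    password="***",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    cur.execute("SELECT CURRENT_VERSION()")  # trivial connectivity check
    print(cur.fetchone()[0])
finally:
    conn.close()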
Posted 2 weeks ago