
337 EMR Jobs - Page 2

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 - 9.0 years

3 - 7 Lacs

Hyderabad

Work from Office

Source: Naukri

Minimum 6 years of hands-on experience in data engineering or big data development roles. Strong programming skills in Python and experience with Apache Spark (PySpark preferred). Proficient in writing and optimizing complex SQL queries. Hands-on experience with Apache Airflow for orchestration of data workflows. Deep understanding and practical experience with AWS services: Data Storage & Processing (S3, Glue, EMR, Athena); Compute & Execution (Lambda, Step Functions); Databases (RDS, DynamoDB); Monitoring (CloudWatch). Experience with distributed data processing, parallel computing, and performance tuning. Strong analytical and problem-solving skills. Familiarity with CI/CD pipelines and DevOps practices is a plus.
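
For context on the stack this posting describes, a PySpark job on EMR that reads from S3, aggregates, and writes Athena-friendly Parquet typically looks like the following minimal sketch (bucket, path, and column names are hypothetical placeholders, not from the posting):

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Hypothetical EMR job: aggregate daily order totals from S3 and write Parquet.
    spark = SparkSession.builder.appName("daily-order-totals").getOrCreate()

    orders = spark.read.parquet("s3://example-bucket/raw/orders/")  # placeholder path

    daily_totals = (
        orders
        .withColumn("order_date", F.to_date("order_ts"))
        .groupBy("order_date")
        .agg(F.sum("amount").alias("total_amount"),
             F.countDistinct("customer_id").alias("unique_customers"))
    )

    # Partitioned Parquet output is a common pattern for Athena-queryable data.
    daily_totals.write.mode("overwrite").partitionBy("order_date") \
        .parquet("s3://example-bucket/curated/daily_order_totals/")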

Posted 1 day ago

Apply

8.0 - 13.0 years

5 - 10 Lacs

Pune

Work from Office

Source: Naukri

Data Engineer Position Summary: The Data Engineer is responsible for building and maintaining data pipelines, ensuring the smooth operation of data systems, and optimizing workflows to meet business requirements. This role will support data integration and processing for various applications.

Minimum Qualifications: 6 years overall IT experience with a minimum of 4 years of work experience in the tech skills below. Tech Skills: Proficient in Python scripting and PySpark for data processing tasks. Strong SQL capabilities with hands-on experience managing big data using ETL tools like Informatica. Experience with the AWS cloud platform and its data services, including S3, Redshift, Lambda, EMR, Airflow, Postgres, SNS, and EventBridge. Skilled in Bash shell scripting. Understanding of data lakehouse architecture, particularly with the Iceberg format, is a plus. Preferred: Experience with Kafka and MuleSoft API. Understanding of healthcare data systems is a plus. Experience in Agile methodologies. Strong analytical and problem-solving skills. Effective communication and teamwork abilities.

Responsibilities: Develop and maintain data pipelines and ETL processes to manage large-scale datasets. Collaborate to design and test data architectures that align with business needs. Implement and optimize data models for efficient querying and reporting. Assist in the development and maintenance of data quality checks and monitoring processes. Support the creation of data solutions that enable analytical capabilities. Contribute to aligning data architecture with overall organizational solutions.
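
As one illustration of the S3/Lambda/SNS integration listed above, here is a minimal sketch of an S3-triggered Lambda handler that publishes a notification to SNS; the topic ARN and bucket layout are hypothetical, while the event fields follow the standard S3 event shape:

    import json
    import boto3

    sns = boto3.client("sns")
    TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:pipeline-events"  # placeholder

    def handler(event, context):
        # Each record describes an object newly created in S3.
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            sns.publish(
                TopicArn=TOPIC_ARN,
                Subject="New file landed",
                Message=json.dumps({"bucket": bucket, "key": key}),
            )
        return {"status": "ok"}

The same notification could instead be routed through EventBridge; SNS is shown here only because both appear in the posting's stack.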

Posted 1 day ago

Apply

8.0 - 13.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Source: Naukri

Experience: 8 years of experience in data engineering, specifically in cloud environments like AWS. Proficiency in PySpark for distributed data processing and transformation. Solid experience with AWS Glue for ETL jobs and managing data workflows. Hands-on experience with AWS Data Pipeline (DPL) for workflow orchestration. Strong experience with AWS services such as S3, Lambda, Redshift, RDS, and EC2.

Technical Skills: Proficiency in Python and PySpark for data processing and transformation tasks. Deep understanding of ETL concepts and best practices. Familiarity with AWS Glue (ETL jobs, Data Catalog, and Crawlers). Experience building and maintaining data pipelines with AWS Data Pipeline or similar orchestration tools. Familiarity with AWS S3 for data storage and management, including file formats (CSV, Parquet, Avro). Strong knowledge of SQL for querying and manipulating relational and semi-structured data. Experience with Data Warehousing and Big Data technologies, specifically within AWS.

Additional Skills: Experience with AWS Lambda for serverless data processing and orchestration. Understanding of AWS Redshift for data warehousing and analytics. Familiarity with Data Lakes, Amazon EMR, and Kinesis for streaming data processing. Knowledge of data governance practices, including data lineage and auditing. Familiarity with CI/CD pipelines and Git for version control. Experience with Docker and containerization for building and deploying applications.

Design and Build Data Pipelines: Design, implement, and optimize data pipelines on AWS using PySpark, AWS Glue, and AWS Data Pipeline to automate data integration, transformation, and storage processes. ETL Development: Develop and maintain Extract, Transform, and Load (ETL) processes using AWS Glue and PySpark to efficiently process large datasets. Data Workflow Automation: Build and manage automated data workflows using AWS Data Pipeline, ensuring seamless scheduling, monitoring, and management of data jobs. Data Integration: Work with different AWS data storage services (e.g., S3, Redshift, RDS) to ensure smooth integration and movement of data across platforms. Optimization and Scaling: Optimize and scale data pipelines for high performance and cost efficiency, utilizing AWS services like Lambda, S3, and EC2.
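
A minimal AWS Glue ETL job of the kind this posting centres on usually follows the skeleton below: read from a Data Catalog table (for example, one registered by a crawler), remap fields, and write Parquet to S3. The database, table, and output path are hypothetical placeholders:

    import sys

    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.transforms import ApplyMapping
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read from a table registered in the Glue Data Catalog (e.g., by a crawler).
    source = glue_context.create_dynamic_frame.from_catalog(
        database="example_db", table_name="raw_events")

    # Rename/cast fields before writing out as partition-friendly Parquet.
    mapped = ApplyMapping.apply(
        frame=source,
        mappings=[("event_id", "string", "event_id", "string"),
                  ("ts", "string", "event_ts", "timestamp")])

    glue_context.write_dynamic_frame.from_options(
        frame=mapped,
        connection_type="s3",
        connection_options={"path": "s3://example-bucket/curated/events/"},
        format="parquet")

    job.commit()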

Posted 1 day ago

Apply

10.0 - 15.0 years

30 - 35 Lacs

Noida

Work from Office

Source: Naukri

We at Innovaccer are looking for a Director - Clinical Informatics. You need to have structured problem-solving skills, strong analytical abilities, willingness to take initiatives and drive them, excellent verbal and written communication skills, and high levels of empathy towards internal and external stakeholders, among other things. The technology that once promised to simplify patient care has brought more issues than anyone ever anticipated. At Innovaccer, we defeat this beast by making full use of all the data healthcare has worked so hard to collect, and replacing long-standing problems with ideal solutions. Data is our bread and butter for innovation. We are looking for a leader who will own and manage the clinical ontologies at Innovaccer. He/she will also help Innovaccer build clinical workflows and care protocols to facilitate clinical decision support at the point of care.

A Day in the Life: Built a new product development pipeline aligning the company's portfolio with market insights across personas using clinical decision support in EHRs. Owned market research and built business cases to enable prioritization and build/buy/partner assessment by executive-level innovation governance. Worked successfully in a matrixed environment across business units to understand the big picture, build cross-functional relationships, and leverage content assets to solve customer (internal and external) problems. Worked on a pioneering FHIR-based, EHR-integrated, patient-context-specific, evidence-based guideline solution to reduce care variability. Solid understanding of clinical informatics standards (FHIR, CCDA, CDS Hooks, etc.) and terminologies (RxNorm, LOINC, SNOMED, etc.). Built a successful Clinical Quality Improvement program for assessing clinical credibility of Nuance's NLP engines for clinical documentation quality. Created buy-in from executive leadership and cross-functional alignment among stakeholders from product, engineering, and the implementation/customer success teams. Owned the creation of analytics and quality metrics for provider and payor benchmarking and its monetization, for the speech recognition and revenue cycle products. Worked with the CMO, CMIOs, clinical documentation specialists, and the Product-Engineering team to productize them. Lead development of clinical content for clinical decision support (CDS) to improve clinical documentation. Collaborate with clinical informaticists, data scientists, clinical SMEs, product, and engineering teams to build CDS solutions with a deep understanding of the EHR workflow. Manage and define clinical ontologies and implement industry best practices for building value sets. The role involves client interaction during US hours, so you should be comfortable working in that time zone.

What You Need: Advanced healthcare degree (MD, PharmD, RN, or Master's in Health Informatics) with 10+ years of clinical informatics experience and 5+ years in managerial/leadership roles. Deep technical expertise in clinical informatics standards (FHIR, HL7, CCDA, CDS Hooks) and terminologies (SNOMED CT, LOINC, RxNorm) with hands-on EHR experience. Proven track record of implementing clinical decision support systems, EHR integrations, and healthcare analytics platforms in complex healthcare environments. Strong clinical knowledge with understanding of care delivery processes, evidence-based medicine, clinical workflows, and regulatory requirements (HIPAA, CMS programs).

Posted 1 day ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Pune

Work from Office

Source: Naukri

The candidate must possess knowledge relevant to the functional area, act as a subject matter expert in providing advice in the area of expertise, and also focus on continuous improvement for maximum efficiency. It is vital to focus on a high standard of delivery excellence, provide top-notch service quality, and develop successful long-term business partnerships with internal/external customers by identifying and fulfilling customer needs. He/she should be able to break down complex problems into logical and manageable parts in a systematic way, generate and compare multiple options, and set priorities to resolve problems. The ideal candidate must be proactive and go beyond expectations to achieve job results and create new opportunities. He/she must positively influence the team, motivate high performance, promote a friendly climate, give constructive feedback, provide development opportunities, and manage career aspirations of direct reports. Communication skills are key here, to explain organizational objectives, assignments, and the big picture to the team, and to articulate team vision and clear objectives.

Process Manager Roles and Responsibilities: Designing and implementing scalable, reliable, and maintainable data architectures on AWS. Developing data pipelines to extract, transform, and load (ETL) data from various sources into AWS environments. Creating and optimizing data models and schemas for performance and scalability using AWS services like Redshift, Glue, Athena, etc. Integrating AWS data solutions with existing systems and third-party services. Monitoring and optimizing the performance of AWS data solutions, ensuring efficient query execution and data retrieval. Implementing data security and encryption best practices in AWS environments. Documenting data engineering processes, maintaining data pipeline infrastructure, and providing support as needed. Working closely with cross-functional teams including data scientists, analysts, and stakeholders to understand data requirements and deliver solutions.

Technical and Functional Skills: Typically, a bachelor's degree in Computer Science, Engineering, or a related field is required, along with 5+ years of experience in data engineering and AWS cloud environments. Strong experience with AWS data services such as S3, EC2, Redshift, Glue, Athena, EMR, etc. Proficiency in programming languages commonly used in data engineering such as Python, SQL, Scala, or Java. Experience in designing, implementing, and optimizing data warehouse solutions on Snowflake/Amazon Redshift. Familiarity with ETL tools and frameworks (e.g., Apache Airflow, AWS Glue) for building and managing data pipelines. Knowledge of database management systems (e.g., PostgreSQL, MySQL, Amazon Redshift) and data lake concepts. Understanding of big data technologies such as Hadoop, Spark, Kafka, etc., and their integration with AWS. Proficiency in version control tools like Git for managing code and infrastructure as code (e.g., CloudFormation, Terraform). Ability to analyze complex technical problems and propose effective solutions. Strong verbal and written communication skills for documenting processes and collaborating with team members and stakeholders.
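
To ground the Athena/S3 items above, a small boto3 sketch for submitting an Athena query and polling for completion might look like this; the database name, query, and output location are assumed placeholders:

    import time

    import boto3

    athena = boto3.client("athena")

    # Submit a query; Athena writes result files to the given S3 location.
    response = athena.start_query_execution(
        QueryString="SELECT order_date, SUM(amount) FROM orders GROUP BY order_date",
        QueryExecutionContext={"Database": "example_db"},
        ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
    )
    query_id = response["QueryExecutionId"]

    # Poll for completion (simplified; production code would back off and time out).
    while True:
        status = athena.get_query_execution(QueryExecutionId=query_id)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)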

Posted 1 day ago

Apply


5.0 - 10.0 years

7 - 12 Lacs

Pune

Work from Office

Source: Naukri

Process Manager - AWS Data Engineer. Mumbai/Pune | Full-time (FT) | Technology Services. Shift Timings - EMEA (1pm-9pm) | Management Level - PM | Travel - NA. The ideal candidate must possess in-depth functional knowledge of the process area and apply it to operational scenarios to provide effective solutions. The role requires one to identify discrepancies and propose optimal solutions by using a logical, systematic, and sequential methodology. It is vital to be open-minded towards inputs and views from team members and to effectively lead, control, and motivate groups towards company objectives. Additionally, the candidate must be self-directed, proactive, and seize every opportunity to meet internal and external customer needs and achieve customer satisfaction by effectively auditing processes, implementing best practices and process improvements, and utilizing the frameworks and tools available. Goals and thoughts must be clearly and concisely articulated and conveyed, verbally and in writing, to clients, colleagues, subordinates, and supervisors.

Process Manager Roles and Responsibilities: Understand clients' requirements and provide effective and efficient solutions in AWS using Snowflake. Assemble large, complex sets of data that meet non-functional and functional business requirements. Use Snowflake/Redshift architecture and design to create data pipelines and consolidate data in the data lake and data warehouse. Demonstrated strength and experience in data modeling, ETL development, and data warehousing concepts. Understand data pipelines and modern ways of automating data pipelines using cloud-based tools. Test and clearly document implementations so others can easily understand the requirements, implementation, and test conditions. Perform data quality testing and assurance as a part of designing, building, and implementing scalable data solutions in SQL.

Technical and Functional Skills: AWS Services: Strong experience with AWS data services such as S3, EC2, Redshift, Glue, Athena, EMR, etc. Programming Languages: Proficiency in programming languages commonly used in data engineering such as Python, SQL, Scala, or Java. Data Warehousing: Experience in designing, implementing, and optimizing data warehouse solutions on Snowflake/Amazon Redshift. ETL Tools: Familiarity with ETL tools and frameworks (e.g., Apache Airflow, AWS Glue) for building and managing data pipelines. Database Management: Knowledge of database management systems (e.g., PostgreSQL, MySQL, Amazon Redshift) and data lake concepts. Big Data Technologies: Understanding of big data technologies such as Hadoop, Spark, Kafka, etc., and their integration with AWS. Version Control: Proficiency in version control tools like Git for managing code and infrastructure as code (e.g., CloudFormation, Terraform). Problem-solving Skills: Ability to analyze complex technical problems and propose effective solutions. Communication Skills: Strong verbal and written communication skills for documenting processes and collaborating with team members and stakeholders. Education and Experience: Typically, a bachelor's degree in Computer Science, Engineering, or a related field is required, along with 5+ years of experience in data engineering and AWS cloud environments.

About eClerx: eClerx is a global leader in productized services, bringing together people, technology and domain expertise to amplify business results. Our mission is to set the benchmark for client service and success in our industry. Our vision is to be the innovation partner of choice for technology, data analytics and process management services. Since our inception in 2000, we've partnered with top companies across various industries, including financial services, telecommunications, retail, and high-tech. Our innovative solutions and domain expertise help businesses optimize operations, improve efficiency, and drive growth. With over 18,000 employees worldwide, eClerx is dedicated to delivering excellence through smart automation and data-driven insights. At eClerx, we believe in nurturing talent and providing hands-on experience.

About eClerx Technology: eClerx's Technology Group collaboratively delivers Analytics, RPA, AI, and Machine Learning digital technologies that enable our consultants to help businesses thrive in a connected world. Our consultants and specialists partner with our global clients and colleagues to build and implement digital solutions through a broad spectrum of activities. To know more about us, visit https://eclerx.com. eClerx is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability or protected veteran status, or any other legally protected basis, per applicable law.
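
For the Snowflake warehousing work described above, loading staged data often reduces to a COPY INTO statement driven from Python; here is a minimal sketch using the snowflake-connector-python package, where the account, credentials, stage, and table names are placeholders (real values would come from a secrets store):

    import snowflake.connector

    # Connection parameters are placeholders, not real credentials.
    conn = snowflake.connector.connect(
        account="example_account",
        user="ETL_USER",
        password="...",
        warehouse="ETL_WH",
        database="ANALYTICS",
        schema="PUBLIC",
    )

    try:
        cur = conn.cursor()
        # Load Parquet files from an external stage into a target table.
        cur.execute("""
            COPY INTO daily_order_totals
            FROM @curated_stage/daily_order_totals/
            FILE_FORMAT = (TYPE = PARQUET)
            MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
        """)
        print(cur.fetchall())  # per-file load results
    finally:
        conn.close()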

Posted 1 day ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Mumbai

Work from Office

Source: Naukri

The candidate must possess in-depth functional knowledge of the process area and apply it to operational scenarios to provide effective solutions. He/she must be able to identify discrepancies and propose optimal solutions by using a logical, systematic, and sequential methodology. It is vital to be open-minded towards inputs and views from team members and to effectively lead, control, and motivate groups towards company objectives. Additionally, he/she must be self-directed, proactive, and seize every opportunity to meet internal and external customer needs and achieve customer satisfaction by effectively auditing processes, implementing best practices and process improvements, and utilizing the frameworks and tools available. Goals and thoughts must be clearly and concisely articulated and conveyed, verbally and in writing, to clients, colleagues, subordinates, and supervisors.

Process Manager Roles and Responsibilities: Part visual storyteller, part designer, part content creator, you will be responsible for creating, enhancing, and supporting diverse and complex pre-sales and post-sales threads and collateral development initiatives. Presentations/Collaterals for sales contexts/meetings: You will be required to collaborate with strategists, subject matter experts, and consultants to storyboard and create engaging and aesthetically intuitive presentations for various sales contexts and client meetings (pitches, workshops, points of view, responses to requests for proposals (RFPs), QBRs, SBRs, etc.). These presentations or collaterals are typically emailed/presented to CXO-level and/or technical audiences in leading companies across the world. As part of this, you will occasionally pursue quick hits (with turnaround times as short as a day) and more frequently work on detailed work spanning a few days. Collaborate with peers and internal teams to source and create case studies, mock dashboards, and sample deliverables to augment pitch decks and content readiness for upcoming pursuits. Multi-Format Sales & Marketing Collaterals: Storyboard and create multi-format content/collaterals in the form of brochures, infographics, product sheets, sell sheets, banners, teasers, product demos, and product videos. In addition, you will be required to support program-level initiatives such as newsletters and internal training programs. You will be responsible for organizing, managing, and governing the steady stream of collaterals being produced and evangelized both internally and externally, in line with the processes defined by the team. Create and maintain a library of presentation templates for internal and external use. Check and balance templates to ensure they are up to date and in line with company or client branding.

Technical and Functional Skills: Bachelor's degree with 5+ years of experience in presentation design and/or a creative visualizer role in sales and marketing contexts. Strong knowledge and proficiency in presentation software such as PowerPoint (must-have) and Prezi (good-to-have). Proficiency with the Adobe Creative Suite (After Effects, Illustrator, InDesign) and a sound understanding of interoperability processes. Proven talent in creative and visual thinking. Excellent verbal and written communication skills. Proven talent for transforming complex information into simple yet striking visualizations. An impressive portfolio (please share the link) that showcases what you would bring to this role.

Posted 1 day ago

Apply

4.0 - 5.0 years

9 - 19 Lacs

Hyderabad

Work from Office

Source: Naukri

Hi All, we have immediate openings for the below requirement. Role: Hadoop Administration. Skill: Hadoop Administrator (with EMR, Spark, Kafka, HBase, OpenSearch, Snowflake, Neo4j, AWS). Experience: 4 to 9 years. Work location: Hyderabad. Interview mode: 1st round virtual & 2nd round F2F. Notice period: 15 days to immediate joiners only. Interested candidates can share their CV to Mail: sravani.vommi@sonata-software.com Contact: 7075751998.

JD for Hadoop Admin: Hadoop Administrator (with EMR, Spark, Kafka, HBase, OpenSearch, Snowflake, Neo4j, AWS). Job Summary: We are seeking a highly skilled Hadoop Administrator with hands-on experience managing distributed data platforms such as Hadoop EMR, Spark, Kafka, HBase, OpenSearch, Snowflake, and Neo4j.

Key Responsibilities: Cluster Management: Administer, manage, and maintain Hadoop EMR clusters, ensuring optimal performance, high availability, and resource utilization. Handle the provisioning, configuration, and scaling of Hadoop clusters, with a focus on EMR, ensuring seamless integration with other ecosystem tools (e.g., Spark, Kafka, HBase). Oversee HBase configurations, performance tuning, and integration within the Hadoop ecosystem. Manage OpenSearch (formerly Elasticsearch) for log analytics and large-scale search applications. Data Integration & Processing: Oversee the performance and optimization of Apache Spark workloads across distributed data environments. Design and manage efficient data pipelines between Snowflake, Kafka, and the Hadoop ecosystem, ensuring seamless data movement and transformation. Implement data storage solutions in Snowflake and manage seamless data transfers to/from Hadoop (EMR) and other environments. Cloud & AWS Services: Work closely with AWS services such as EC2, S3, ECS, Lambda, IAM, RDS, and CloudWatch to build scalable, cost-efficient solutions for data management and processing. Manage AWS EMR clusters, ensuring they are optimized for big data workloads and integrated with other AWS services. Security & Compliance: Manage and configure Kerberos authentication and access control mechanisms within the Hadoop ecosystem (HDFS, YARN, Spark) to ensure data security. Implement encryption and secure data transfer policies within Hadoop clusters, Kafka, HBase, and OpenSearch to meet compliance and regulatory requirements. Manage user roles and permissions for access to Snowflake and ensure seamless integration of security policies across platforms. Monitoring & Troubleshooting: Set up and manage monitoring solutions to ensure the health of the Hadoop ecosystem and related components. Actively monitor and troubleshoot issues with Spark, Kafka, HBase, OpenSearch, and other distributed systems. Provide proactive support to address performance issues, bottlenecks, and failures. Automation & Optimization: Automate the deployment, scaling, and management of Hadoop and other big data systems using scripting languages (Bash, Python). Optimize the configurations and performance of EMR, Spark, Kafka, HBase, and OpenSearch. Develop scripts and utilities for backup, job monitoring, and performance tuning.
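
As a small illustration of the EMR administration and monitoring duties above, here is a boto3 sketch that lists active EMR clusters and surfaces failed steps for an operator to investigate; nothing in it is specific to this employer's environment:

    import boto3

    emr = boto3.client("emr")

    # Enumerate clusters that are up; the paginator handles accounts with many clusters.
    paginator = emr.get_paginator("list_clusters")
    for page in paginator.paginate(ClusterStates=["RUNNING", "WAITING"]):
        for cluster in page["Clusters"]:
            cluster_id = cluster["Id"]
            print(f"{cluster_id} {cluster['Name']} {cluster['Status']['State']}")

            # Surface failed steps so an operator can investigate.
            steps = emr.list_steps(ClusterId=cluster_id, StepStates=["FAILED"])
            for step in steps["Steps"]:
                print(f"  FAILED step: {step['Name']}")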

Posted 1 day ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Mumbai

Work from Office

Source: Naukri

Develop and maintain data-driven applications using Scala and PySpark. Work with large datasets, performing data analysis, building data pipelines, and optimizing performance.

Posted 2 days ago

Apply

4.0 - 5.0 years

6 - 7 Lacs

Bengaluru

Work from Office

Source: Naukri

Develop and manage data pipelines using Snowflake. Optimize performance and data warehousing strategies.

Posted 2 days ago

Apply

7.0 - 12.0 years

15 - 25 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Hybrid

Source: Naukri

Job Summary: We are seeking a highly motivated and experienced Senior Data Engineer to join our team. This role requires a deep curiosity about our business and a passion for technology and innovation. You will be responsible for designing and developing robust, scalable data engineering solutions that drive our business intelligence and data-driven decision-making processes. If you thrive in a dynamic environment and have a strong desire to deliver top-notch data solutions, we want to hear from you.

Key Responsibilities: Collaborate with agile teams to design and develop cutting-edge data engineering solutions. Build and maintain distributed, low-latency, and reliable data pipelines ensuring high availability and timely delivery of data. Design and implement optimized data engineering solutions for Big Data workloads to handle increasing data volumes and complexities. Develop high-performance real-time data ingestion solutions for streaming workloads. Adhere to best practices and established design patterns across all data engineering initiatives. Ensure code quality through elegant design, efficient coding, and performance optimization. Focus on data quality and consistency by implementing monitoring processes and systems. Produce detailed design and test documentation, including Data Flow Diagrams, Technical Design Specs, and Source to Target Mapping documents. Perform data analysis to troubleshoot and resolve data-related issues. Automate data engineering pipelines and data validation processes to eliminate manual interventions. Implement data security and privacy measures, including access controls, key management, and encryption techniques. Stay updated on technology trends, experimenting with new tools, and educating team members. Collaborate with analytics and business teams to improve data models and enhance data accessibility. Communicate effectively with both technical and non-technical stakeholders.

Qualifications: Education: Bachelor's degree in Computer Science, Computer Engineering, or a related field. Experience: 5+ years in architecting, designing, and building data engineering solutions and data platforms. Proven experience in building Lakehouse or Data Warehouses on platforms like Databricks or Snowflake. Expertise in designing and building highly optimized batch/streaming data pipelines using Databricks. Proficiency with data acquisition and transformation tools such as Fivetran and dbt. Strong experience in building efficient data engineering pipelines using Python and PySpark. Experience with distributed data processing frameworks such as Apache Hadoop, Apache Spark, or Flink. Familiarity with real-time data stream processing using tools like Apache Kafka, Kinesis, or Spark Structured Streaming. Experience with various AWS services, including S3, EC2, EMR, Lambda, RDS, DynamoDB, Redshift, and Glue Catalog. Expertise in advanced SQL programming and performance tuning.

Key Skills: Strong problem-solving abilities and perseverance in the face of ambiguity. Excellent emotional intelligence and interpersonal skills. Ability to build and maintain productive relationships with internal and external stakeholders. A self-starter mentality with a focus on growth and quick learning. Passion for operational products and creating outstanding employee experiences.
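
The real-time ingestion described here commonly uses Spark Structured Streaming over Kafka; a minimal sketch of that pattern follows, where the broker address, topic, and checkpoint path are hypothetical:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("events-stream").getOrCreate()

    # Read a Kafka topic as an unbounded DataFrame.
    raw = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
        .option("subscribe", "events")
        .load()
    )

    # Kafka values arrive as bytes; cast, then window-count as a simple transform.
    counts = (
        raw.selectExpr("CAST(value AS STRING) AS body", "timestamp")
        .groupBy(F.window("timestamp", "1 minute"))
        .count()
    )

    query = (
        counts.writeStream.outputMode("complete")
        .format("console")  # a production sink would be Delta/Parquet, not console
        .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
        .start()
    )
    query.awaitTermination()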

Posted 2 days ago

Apply

8.0 - 10.0 years

7 - 16 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Source: Naukri

Mandatory skills: AWS, Python

Posted 2 days ago

Apply

6.0 - 11.0 years

5 - 8 Lacs

Bengaluru

Work from Office

Source: Naukri

Experience in cloud platforms, e.g., AWS, GCP, Azure, etc. Experience in distributed technology tools, viz. SQL, Spark, Python, PySpark, Scala. Performance tuning: optimize SQL and PySpark for performance. Airflow workflow scheduling tool for creating data pipelines. GitHub source control tool and experience with creating/configuring Jenkins pipelines. Experience in EMR/EC2, Databricks, etc. DWH tools incl. SQL databases, Presto, and Snowflake. Streaming, serverless architecture.
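
For the Airflow item in this list, pipelines are declared as DAGs of tasks; a minimal sketch with classic Python operators (the DAG id and task logic are placeholders):

    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        # Placeholder: pull source data (e.g., from an API or S3).
        print("extracting")

    def transform():
        # Placeholder: run PySpark/SQL transforms.
        print("transforming")

    with DAG(
        dag_id="example_daily_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
        default_args={"retries": 1, "retry_delay": timedelta(minutes=5)},
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)

        extract_task >> transform_task  # transform runs only after extract succeeds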

Posted 2 days ago

Apply

6.0 - 10.0 years

15 - 25 Lacs

Bengaluru

Work from Office

Source: Naukri

Who We Are: At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role: Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips. As an AWS Data Engineer at Kyndryl, you'll be at the forefront of the data revolution, crafting and shaping data platforms that power our organization's success. This role is not just about code and databases; it's about transforming raw data into actionable insights that drive strategic decisions and innovation. In this role, you'll be engineering the backbone of our data infrastructure, ensuring the availability of pristine, refined data sets. With a well-defined methodology, critical thinking, and a rich blend of domain expertise, consulting finesse, and software engineering prowess, you'll be the mastermind of data transformation. Your journey begins by understanding project objectives and requirements from a business perspective, converting this knowledge into a data puzzle. You'll be delving into the depths of information to uncover quality issues and initial insights, setting the stage for data excellence. But it doesn't stop there. You'll be the architect of data pipelines, using your expertise to cleanse, normalize, and transform raw data into the final dataset—a true data alchemist. Armed with a keen eye for detail, you'll scrutinize data solutions, ensuring they align with business and technical requirements. Your work isn't just a means to an end; it's the foundation upon which data-driven decisions are made – and your lifecycle management expertise will ensure our data remains fresh and impactful. So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth.

Your Future at Kyndryl: Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are: You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.

Required Skills and Experience:
• 10+ years of experience in data engineering with a minimum of 6 years on AWS.
• Proficiency in AWS data services, including S3, Redshift, DynamoDB, Glue, Lambda, and EMR.
• Strong SQL skills and experience with NoSQL databases on AWS.
• Programming skills in Python, Java, or Scala for data processing and ETL tasks.
• Solid understanding of data warehousing concepts, data modeling, and ETL best practices.
• Experience with machine learning model deployment on AWS SageMaker.
• Familiarity with data orchestration tools, such as Apache Airflow, AWS Step Functions, or AWS Data Pipeline.
• Excellent problem-solving and analytical skills with attention to detail.
• Strong communication skills and ability to collaborate effectively with both technical and non-technical stakeholders.
• Experience with advanced AWS analytics services such as Athena, Kinesis, QuickSight, and Elasticsearch.
• Hands-on experience with Amazon Bedrock and generative AI tools for exploring and implementing AI-based solutions.
• AWS Certifications, such as AWS Certified Big Data – Specialty, AWS Certified Machine Learning – Specialty, or AWS Certified Solutions Architect.
• Familiarity with CI/CD pipelines, containerization (Docker), and serverless computing concepts on AWS.

Preferred Skills and Experience:
• Experience working as a Data Engineer and/or in cloud modernization.
• Experience in Data Modelling, to create a conceptual model of how data is connected and how it will be used in business processes.
• Professional certification, e.g., Open Certified Technical Specialist with Data Engineering Specialization.
• Cloud platform certification, e.g., AWS Certified Data Analytics – Specialty, Elastic Certified Engineer, Google Cloud Professional Data Engineer, or Microsoft Certified: Azure Data Engineer Associate.
• Understanding of social coding and Integrated Development Environments, e.g., GitHub and Visual Studio.
• Degree in a scientific discipline, such as Computer Science, Software Engineering, or Information Technology.

Being You: Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect: With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred! If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.

Posted 2 days ago

Apply

3.0 - 5.0 years

14 - 19 Lacs

Mumbai, Pune

Work from Office

Source: Naukri

Company: Marsh McLennan Agency. Description: Marsh McLennan is seeking candidates for the following position based in the Pune office: Senior Engineer/Principal Engineer.

What can you expect? We are seeking a skilled Data Engineer with 3 to 5 years of hands-on experience in building and optimizing data pipelines and architectures. The ideal candidate will have expertise in Spark, AWS Glue, AWS S3, Python, complex SQL, and AWS EMR.

What is in it for you? Holidays (as per the location). Medical & insurance benefits (as per the location). Shared transport (provided the address falls in the service zone). Hybrid way of working. Diversify your experience and learn new skills. Opportunity to work with stakeholders globally to learn and grow.

We will count on you to: Design and implement scalable data solutions that support our data-driven decision-making processes.

What you need to have: SQL and RDBMS knowledge - 5/5 (Postgres). Should have extensive hands-on experience with database systems: tables, schemas, views, materialized views. AWS knowledge: core and data engineering services, with Glue, Lambda, EMR, DMS, and S3 in focus. ETL knowledge: any ETL tool, preferably Informatica. Data warehousing. Big data: Hadoop concepts; Spark - 3/5; Hive - 5/5; Python/Java. Interpersonal skills: excellent communication skills and team lead capabilities. Good understanding of data systems in large organizational setups. Passion for deep diving into data and delivering value out of it.

What makes you stand out: Databricks knowledge. Any reporting tool experience, preferably MicroStrategy.

Marsh McLennan (NYSE: MMC) is the world's leading professional services firm in the areas of risk, strategy and people. The Company's more than 85,000 colleagues advise clients in over 130 countries. With annual revenue of $23 billion, Marsh McLennan helps clients navigate an increasingly dynamic and complex environment through four market-leading businesses. Marsh provides data-driven risk advisory services and insurance solutions to commercial and consumer clients. Guy Carpenter develops advanced risk, reinsurance and capital strategies that help clients grow profitably and pursue emerging opportunities. Mercer delivers advice and technology-driven solutions that help organizations redefine the world of work, reshape retirement and investment outcomes, and unlock health and well-being for a changing workforce. Oliver Wyman serves as a critical strategic, economic and brand advisor to private sector and governmental clients. For more information, visit marshmclennan.com, or follow us on LinkedIn and X.

Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people regardless of their sex/gender, marital or parental status, ethnic origin, nationality, age, background, disability, sexual orientation, caste, gender identity or any other characteristic protected by applicable law.

Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one anchor day per week on which their full team will be together in person.

Posted 2 days ago

Apply

3.0 - 6.0 years

40 - 45 Lacs

Kochi, Kolkata, Bhubaneswar

Work from Office

Source: Naukri

We are seeking experienced Data Engineers with over 3 years of experience to join our team at Intuit, through Cognizant. The selected candidates will be responsible for developing and maintaining scalable data pipelines, managing data warehousing solutions, and working with advanced cloud environments. The role requires strong technical proficiency and the ability to work onsite in Bangalore. Key Responsibilities: Design, build, and maintain data pipelines to ingest, process, and analyze large datasets using PySpark. Work on Data Warehouse and Data Lake solutions to manage structured and unstructured data. Develop and optimize complex SQL queries for data extraction and reporting. Leverage AWS cloud services such as S3, EC2, EMR, Athena, and Redshift for data storage, processing, and analytics. Collaborate with cross-functional teams to ensure the successful delivery of data solutions that meet business needs. Monitor data pipelines and troubleshoot any issues related to data integrity or system performance. Required Skills: 3 years of experience in data engineering or related fields. In-depth knowledge of Data Warehouses and Data Lakes. Proven experience in building data pipelines using PySpark. Strong expertise in SQL for data manipulation and extraction. Familiarity with AWS cloud services, including S3, EC2, EMR, Athena, Redshift, and other cloud computing platforms. Preferred Skills: Python programming experience is a plus. Experience working in Agile environments with tools like JIRA and GitHub.

Posted 2 days ago

Apply

3.0 - 7.0 years

11 - 16 Lacs

Gurugram

Work from Office

Source: Naukri

Project description: We are looking for a star Python Developer who is not afraid of work and challenges! Having gladly become a partner of a famous financial institution, we are gathering a team of professionals with a wide range of skills to successfully deliver business value to the client.

Responsibilities: Analyse existing SAS DI pipelines and SQL-based transformations. Translate and optimize SAS SQL logic into Python code using frameworks such as PySpark. Develop and maintain scalable ETL pipelines using Python on AWS EMR. Implement data transformation, cleansing, and aggregation logic to support business requirements. Design modular and reusable code for distributed data processing tasks on EMR clusters. Integrate EMR jobs with upstream and downstream systems, including AWS S3, Snowflake, and Tableau. Develop Tableau reports for business reporting.

Skills, must have: 6+ years of experience in ETL development, with at least 5 years working with AWS EMR. Bachelor's degree in Computer Science, Data Science, Statistics, or a related field. Proficiency in Python for data processing and scripting. Proficient in SQL and experience with one or more ETL tools (e.g., SAS DI, Informatica). Hands-on experience with AWS services: EMR, S3, IAM, VPC, and Glue. Familiarity with data storage systems such as Snowflake or RDS. Excellent communication skills and ability to work collaboratively in a team environment. Strong problem-solving skills and ability to work independently.

Nice to have: N/A. Other Languages: English B2 Upper Intermediate. Seniority: Senior.
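
To make the SAS-SQL-to-PySpark translation concrete, here is a hedged sketch of a SQL aggregation of the sort found in SAS DI jobs and one possible PySpark equivalent; the table and column names are invented for illustration:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("sas-migration-example").getOrCreate()
    txns = spark.read.parquet("s3://example-bucket/raw/transactions/")

    # Original (SAS-style) SQL:
    #   SELECT account_id, SUM(amount) AS total_amount
    #   FROM transactions
    #   WHERE status = 'POSTED'
    #   GROUP BY account_id
    #   HAVING SUM(amount) > 0;
    result = (
        txns.filter(F.col("status") == "POSTED")
        .groupBy("account_id")
        .agg(F.sum("amount").alias("total_amount"))
        .filter(F.col("total_amount") > 0)  # HAVING becomes a post-aggregation filter
    )

    result.write.mode("overwrite").parquet("s3://example-bucket/curated/account_totals/")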

Posted 2 days ago

Apply

5.0 - 9.0 years

5 - 9 Lacs

Bengaluru

Hybrid

Source: Naukri

PF detection is mandatory: Managing data storage solutions on AWS, such as Amazon S3, Amazon Redshift, and Amazon DynamoDB. Implementing and optimizing data processing workflows using AWS services like AWS Glue, Amazon EMR, and AWS Lambda. Working with Spotfire engineers and business analysts to ensure data is accessible and usable for analysis and visualization. Collaborating with other engineers and business stakeholders to understand requirements and deliver solutions. Writing code in languages like SQL, Python, or Scala to build and maintain data pipelines and applications. Using Infrastructure as Code (IaC) tools to automate the deployment and management of data infrastructure. A strong understanding of core AWS services, cloud concepts, and the AWS Well-Architected Framework. Conduct an extensive inventory/evaluation of existing environments' workflows. Designing and developing scalable data pipelines using AWS services to ensure efficient data flow and processing. Integrating/combining diverse data sources to maintain data consistency and reliability. Working closely with data engineers and other stakeholders to understand data requirements and ensure seamless data integration. Build and maintain CI/CD pipelines. Kindly acknowledge this mail with your updated resume.

Posted 2 days ago

Apply

2.0 - 4.0 years

3 - 8 Lacs

Noida

Hybrid

Source: Naukri

Position: Product Analyst (US Healthcare) About the job The Product Analyst plays a crucial role in driving product development and strategy within the US healthcare sector. They possess a strong understanding of healthcare systems and help organizations achieve their business goals by analyzing product requirements, identifying opportunities for enhancement, and collaborating with cross-functional teams. They support product lifecycles by aligning product strategies with business objectives and regulatory compliance, ensuring seamless execution of product initiatives. Responsibilities Collaborate with internal teams and external stakeholders to gather and analyze product requirements focused on the US healthcare system. Perform data-driven analysis of market trends, user behavior, and product performance to drive strategic product decisions and support business cases for new features. Create functional specifications, including wireframes and prototypes, to represent product ideas clearly and communicate feature functionality to stakeholders and development teams. Maintain the product roadmap, ensuring that features align with business objectives and user needs. Document product requirements through detailed Product Requirement Documents (PRDs), user stories, and acceptance criteria. Ensure product initiatives comply with US healthcare regulations (HIPAA) and align with industry best practices for privacy and security. Facilitate communication between stakeholders and development teams to ensure alignment and timely updates throughout the product lifecycle. Contribute to process improvements related to product analysis, requirement documentation, and stakeholder communication. Oversee development efforts, validate functionality, and ensure the product meets specified requirements. Assist in the creation of test plans and test cases for product validation in collaboration with QA and business testing teams. As a Product Owner Prepare and present strategic ideas to stakeholders to align product direction with business needs. Define product features based on customer requirements and market trends. Lead the development process by managing the product roadmap, ensuring alignment with business objectives. Act as the main point of contact between teams and stakeholders, ensuring smooth communication and workflow. Organize and prioritize the product backlog according to business and user requirements. Knowledge & Skills Experience in the US healthcare industry, with a strong understanding of healthcare regulations, such as HIPAA, and industry workflows. Ability to analyze complex business requirements and provide data-driven insights for product enhancements. Proficiency in wireframing and prototyping tools such as Balsamiq, Figma, or Sketch, as well as documentation tools like JIRA and Confluence. Excellent interpersonal skills, with strong written and verbal communication, to manage stakeholder relationships and collaborate with cross-functional teams. Familiarity with healthcare platforms, patient assistance programs, or electronic medical record systems is a plus. Ability to work in a fast-paced, collaborative environment and manage multiple tasks simultaneously. Preferred Qualifications 2-4 years of experience in Product Analysis or Product Management within the US healthcare sector. Familiarity with healthcare claims processes, benefit verification, patient electronic medical record system, revenue cycle management or patient assistance programs. 
Strong problem-solving skills and experience working with cross-functional teams to deliver product solutions.

Posted 2 days ago

Apply

4.0 - 6.0 years

10 - 20 Lacs

Noida

Hybrid

Source: Naukri

Position: Product Analyst (US Healthcare) About the job The Product Analyst plays a crucial role in driving product development and strategy within the US healthcare sector. They possess a strong understanding of healthcare systems and help organizations achieve their business goals by analyzing product requirements, identifying opportunities for enhancement, and collaborating with cross-functional teams. They support product lifecycles by aligning product strategies with business objectives and regulatory compliance, ensuring seamless execution of product initiatives. Responsibilities Collaborate with internal teams and external stakeholders to gather and analyze product requirements focused on the US healthcare system. Perform data-driven analysis of market trends, user behavior, and product performance to drive strategic product decisions and support business cases for new features. Create functional specifications, including wireframes and prototypes, to represent product ideas clearly and communicate feature functionality to stakeholders and development teams. Maintain the product roadmap, ensuring that features align with business objectives and user needs. Document product requirements through detailed Product Requirement Documents (PRDs), user stories, and acceptance criteria. Ensure product initiatives comply with US healthcare regulations (HIPAA) and align with industry best practices for privacy and security. Facilitate communication between stakeholders and development teams to ensure alignment and timely updates throughout the product lifecycle. Contribute to process improvements related to product analysis, requirement documentation, and stakeholder communication. Oversee development efforts, validate functionality, and ensure the product meets specified requirements. Assist in the creation of test plans and test cases for product validation in collaboration with QA and business testing teams. As a Product Owner Prepare and present strategic ideas to stakeholders to align product direction with business needs. Define product features based on customer requirements and market trends. Lead the development process by managing the product roadmap, ensuring alignment with business objectives. Act as the main point of contact between teams and stakeholders, ensuring smooth communication and workflow. Organize and prioritize the product backlog according to business and user requirements. Knowledge & Skills Experience in the US healthcare industry, with a strong understanding of healthcare regulations, such as HIPAA, and industry workflows. Ability to analyze complex business requirements and provide data-driven insights for product enhancements. Proficiency in wireframing and prototyping tools such as Balsamiq, Figma, or Sketch, as well as documentation tools like JIRA and Confluence. Excellent interpersonal skills, with strong written and verbal communication, to manage stakeholder relationships and collaborate with cross-functional teams. Familiarity with healthcare platforms, patient assistance programs, or electronic medical record systems is a plus. Ability to work in a fast-paced, collaborative environment and manage multiple tasks simultaneously. Preferred Qualifications 4-6 years of experience in Product Analysis or Product Management within the US healthcare sector. Familiarity with healthcare claims processes, reimbursement services, patient electronic medical record system, revenue cycle management or patient assistance programs. 
Strong problem-solving skills and experience working with cross-functional teams to deliver product solutions.

Posted 2 days ago

Apply

6.0 - 11.0 years

8 - 15 Lacs

Pune

Hybrid

Source: Naukri

- Experience in developing applications using Python, Glue (ETL), Lambda, and Step Functions services in AWS: EKS, S3, Glue, EMR, RDS data stores, CloudFront, API Gateway - Experience in AWS services such as Amazon Elastic Compute Cloud (EC2), Glue, Amazon S3, EKS, Lambda Required candidate profile: - 7+ years of experience in software development and technical leadership, preferably having strong financial knowledge in building complex trading applications. - Research and evaluate new technologies

Posted 2 days ago

Apply

5.0 - 7.0 years

1 - 1 Lacs

Lucknow

Hybrid

Source: Naukri

Technical Experience: 5-7 years of hands-on experience in data pipeline development and ETL processes. 3+ years of deep AWS experience, specifically with Kinesis, Glue, Lambda, S3, and Step Functions. Strong proficiency in NodeJS/JavaScript and Java for serverless and containerized applications. Production experience with Apache Spark, Apache Flink, or similar big data processing frameworks.

Data Engineering Expertise: Proven experience with real-time streaming architectures and event-driven systems. Hands-on experience with Parquet, Avro, Delta Lake, and columnar storage optimization. Experience implementing data quality frameworks such as Great Expectations or similar tools. Knowledge of star schema modeling, slowly changing dimensions, and data warehouse design patterns. Experience with medallion architecture or similar progressive data refinement strategies.

AWS Skills: Experience with Amazon EMR, Amazon MSK (Kafka), or Amazon Kinesis Analytics. Knowledge of Apache Airflow for workflow orchestration. Experience with DynamoDB, ElastiCache, and Neptune for specialized data storage. Familiarity with machine learning pipelines and Amazon SageMaker integration.
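
For the Kinesis-centred streaming stack above, the producer side often reduces to put_record calls; a minimal boto3 sketch follows (the stream name and event shape are hypothetical):

    import json

    import boto3

    kinesis = boto3.client("kinesis")

    def publish_event(event: dict) -> None:
        # PartitionKey controls shard routing; a stable entity id keeps per-entity ordering.
        kinesis.put_record(
            StreamName="example-events",  # placeholder stream
            Data=json.dumps(event).encode("utf-8"),
            PartitionKey=str(event["entity_id"]),
        )

    publish_event({"entity_id": 42, "type": "page_view", "ts": "2024-01-01T00:00:00Z"})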

Posted 2 days ago

Apply

0.0 - 1.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Source: Naukri

Soul AI is a pioneering company founded by IIT Bombay and IIM Ahmedabad alumni, with a strong founding team from IITs, NITs, and BITS. We specialize in delivering high-quality human-curated data, AI-first scaled operations services, and more. We are seeking a Medical Scribe with a background in healthcare and medical documentation. You will be responsible for assisting in the documentation of medical records and procedures. Key Responsibilities: Accurately transcribe patient records and medical data. Assist in creating comprehensive medical reports and documentation. Ensure that records comply with relevant standards and regulations. Required Qualifications: 1+ years of experience as a medical scribe or in medical transcription. Strong knowledge of medical terminology. Familiarity with electronic health record (EHR) systems. Why Join Us: Competitive pay (₹1200/hour). Flexible hours. Remote opportunity. NOTE: Pay will vary by project and typically is up to Rs. 1200 per hour (if you work an average of 3 hours every day, that could be as high as Rs. …). Shape the future of AI with Soul AI!

Posted 2 days ago

Apply

0.0 - 1.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Source: Naukri

Soul AI is a pioneering company founded by IIT Bombay and IIM Ahmedabad alumni, with a strong founding team from IITs, NITs, and BITS. We specialize in delivering high-quality human-curated data, AI-first scaled operations services, and more. We are seeking a Medical Scribe with a background in healthcare and medical documentation. You will be responsible for assisting in the documentation of medical records and procedures. Key Responsibilities: Accurately transcribe patient records and medical data. Assist in creating comprehensive medical reports and documentation. Ensure that records comply with relevant standards and regulations. Required Qualifications: 1+ years of experience as a medical scribe or in medical transcription. Strong knowledge of medical terminology. Familiarity with electronic health record (EHR) systems. Why Join Us: Competitive pay (₹1200/hour). Flexible hours. Remote opportunity. NOTE: Pay will vary by project and typically is up to Rs. 1200 per hour (if you work an average of 3 hours every day, that could be as high as Rs. …). Shape the future of AI with Soul AI!

Posted 2 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
