Jobs
Interviews

488 EMR Jobs - Page 18

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5 - 10 years

12 - 22 Lacs

Gurugram

Hybrid

Role & Responsibilities
Skill: Python, AI/ML | Location: Gurgaon | Experience: 5+ years | Notice period: Immediate

Key Responsibilities:
- Design, develop, and maintain robust, scalable Python applications and services.
- Work closely with data scientists, ML engineers, and product teams to integrate machine learning models into production systems.
- Develop and optimize AI-enabled applications, including RAG pipelines, vector search, and recommendation engines.
- Build APIs and backend services for real-time and batch AI/ML workflows.
- Contribute to infrastructure for data preprocessing, feature engineering, and inference pipelines.
- Ensure code quality, testing, and documentation across the lifecycle.
- Stay up to date with emerging trends in AI, including LLMs, embeddings, and neural search.

Required Qualifications:
- 5+ years of professional experience in Python development.
- Strong understanding of data structures, algorithms, and software architecture principles.
- Experience with web frameworks (e.g., Flask, FastAPI, Django).
- Familiarity with RESTful APIs, microservices, and distributed systems.
- Exposure to machine learning workflows, including data ingestion, model serving, or model monitoring.
- Knowledge of version control systems (Git) and CI/CD pipelines.
- Ability to work in cross-functional teams and communicate technical concepts clearly.

Preferred Qualifications:
- Experience with LLMs, RAG architectures, or vector databases (e.g., FAISS, Pinecone, Weaviate).
- Familiarity with ML frameworks such as TensorFlow, PyTorch, Scikit-learn, or LangChain.
- Understanding of prompt engineering, tokenization, embeddings, and NLP pipelines.
- Hands-on experience with cloud platforms (AWS, GCP, or Azure).
- Exposure to containerization and orchestration tools (Docker, Kubernetes).
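The RAG pipelines and vector search mentioned above revolve around ranking stored documents by embedding similarity. As a rough sketch of just the retrieval step, with toy hand-written vectors standing in for a real embedding model and an in-memory list standing in for a vector database such as FAISS or Pinecone (all names here are illustrative):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, index, top_k=2):
    # Rank stored (doc_id, vector) pairs by similarity to the query
    # and keep the top_k best matches.
    scored = [(doc_id, cosine(query_vec, vec)) for doc_id, vec in index]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:top_k]

# Toy "embeddings" -- in a real pipeline these come from an embedding model.
index = [
    ("doc_refunds", [0.9, 0.1, 0.0]),
    ("doc_shipping", [0.1, 0.8, 0.2]),
    ("doc_returns", [0.8, 0.2, 0.1]),
]
hits = retrieve([1.0, 0.0, 0.0], index)
print([doc_id for doc_id, _ in hits])
```

The retrieved documents would then be stuffed into the LLM prompt as context; production systems swap the list for an approximate-nearest-neighbor index so retrieval stays fast at scale.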

Posted 2 months ago

Apply

2 - 5 years

3 - 7 Lacs

Gurugram

Work from Office

Role: Data Engineer

Skills:
- Data Modeling: Design and implement efficient data models, ensuring data accuracy and optimal performance.
- ETL Development: Develop, maintain, and optimize ETL processes to extract, transform, and load data from various sources into our data warehouse.
- SQL Expertise: Write complex SQL queries to extract, manipulate, and analyze data as needed.
- Python Development: Develop and maintain Python scripts and applications to support data processing and automation.
- AWS Expertise: Leverage your deep knowledge of AWS services, such as S3, Redshift, Glue, EMR, and Athena, to build and maintain data pipelines and infrastructure.
- Infrastructure as Code (IaC): Experience with tools like Terraform or CloudFormation to automate the provisioning and management of AWS resources is a plus.
- Big Data Processing: Knowledge of PySpark for big data processing and analysis is desirable.
- Source Code Management: Utilize Git and GitHub for version control and collaboration on data engineering projects.
- Performance Optimization: Identify and implement optimizations for data processing pipelines to enhance efficiency and reduce costs.
- Data Quality: Implement data quality checks and validation procedures to maintain data integrity.
- Collaboration: Work closely with data scientists, analysts, and other teams to understand data requirements and deliver high-quality data solutions.
- Documentation: Maintain comprehensive documentation for all data engineering processes and projects.
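The ETL and data-quality duties above boil down to pulling raw records, normalizing them, and rejecting rows that fail validation before they reach the warehouse. A minimal, illustrative transform step in plain Python (the record layout and field names are hypothetical):

```python
from datetime import date

def transform(raw_rows):
    # Transform step: normalize field names and types, parse dates,
    # and drop rows that fail a basic data-quality check.
    clean = []
    for row in raw_rows:
        if not row.get("order_id"):   # quality gate: reject rows missing the key
            continue
        clean.append({
            "order_id": int(row["order_id"]),
            "order_date": date.fromisoformat(row["order_date"]),
            "amount": round(float(row["amount"]), 2),
        })
    return clean

raw = [
    {"order_id": "101", "order_date": "2024-05-01", "amount": "19.999"},
    {"order_id": "",    "order_date": "2024-05-02", "amount": "5.00"},  # fails the gate
]
print(transform(raw))
```

In an AWS pipeline the same shape appears at larger scale: Glue or EMR runs the transform as a PySpark job, and the load step writes the cleaned batch to S3 or Redshift.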

Posted 2 months ago

Apply

3 - 7 years

5 - 10 Lacs

Thiruvananthapuram

Remote

Job Summary
The Insurance Verification Manager will oversee the insurance verification process, ensuring timely and accurate verification of patient insurance eligibility and benefits. The role involves managing a team, optimizing workflows, and leveraging advanced features to improve operational efficiency and the patient experience.

Key Responsibilities:
- Supervise and mentor the insurance verification team, setting performance goals and conducting regular evaluations.
- Provide training and support to team members on insurance verification tools and best practices.
- Ensure timely verification of patient insurance eligibility, benefits, coverage levels, exclusions, and limitations.
- Monitor and manage the verification process, addressing any discrepancies or issues promptly.
- Use efficient scheduling and patient-list management techniques to prioritize verification tasks.
- Implement and maintain insurance templates to streamline data entry and reduce errors.
- Coordinate with other departments to integrate insurance verification seamlessly with scheduling, billing, and patient care.
- Communicate with insurance providers to resolve verification issues and stay updated on policy changes.
- Generate and analyze reports on verification metrics, claim statuses, and aging balances.
- Ensure compliance with HIPAA and other regulatory requirements in all insurance verification activities.

Qualifications:
- Bachelor's degree in Healthcare Administration, Business Administration, or a related field.
- Minimum of 5 years of experience in insurance verification or healthcare revenue cycle management, including at least 2 years in a managerial or supervisory role.
- In-depth knowledge of insurance policies, eligibility criteria, coverage details, and claims processes.
- Proficiency with insurance verification software, practice management systems, or related healthcare management tools.
- Strong understanding of HIPAA regulations and other healthcare compliance requirements.
- Excellent leadership and team management skills.
- Strong analytical and problem-solving skills, with the ability to make data-driven decisions.
- Effective communication and interpersonal skills, with the ability to coordinate with cross-functional teams.
- Proficiency in the Microsoft Office Suite (Word, Excel, PowerPoint) and experience with data analysis tools.
- Professional certifications in healthcare management or medical billing and coding (e.g., CPC, AAPC, CHAM) are a plus.
- Willingness to work night shifts.

Posted 2 months ago

Apply

2 - 7 years

2 - 4 Lacs

Mumbai, Mumbai Suburban, Navi Mumbai

Work from Office

The CRM Executive uses CRM software to track interactions, maintain customer data, nurture leads, and deliver service that boosts sales, loyalty, and repeat business in the jewellery industry, staying informed on products, trends, and customer preferences.

Required Candidate Profile:
- Strong communication skills
- Customer service expertise
- CRM proficiency
- Sales acumen
- Jewellery industry knowledge
- Data analysis skills
- Proficiency in the MS Office Suite

Perks and Benefits: Annual bonus, leave encashment, overtime, leaves.

Posted 2 months ago

Apply

3 - 6 years

6 - 7 Lacs

Ahmedabad

Work from Office

Service Excellence
- Lead unit and departmental process improvement programs to enhance patient satisfaction.
- Implement the designed programs to create a positive patient experience and promote service excellence.
- Foster a sense of teamwork across the organization to encourage active participation in process improvement and drive culture change.
- Create a system to handle, respond to, and address complaints from patients and visitors in a timely manner.
- Conduct regular reviews and take corrective and preventive measures to ensure processes are in place and repeat complaints are eliminated.
- Provide communication and training to coordinators and staff at all levels of the unit to ensure competency, commitment, and compliance with service excellence programs and initiatives.
- Ensure a world-class patient experience that is reflected in Google ratings, Net Promoter Score, feedback, and star ratings.
- Implement all digital initiatives of the organization (e.g., NH Care app, EMR, online appointment system) and provide feedback to the software team.
- Create awareness of and drive utilization of the digital service transformation program among staff, other departments, and customers.
- Perform active and passive service excellence audits to identify the training programs required to raise the bar at all levels of service delivery.
- Prepare daily/weekly/monthly reports for senior management on service excellence transformation progress, service quality, customer satisfaction, and analysis of all negative and critical feedback.
- Conduct weekly/monthly reviews with the internal team on the steps necessary for improvement.

EMR Management
- Train users: consultants, junior doctors, nursing staff, pharmacists, and technicians.
- New joiners/users: Introduce the EMR, create their relevant configurations with IT support, and provide a product demo. Train users on potential uses of the existing product.
- New functionalities: Orient users to new functionality implemented in each version or product upgrade (e.g., drug interaction alerts, pharmacy updates, investigation list updates, and the CIMS clinical decision support system and its features).
- Track and manage data for every rollout; measure and report key milestones on EMR adoption and focus on improving the adoption rate.

Collaborative Responsibilities
- Report bugs to the central EMR team, perform user acceptance testing once resolved, and obtain end-user sign-off.
- Understand the complexities of the IP environment and the unique/specific requirements of the unit, and plan a smooth rollout to improve acceptance by all stakeholders.
- Understand new end-user requirements, prioritize them, and communicate them to the central EMR team through proper documentation.

Posted 2 months ago

Apply

4 - 7 years

12 - 16 Lacs

Hyderabad

Work from Office

We are looking for a highly skilled Full Stack Developer with hands-on experience in Python, GenAI, and AWS cloud services. The ideal candidate should be proficient in backend development using NodeJS, ExpressJS, Python Flask/FastAPI, and RESTful API design. On the frontend, strong skills in Angular, ReactJS, and TypeScript are required.

Roles and Responsibilities:
- Design and develop cloud-native applications and services using AWS services such as Lambda, API Gateway, ECS, EKS, DynamoDB, Glue, Redshift, and EMR.
- Implement CI/CD pipelines using AWS CodePipeline, CodeBuild, and CodeDeploy to automate application deployment and updates.
- Collaborate with architects and other engineers to design scalable and secure application architectures on AWS.
- Monitor application performance and implement optimizations to enhance reliability, scalability, and efficiency.
- Implement security best practices for AWS applications, including identity and access management (IAM), encryption, and secure coding practices.
- Design and deploy containerized applications using AWS services such as Amazon ECS (Elastic Container Service), Amazon EKS (Elastic Kubernetes Service), and AWS Fargate.
- Configure and manage container orchestration, scaling, and deployment strategies, optimizing container performance and resource utilization by tuning settings and configurations.
- Implement and manage application observability tools such as AWS CloudWatch, AWS X-Ray, Prometheus, Grafana, and the ELK Stack (Elasticsearch, Logstash, Kibana).
- Develop and configure monitoring, logging, and alerting systems to provide insight into application performance and health, creating dashboards and reports that visualize application metrics and logs for proactive monitoring and troubleshooting.
- Integrate AWS services with application components and external systems, ensuring smooth and efficient data flow; diagnose and troubleshoot issues related to application performance, availability, and reliability.
- Create and maintain comprehensive documentation for application design, deployment processes, and configuration.

Job Requirements:
- Proficiency in AWS services such as Lambda, API Gateway, ECS, EKS, DynamoDB, S3, RDS, Glue, Redshift, and EMR.
- Experience developing and deploying AI solutions with Python and JavaScript.
- Strong background in machine learning, deep learning, and data modeling.
- Good understanding of Agile methodologies and version control systems such as Git.
- Familiarity with container orchestration concepts and tools, including Kubernetes and Docker Swarm.
- Understanding of AWS security best practices, including IAM, KMS, and encryption.
- Observability tools: proficiency with AWS CloudWatch, AWS X-Ray, Prometheus, Grafana, and the ELK Stack.
- Monitoring: experience with monitoring and logging tools such as AWS CloudWatch, CloudTrail, or the ELK Stack.
- Collaboration: strong teamwork and communication skills, with the ability to work effectively with cross-functional teams.
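One building block behind the observability stack listed above (CloudWatch Logs Insights, the ELK Stack) is emitting logs as structured JSON so the pipeline can index and query individual fields rather than parse free text. A small sketch using only the standard library; the service name and field names are placeholders:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    # Render each record as one JSON object per line -- the shape that
    # log pipelines such as CloudWatch Logs Insights or ELK index by field.
    def format(self, record):
        return json.dumps({
            "ts": round(record.created, 3),
            "level": record.levelname,
            "message": record.getMessage(),
            "service": "orders-api",              # hypothetical service name
            **getattr(record, "fields", {}),      # extra per-event fields
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("orders")
log.addHandler(handler)
log.setLevel(logging.INFO)

# `extra` attaches structured fields to the record for the formatter to emit.
log.info("request handled", extra={"fields": {"latency_ms": 42, "status": 200}})
```

With logs in this shape, a query like filtering on `status >= 500` or averaging `latency_ms` becomes a field lookup in the log backend instead of a regex over text.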

Posted 2 months ago

Apply

10 - 14 years

11 - 15 Lacs

Noida

Work from Office

Position Summary
You will manage and oversee the development and deployment of high-quality software products, ensure that development teams adopt and follow modern engineering practices, and work with stakeholders to accomplish business and software engineering goals. You will improve the team's capabilities, improve engagement, and minimize business risks.

Key Duties & Responsibilities:
- Develop high-performing teams equipped with the right capabilities in terms of skills, tools, technology, and resources to continuously deliver high-quality, high-value software.
- Collaborate with Agile practitioners to shield the team from distractions so it stays focused on delivering its sprint commitments.
- Drive adoption of modern engineering practices such as continuous integration, continuous deployment, code reviews, TDD, functional/non-functional testing, test automation, and performance engineering to deliver high-quality, high-value software for 2-3 scrum teams.
- Craft individual development plans for team members and provide growth opportunities.
- Mentor, coach, and challenge team members to continuously raise performance standards.
- Align the teams with the organization's vision, mission, and strategic priorities, and act as a key communication channel between the team and senior leadership.
- Assess team members and provide timely feedback; conduct 360-degree feedback for self and team assessment.
- Foster a culture and mindset of continuous learning and improvement to develop agility, using the three pillars of transparency, inspection, and adaptation across levels and geographies.

Qualification: B.E/B.Tech/MCA or an equivalent professional degree.

Experience:
- 10+ years of experience building web-based enterprise software using the Microsoft .NET technology stack.
- Demonstrable experience leading teams of highly skilled software engineers (15-25 team members) and working successfully across cultures.
- Solid understanding of modern architectural and design patterns.
- Comprehensive knowledge of automation testing and modern testing practices, e.g., TDD and BDD.
- Well versed in C#, modern UI technologies, and database/ORM technologies.
- Strong exposure to Agile software development methodologies and enabling tools such as Jira and Confluence.
- Excellent communicator with a demonstrable ability to influence decisions.
- Knowledge of healthcare revenue cycle management, HL7, EMR systems, HIPAA, and FHIR is preferred.

Key Competency Profile:
- Spot new opportunities by anticipating change and planning accordingly.
- Find ways to better serve customers and patients; be accountable for customer service of the highest quality.
- Create connections across teams by valuing differences and including others.
- Own your development by implementing and sharing your learnings.
- Motivate each other to perform at our highest level.
- Help people improve by learning from successes and failures.
- Work the right way by acting with integrity and living our values every day.
- Succeed by proactively identifying problems and solutions for yourself and others.

Working in an evolving healthcare setting, we use our shared expertise to deliver innovative solutions. Our fast-growing team has opportunities to learn and grow through rewarding interactions, collaboration, and the freedom to explore professional interests. Our associates are given valuable opportunities to contribute, to innovate, and to create meaningful work that makes an impact in the communities we serve around the world. We also offer a culture of excellence that drives customer success and improves patient care. We believe in giving back to the community and offer a competitive benefits package.

Posted 2 months ago

Apply

6 - 11 years

10 - 20 Lacs

Mumbai, Hyderabad, Bengaluru

Hybrid

Hi, we are hiring for a Java Developer with one of the leading MNCs for Hyderabad, Bangalore, and Mumbai locations.

Experience: 6-12 years
Location: Bangalore, Hyderabad, Chennai
CTC: As per company norms

Job Description:
Mandatory skills: Java, Spring Boot, Microservices, AWS. Kubernetes is good to have.
- Expertise in development using Core Java, J2EE, Spring Boot, Microservices, and Web Services; SOA experience with SOAP as well as RESTful services using JSON formats, and messaging with Kafka.
- Working proficiency with enterprise development toolsets such as Jenkins, Git/Bitbucket, Sonar, Black Duck, Splunk, and Apigee.
- Experience with AWS cloud monitoring tools such as Datadog, CloudWatch, and Lambda is needed.
- Experience with XACML authorization policies.
- Experience with NoSQL and SQL databases such as Cassandra, Aurora, and Oracle.
- Good understanding of React JS, the Photon framework, design, and Kubernetes.
- Working with Git/Bitbucket, Maven, Gradle, and Jenkins to build and deploy code to production environments.

Primary Location: IN-KA-Bangalore | Schedule: Full Time | Shift: Experienced | Employee Status: Individual Contributor | Job Type: Full-time

Kindly fill in the details below to proceed with your profile:
- Total experience
- Relevant experience in Java
- Experience in multithreading
- Experience in microservices
- Experience in Spring Boot
- Experience in Kafka
- Experience in AWS
- Experience in Kubernetes
- Current designation
- Current organization
- Current location
- Current CTC + variable
- Any offer in hand
- Expected CTC + variable
- Notice period / LWD
- Reason for relocation to Bangalore

If interested, kindly share your resume at nupur.tyagi@mounttalent.com

Posted 2 months ago

Apply

3 - 8 years

13 - 15 Lacs

Bengaluru

Work from Office

Job Title: Senior Software Engineer - Node JS, Terraform with AWS
Position: Senior Software Engineer
Experience: 5-8 years
Category: Software Development/Engineering
Main location: Bangalore
Position ID: J0525-0430
Employment Type: Full Time

Responsibilities and skills:
- Familiarity with ORM/ODM libraries (e.g., Sequelize, Mongoose).
- Proficiency in using Git for version control.
- Understanding of testing frameworks (e.g., Jest, Mocha, Chai) and writing unit and integration tests.
- Collaborate with front-end developers to integrate user-facing elements with server-side logic.
- Design and implement efficient database schemas and ensure data integrity.
- Write clean, well-documented, and testable code.
- Participate in code reviews to ensure code quality and adherence to coding standards.
- Troubleshoot and debug issues in development and production environments.
- Knowledge of security best practices for web applications (authentication, authorization, data validation).
- Strong communication and collaboration skills, with the ability to interact with technical and non-technical stakeholders.

Required qualifications to be successful in this role:
We are looking for an experienced developer to join our team. The ideal candidate is passionate about coding and about developing scalable, high-performance applications. You will work closely with our front-end developers, designers, and other team members to deliver quality solutions that meet our clients' needs.

Qualification: Bachelor's degree in Computer Science or a related field, or higher, with a minimum of 3 years of relevant experience.

Must-Have Skills:
- Design, develop, and maintain robust and scalable server-side applications using Node.js and JavaScript/TypeScript.
- Develop and consume RESTful APIs and integrate with third-party services.
- In-depth knowledge of the AWS cloud, including familiarity with services such as S3, Lambda, DynamoDB, Glue, Apache Airflow, SQS, SNS, ECS, Step Functions, EMR (Elastic MapReduce), EKS (Elastic Kubernetes Service), and Key Management Service.
- Hands-on experience with Terraform.
- Specialization in designing and developing fully automated end-to-end data processing pipelines for large-scale data ingestion, curation, and transformation.
- Experience deploying Spark-based ingestion frameworks, test automation tools, and CI/CD pipelines.
- Knowledge of unit testing frameworks and best practices.
- Working experience with SQL and NoSQL (preferred) databases, including joins, aggregations, window functions, date functions, partitions, indexing, and performance improvement techniques.
- Experience with database systems such as Oracle, MySQL, PostgreSQL, MongoDB, or other NoSQL databases.

Skills: Node.js, RESTful APIs, Terraform
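Among the SQL skills listed above, window functions are the ones that most often need illustration: they compute a value over a partition of rows without collapsing those rows the way GROUP BY does. A self-contained example using Python's built-in sqlite3 module (assumes an SQLite build with window-function support, 3.25 or newer) and a made-up orders table:

```python
import sqlite3

# In-memory table of hypothetical orders; rank each customer's orders by
# amount using RANK() OVER (PARTITION BY ... ORDER BY ...).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('alice', 50), ('alice', 120), ('bob', 80);
""")
rows = conn.execute("""
    SELECT customer, amount,
           RANK() OVER (PARTITION BY customer ORDER BY amount DESC) AS rnk
    FROM orders
    ORDER BY customer, rnk
""").fetchall()
print(rows)
```

Every input row survives into the output with its rank attached; a GROUP BY of the same data would instead collapse each customer to a single aggregate row.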

Posted 2 months ago

Apply

3 - 5 years

4 - 8 Lacs

Hyderabad

Work from Office

Sr Associate Software Engineer – Finance

What you will do
The role is responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. It involves working with large datasets, developing reports, supporting and executing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

- Design, develop, and maintain data solutions for data generation, collection, and processing.
- Be a key team member assisting in the design and development of the data pipeline.
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems.
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks.
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs.
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency.
- Implement data security and privacy measures to protect sensitive data.
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
- Collaborate and communicate effectively with product teams.
- Collaborate with data architects, business SMEs, and data scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
- Adhere to best practices for coding, testing, and designing reusable code/components.
- Explore new tools and technologies that will improve ETL platform performance.
- Participate in sprint planning meetings and provide estimates on technical implementation.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree and 1 to 3 years of Computer Science, IT, or related field experience; OR Bachelor's degree and 3 to 5 years; OR Diploma and 7 to 9 years.
- Proficiency in Python, PySpark, and Scala for data processing and ETL (Extract, Transform, Load) workflows, with hands-on experience using Databricks to build ETL pipelines and handle big data processing.
- Experience with data warehousing platforms such as Amazon Redshift or Snowflake.
- Strong knowledge of SQL and experience with relational databases (e.g., PostgreSQL, MySQL).
- Familiarity with big data frameworks such as Apache Hadoop, Spark, and Kafka for handling large datasets.
- Experience with software engineering best practices, including version control (GitLab, Subversion, etc.), CI/CD (Jenkins, GitLab, etc.), automated unit testing, and DevOps.

Preferred Qualifications:
- Experience with cloud platforms such as AWS, particularly data services (e.g., EKS, EC2, S3, EMR, RDS, Redshift/Spectrum, Lambda, Glue, Athena).
- Experience with the Anaplan platform, including building, managing, and optimizing models and workflows, including scalable data integrations.
- Understanding of machine learning pipelines and frameworks for ML/AI models.

Professional Certifications:
- AWS Certified Data Engineer (preferred)
- Databricks Certified (preferred)

Soft Skills:
- Excellent critical-thinking and problem-solving skills.
- Strong communication and collaboration skills.
- Demonstrated awareness of how to function in a team setting.
- Demonstrated presentation skills.

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being.
From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 2 months ago

Apply

12 - 16 years

35 - 40 Lacs

Bengaluru

Work from Office

As an AWS Data Engineer, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data table formats such as Apache Iceberg, Delta Lake, or Apache Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.

Must-Have Skills: AWS (S3, Glue, EMR, Lambda), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
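The "data validation, quality, and governance" responsibility above usually means gating each batch before it is loaded downstream. A toy sketch of such a gate in plain Python (the schema and records are made up); real Glue/EMR jobs would typically lean on richer frameworks such as Deequ or Great Expectations instead:

```python
def validate(rows, schema):
    # Minimal data-quality gate: check each row for required columns and
    # simple type rules, returning a list of (row_index, column, problem).
    errors = []
    for i, row in enumerate(rows):
        for col, typ in schema.items():
            if col not in row:
                errors.append((i, col, "missing"))
            elif not isinstance(row[col], typ):
                errors.append((i, col, "wrong type"))
    return errors

schema = {"id": int, "country": str}
batch = [
    {"id": 1, "country": "IN"},
    {"id": "2", "country": "IN"},  # id arrived as a string -> flagged
]
print(validate(batch, schema))
```

In a pipeline, a non-empty error list would typically fail the job or divert the offending rows to a quarantine location (e.g., an S3 prefix) for inspection, rather than letting bad data reach the warehouse.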

Posted 2 months ago

Apply

12 - 16 years

35 - 40 Lacs

Chennai

Work from Office

As an AWS Data Engineer, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data table formats such as Apache Iceberg, Delta Lake, or Apache Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.

Must-Have Skills: AWS (S3, Glue, EMR, Lambda), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.

Posted 2 months ago

Apply

12 - 16 years

35 - 40 Lacs

Mumbai

Work from Office

As an AWS Data Engineer, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data table formats such as Apache Iceberg, Delta Lake, or Apache Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.

Must-Have Skills: AWS (S3, Glue, EMR, Lambda), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.

Posted 2 months ago

Apply

12 - 16 years

35 - 40 Lacs

Kolkata

Work from Office

As an AWS Data Engineer, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data table formats such as Apache Iceberg, Delta Lake, or Apache Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.

Must-Have Skills: AWS (S3, Glue, EMR, Lambda), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.

Posted 2 months ago


12 - 16 years

35 - 37 Lacs

Chandigarh

Work from Office

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Hands-on experience with open-source table formats such as Apache Iceberg, Delta Lake, or Apache Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Apache Iceberg.

Posted 2 months ago


12 - 16 years

35 - 37 Lacs

Vadodara

Work from Office

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Hands-on experience with open-source table formats such as Apache Iceberg, Delta Lake, or Apache Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Apache Iceberg.

Posted 2 months ago


12 - 16 years

35 - 37 Lacs

Visakhapatnam

Work from Office

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Hands-on experience with open-source table formats such as Apache Iceberg, Delta Lake, or Apache Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Apache Iceberg.

Posted 2 months ago


12 - 16 years

35 - 37 Lacs

Thiruvananthapuram

Work from Office

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Hands-on experience with open-source table formats such as Apache Iceberg, Delta Lake, or Apache Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Apache Iceberg.

Posted 2 months ago


12 - 16 years

35 - 37 Lacs

Coimbatore

Work from Office

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Hands-on experience with open-source table formats such as Apache Iceberg, Delta Lake, or Apache Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Apache Iceberg.

Posted 2 months ago


12 - 16 years

35 - 37 Lacs

Hyderabad

Work from Office

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Hands-on experience with open-source table formats such as Apache Iceberg, Delta Lake, or Apache Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Apache Iceberg.

Posted 2 months ago


12 - 16 years

35 - 37 Lacs

Nagpur

Work from Office

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Hands-on experience with open-source table formats such as Apache Iceberg, Delta Lake, or Apache Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Apache Iceberg.

Posted 2 months ago


12 - 16 years

35 - 37 Lacs

Jaipur

Work from Office

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Hands-on experience with open-source table formats such as Apache Iceberg, Delta Lake, or Apache Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Apache Iceberg.

Posted 2 months ago


12 - 16 years

35 - 37 Lacs

Lucknow

Work from Office

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Hands-on experience with open-source table formats such as Apache Iceberg, Delta Lake, or Apache Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Apache Iceberg.

Posted 2 months ago


12 - 16 years

35 - 37 Lacs

Kanpur

Work from Office

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Hands-on experience with open-source table formats such as Apache Iceberg, Delta Lake, or Apache Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Apache Iceberg.

Posted 2 months ago


12 - 16 years

35 - 37 Lacs

Pune

Work from Office

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Hands-on experience with open-source table formats such as Apache Iceberg, Delta Lake, or Apache Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Apache Iceberg.

Posted 2 months ago
