
521 EMR Jobs - Page 18

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 - 7.0 years

10 - 14 Lacs

Chennai

Work from Office

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
The Role in Brief: The Data Integration Analyst is responsible for implementing, maintaining, and supporting HL7 interfaces between customers, both external and internal, and Optum's integration platforms. The Engineer will work in a team but will have individual assignments to complete independently. Engineers are expected to work under aggressive schedules, be self-sufficient, work within established standards, and be able to work on multiple assignments simultaneously. Candidates must be willing to work in a 24/7 environment and will be on-call as needed for critical issues.
Primary Responsibilities
Interface Design and Development:
- Interface Analysis: HL7 message investigation to determine gaps or remediate issues
- Interface Design, Development, and Delivery: Interface planning, filtering, transformation, and routing
- Interface Validation: Review, verification, and monitoring to ensure the delivered interface passes acceptance testing
- Interface Go-Live and Transition to Support: Completing cutover events with teams/partners and executing turnover procedures for hand-off
- Provider Enrollments: Provisioning and documentation of all integrations
Troubleshooting and Support:
- Issue Resolution: Troubleshoot issues raised by alarms, support, or project managers, from root cause identification to resolution
- Support Requests: Handle tier 2/3 support requests and provide timely solutions to ensure client satisfaction
- Enhancements/Maintenance: Ensuring stable and continuous data delivery
Collaboration and Communication:
- Stakeholder Interaction: Work closely with clients, project managers, product managers, and other stakeholders to understand requirements and deliver solutions
- Documentation: Contribute to technical documentation of specifications and processes
- Communication: Effectively communicate complex concepts, both verbally and in writing, to team members and clients
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.
Basic Qualifications
- Education: Bachelor's degree in Computer Science or any engineering field
- Experience: 2+ years of experience working with HL7 data and integration engines or platforms
- Technical Aptitude: Ability to learn new technologies
- Skills: Proven solid analytical and problem-solving skills
Required Qualifications
- Undergraduate degree or equivalent experience
- HL7 Standards knowledge: HL7 v2, v3, CDA
- Integration Tools knowledge: InterSystems IRIS, Infor Cloverleaf, NextGen Mirth Connect, or equivalent
- Cloud Technology knowledge: Azure or AWS
- Scripting and Structure: Proficiency in T-SQL and procedural scripting, XML, JSON
Preferred Qualifications
- HL7 Standards knowledge: HL7 FHIR, US Core
- Integration Tools knowledge: InterSystems Ensemble or IRIS, Caché
- Scripting and Structure knowledge: ObjectScript, Perl, TCL, JavaScript
- US Healthcare knowledge
- Health Information Systems: Working knowledge
- Clinical Data Analysis knowledge
- Clinical Processes: Understanding of clinical processes and vocabulary
Soft Skills
- Analytical and Creative: Highly analytical, curious, and creative
- Organized: Proven solid organization skills and attention to detail
- Ownership: Takes ownership of responsibilities
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes.
We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes — an enterprise priority reflected in our mission.
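The HL7 interface work described above centers on message investigation: pulling fields out of pipe-delimited v2 segments to find gaps or errors. A minimal sketch of that kind of inspection in plain Python; real integration engines (Mirth Connect, Cloverleaf, IRIS) handle escaping, repetition, and components, and the sample ADT message below is invented for illustration:

```python
# Minimal HL7 v2 field inspection: segments are separated by carriage
# returns, fields by '|'. This only splits top-level fields and keeps the
# first occurrence of each segment; it is a teaching sketch, not a parser
# suitable for production interfaces.

SAMPLE_ADT = (
    "MSH|^~\\&|SENDER|FAC|RECEIVER|FAC|202401011200||ADT^A01|MSG0001|P|2.5\r"
    "PID|1||12345^^^HOSP^MR||DOE^JOHN||19800101|M\r"
)

def parse_segments(message: str) -> dict:
    """Return {segment_id: field list} for the first occurrence of each segment."""
    segments = {}
    for raw in filter(None, message.split("\r")):
        fields = raw.split("|")
        segments.setdefault(fields[0], fields)
    return segments

segs = parse_segments(SAMPLE_ADT)
message_type = segs["MSH"][8]   # split index 8 corresponds to MSH-9, e.g. 'ADT^A01'
patient_name = segs["PID"][5]   # PID-5, e.g. 'DOE^JOHN'
```

Interface analysis then becomes a matter of asserting that required fields (message type, patient identifiers) are present before routing the message onward.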

Posted 2 months ago

Apply

3.0 - 7.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Overall Responsibilities:
- Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
- Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
- Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
- Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes.
- Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
- Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.
- Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes.
- Collaboration: Work closely with other data engineers, analysts, product managers, and other stakeholders to understand data requirements and support various data-driven initiatives.
- Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.
Category-wise Technical Skills:
- PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
- Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
- Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
- Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
- Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
- Scripting and Automation: Strong scripting skills in Linux.
Experience:
- 3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.
- Proven track record of implementing data engineering best practices.
- Experience in data ingestion, transformation, and optimization on the Cloudera Data Platform.
Day-to-Day Activities:
- Design, develop, and maintain ETL pipelines using PySpark on CDP.
- Implement and manage data ingestion processes from various sources.
- Process, cleanse, and transform large datasets using PySpark.
- Conduct performance tuning and optimization of ETL processes.
- Implement data quality checks and validation routines.
- Automate data workflows using orchestration tools.
- Monitor pipeline performance and troubleshoot issues.
- Collaborate with team members to understand data requirements.
- Maintain documentation of data engineering processes and configurations.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- Relevant certifications in PySpark and Cloudera technologies are a plus.
Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent verbal and written communication abilities.
- Ability to work independently and collaboratively in a team environment.
- Attention to detail and commitment to data quality.
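The data quality and validation routines this role describes usually amount to named rules applied row by row, with failure counts feeding alerts or quarantine logic. A plain-Python sketch of the idea (the rules and sample rows are invented; in a real CDP pipeline the predicates would run over PySpark DataFrames instead of dicts):

```python
# Illustrative data-quality check of the kind run inside an ETL pipeline:
# each rule is a (name, predicate) pair applied per row; failures are
# counted so the pipeline can alert on or quarantine bad records.

rows = [
    {"id": 1, "amount": 120.0, "country": "IN"},
    {"id": 2, "amount": -5.0,  "country": "IN"},
    {"id": 3, "amount": 40.0,  "country": None},
]

rules = [
    ("amount_non_negative", lambda r: r["amount"] >= 0),
    ("country_present",     lambda r: r["country"] is not None),
]

def run_checks(rows, rules):
    """Return {rule_name: number of failing rows}."""
    return {name: sum(0 if pred(r) else 1 for r in rows)
            for name, pred in rules}

failures = run_checks(rows, rules)
# one row fails each rule here
```

Keeping rules as data rather than inline conditionals makes it easy to add checks without touching the pipeline code.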

Posted 2 months ago

Apply

5.0 - 10.0 years

18 - 32 Lacs

Hyderabad

Hybrid

Greetings from AstroSoft Technologies. We are back with an exciting job opportunity for AWS Data Engineer professionals. Join our growing team at our Hyderabad office (Hybrid - Gachibowli).
No. of Openings: 10 Positions
Role: AWS Data Engineer
Project Domain: USA Client - BFSI, Fintech
Experience: 5+ Years
Work Location: Hyderabad (Hybrid - Gachibowli)
Job Type: Full-Time
Company: AstroSoft Technologies (https://www.astrosofttech.com/)
Astrosoft is an award-winning company that specializes in the areas of Data, Analytics, Cloud, AI/ML, Innovation, and Digital. We have a customer-first mindset, take extreme ownership in delivering solutions and projects for our customers, and have consistently been recognized by our clients as the premium partner to work with. We bring to bear top-tier talent, a robust and structured project execution framework, and significant experience over the years, and we have an impeccable record in delivering solutions and projects for our clients. Founded in 2004; headquartered in Florida and Texas, USA; Corporate Office: India, Hyderabad.
Benefits from AstroSoft Technologies:
- H1B Sponsorship (depends on project & performance)
- Lunch & Dinner (every day)
- Health Insurance Coverage (Group)
- Industry-standard Leave Policy
- Skill Enhancement Certification
- Hybrid Mode
JOB DETAILS:
Role: Senior AWS Data Engineer
Location: India, Hyderabad, Gachibowli (Vasavi SkyCity)
Job Type: Full Time
Shift Timings: 12.30 PM to 9.30 PM IST
Experience Range: 5+ yrs.
Work Mode: Hybrid (Fri & Mon - WFH)
Interview Process: 3 Tech Rounds
Job Summary:
- Strong experience and understanding of streaming architecture and development practices using Kafka, Kinesis, Spark, Flink, etc.
- Strong AWS development experience using S3, SNS, SQS, MWAA (Airflow), Glue, DMS, and EMR.
- Strong knowledge of one or more programming languages: Python/Java/Scala (ideally Python).
- Experience using Terraform to build IaC components in AWS.
- Strong experience with ETL tools in AWS; ODI experience is a plus.
- Strong experience with database platforms: Oracle, AWS Redshift.
- Strong experience in SQL tuning, tuning ETL solutions, and physical optimization of databases.
- Very familiar with SRE concepts, including evaluating and implementing monitoring and observability tools like Splunk, Datadog, CloudWatch, and other job, log, or dashboard concepts for customer support and application health checks.
- Ability to collaborate with our business partners to understand and implement their requirements.
- Excellent interpersonal skills; able to build consensus across teams.
- Strong critical thinking and ability to think out of the box.
- Self-motivated and able to perform under pressure.
- AWS certified (preferred)
Thanks & Regards,
Karthik Kumar
HR-TAG Lead - India
Astrosoft Technologies, Unit 1810, Level 18, Vasavi Sky City, Gachibowli, Hyderabad, Telangana 500081.
Contact: +91-8712229084
Email: karthik.jangam@astrosofttech.com
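The streaming stack named in the summary above (Kafka, Kinesis, Spark, Flink) ultimately computes windowed aggregates over event streams. A toy tumbling-window count in plain Python shows the concept; the event timestamps and 60-second window are invented, and real engines add watermarks and state handling for late data, which this sketch ignores:

```python
from collections import Counter

# Tumbling-window aggregation: bucket events into fixed, non-overlapping
# windows by integer-dividing the event timestamp by the window length.

WINDOW_SECONDS = 60

events = [  # (epoch_seconds, key) - illustrative data only
    (0, "click"), (10, "click"), (59, "view"),
    (60, "click"), (125, "view"),
]

def tumbling_counts(events, window=WINDOW_SECONDS):
    """Count events per (window_index, key) bucket."""
    counts = Counter()
    for ts, key in events:
        counts[(ts // window, key)] += 1
    return dict(counts)

window_counts = tumbling_counts(events)
# window 0 holds the first three events; windows 1 and 2 hold one each
```

The same `ts // window` bucketing idea underlies `window()` in Spark Structured Streaming and tumbling windows in Flink, just distributed and fault-tolerant.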

Posted 2 months ago

Apply

6.0 - 10.0 years

8 - 12 Lacs

Bengaluru

Work from Office

Notice Period: Immediate - 30 days
Mandatory Skills: Big Data, Python, SQL, Spark/PySpark, AWS Cloud
JD and required Skills & Responsibilities:
- Actively participate in all phases of the software development lifecycle, including requirements gathering, functional and technical design, development, testing, roll-out, and support.
- Solve complex business problems by utilizing a disciplined development methodology.
- Produce scalable, flexible, efficient, and supportable solutions using appropriate technologies.
- Analyse the source and target system data. Map the transformation that meets the requirements.
- Interact with the client and onsite coordinators during different phases of a project.
- Design and implement product features in collaboration with business and technology stakeholders.
- Anticipate, identify, and solve issues concerning data management to improve data quality.
- Clean, prepare, and optimize data at scale for ingestion and consumption.
- Support the implementation of new data management projects and re-structure the current data architecture.
- Implement automated workflows and routines using workflow scheduling tools.
- Understand and use continuous integration, test-driven development, and production deployment frameworks.
- Participate in design, code, test plans, and dataset implementation performed by other data engineers in support of maintaining data engineering standards.
- Analyze and profile data for the purpose of designing scalable solutions.
- Troubleshoot straightforward data issues and perform root cause analysis to proactively resolve product issues.
Required Skills:
- 5+ years of relevant experience developing data and analytic solutions.
- Experience building data lake solutions leveraging one or more of the following: AWS, EMR, S3, Hive & PySpark.
- Experience with relational SQL.
- Experience with scripting languages such as Python.
- Experience with source control tools such as GitHub and related dev processes.
- Experience with workflow scheduling tools such as Airflow.
- In-depth knowledge of AWS Cloud (S3, EMR, Databricks).
- Has a passion for data solutions.
- Has a strong problem-solving and analytical mindset.
- Working experience in the design, development, and testing of data pipelines.
- Experience working with Agile teams.
- Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders.
- Able to quickly pick up new programming languages, technologies, and frameworks.
- Bachelor's degree in computer science.
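The workflow scheduling tools this posting asks for (Airflow, Oozie) all rest on the same idea: tasks form a directed acyclic graph, and each task runs only after its upstream dependencies finish. This is a bare topological-order sketch of that idea using the stdlib, not the Airflow API; the task names are invented:

```python
from graphlib import TopologicalSorter

# task -> set of upstream tasks it depends on (a DAG)
dag = {
    "extract": set(),
    "transform": {"extract"},
    "validate": {"transform"},
    "load": {"validate"},
}

def plan(dag):
    """Return tasks in an order where every dependency precedes its dependent.
    A real scheduler (Airflow) would execute, retry, and log each task here."""
    return list(TopologicalSorter(dag).static_order())

execution_order = plan(dag)
```

Airflow expresses the same structure with operators and `>>` dependencies; the scheduler's job is to walk this ordering while handling retries, backfills, and parallelism.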

Posted 2 months ago

Apply

5.0 - 10.0 years

20 - 27 Lacs

Hyderabad

Work from Office

Position: Experienced Data Engineer
We are seeking a skilled and experienced Data Engineer to join our fast-paced and innovative Data Science team. This role involves building and maintaining data pipelines across multiple cloud-based data platforms.
Requirements: A minimum of 5 years of total experience, with at least 3-4 years specifically in Data Engineering on a cloud platform.
Key Skills & Experience:
- Proficiency with AWS services such as Glue, Redshift, S3, Lambda, RDS, Amazon Aurora, DynamoDB, EMR, Athena, Data Pipeline, Batch Job.
- Strong expertise in: SQL and Python; DBT and Snowflake; OpenSearch, Apache NiFi, and Apache Kafka.
- In-depth knowledge of ETL data patterns and Spark-based ETL pipelines.
- Advanced skills in infrastructure provisioning using Terraform and other Infrastructure-as-Code (IaC) tools.
- Hands-on experience with cloud-native delivery models, including PaaS, IaaS, and SaaS.
- Proficiency in Kubernetes, container orchestration, and CI/CD pipelines.
- Familiarity with GitHub Actions, GitLab, and other leading DevOps and CI/CD solutions.
- Experience with orchestration tools such as Apache Airflow and serverless/FaaS services.
- Exposure to NoSQL databases is a plus.

Posted 2 months ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Bengaluru, Mumbai (All Areas)

Work from Office

Key Responsibilities:
- Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis.
- Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses).
- Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.
- Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.
- Ensure data quality and consistency by implementing validation and governance practices.
- Work on data security best practices in compliance with organizational policies and regulations.
- Automate repetitive data engineering tasks using Python scripts and frameworks.
- Leverage CI/CD pipelines for deployment of data workflows on AWS.
Required Skills and Qualifications:
- Professional Experience: 5+ years of experience in data engineering or a related field.
- Programming: Strong proficiency in Python, with experience in libraries like pandas, PySpark, or boto3.
- AWS Expertise: Hands-on experience with core AWS services for data engineering, such as: AWS Glue for ETL/ELT; S3 for storage; Redshift or Athena for data warehousing and querying; Lambda for serverless compute; Kinesis or SNS/SQS for data streaming; IAM roles for security.
- Databases: Proficiency in SQL and experience with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., DynamoDB) databases.
- Data Processing: Knowledge of big data frameworks (e.g., Hadoop, Spark) is a plus.
- DevOps: Familiarity with CI/CD pipelines and tools like Jenkins, Git, and CodePipeline.
- Version Control: Proficient with Git-based workflows.
- Problem Solving: Excellent analytical and debugging skills.
Optional Skills:
- Knowledge of data modeling and data warehouse design principles.
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Familiarity with containerization (e.g., Docker) and orchestration (e.g., Kubernetes).
- Exposure to other programming languages like Scala or Java.
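The ETL/ELT processes listed above follow one pattern regardless of scale: read records from a source, apply transformations, write to a target. A stdlib-only sketch of that pattern using in-memory CSV buffers in place of S3 and Glue; the column names and the currency-conversion transform are illustrative only:

```python
import csv
import io

# Sketch of extract-transform-load: read CSV rows (extract), convert types
# and apply a business rule (transform), and return the records that a real
# pipeline would write to a warehouse (load).

RAW = "order_id,amount_usd\n1,10.50\n2,3.25\n"

def etl(raw_csv: str, fx_rate: float) -> list:
    rows = csv.DictReader(io.StringIO(raw_csv))          # extract
    out = []
    for r in rows:                                       # transform
        out.append({
            "order_id": int(r["order_id"]),
            "amount_local": round(float(r["amount_usd"]) * fx_rate, 2),
        })
    return out                                           # load step would persist

loaded = etl(RAW, fx_rate=2.0)
```

On AWS the extract step would be an S3 read (boto3 or a Glue DynamicFrame) and the load a Redshift COPY or Parquet write, but the shape of the function is the same.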

Posted 2 months ago

Apply

0.0 - 4.0 years

3 - 4 Lacs

Tirupati

Work from Office

Role & responsibilities:
- Assessing patient conditions and providing the necessary diagnosis
- Managing patient care and conducting follow-up visits as needed
- Creating and updating patient medical records
- Conducting medical research
- Prescribing medication and treatment instructions

Posted 2 months ago

Apply

6.0 - 11.0 years

15 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Warm Greetings from SP Staffing Services Private Limited!
We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.
Relevant Experience: 6 - 15 Yrs
Location: Pan India
Job Description:
- Primarily looking for a data engineer with expertise in processing data pipelines using Databricks Spark SQL on Hadoop distributions like AWS EMR, Databricks, Cloudera, etc.
- Should be very proficient in doing large-scale data operations using Databricks and overall very comfortable using Python.
- Familiarity with AWS compute, storage, and IAM concepts.
- Experience in working with S3 Data Lake as the storage tier.
- Any ETL background (Talend, AWS Glue, etc.) is a plus but not required.
- Cloud warehouse experience (Snowflake, etc.) is a huge plus.
- Carefully evaluates alternative risks and solutions before taking action; optimizes the use of all available resources.
- Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit.
Skills:
- Hands-on experience with Databricks, Spark SQL, and the AWS Cloud platform, especially S3, EMR, Databricks, Cloudera, etc.
- Experience with shell scripting.
- Exceptionally strong analytical and problem-solving skills.
- Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses.
- Strong experience with relational databases and data access methods, especially SQL.
- Excellent collaboration and cross-functional leadership skills.
- Excellent communication skills, both written and verbal.
- Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment.
- Ability to leverage data assets to respond to complex questions that require timely answers.
- Working knowledge of migrating relational and dimensional databases to the AWS Cloud platform.
Interested candidates can share their resume to sankarspstaffings@gmail.com with the below inline details:
Overall Exp:
Relevant Exp:
Current CTC:
Expected CTC:
Notice Period:

Posted 2 months ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Dear Candidate,
We are seeking a Cloud Monitoring Specialist to set up observability and real-time monitoring in cloud environments.
Key Responsibilities:
- Configure logging and metrics collection.
- Set up alerts and dashboards using Grafana, Prometheus, etc.
- Optimize system visibility for performance and security.
Required Skills & Qualifications:
- Familiarity with the ELK stack, Datadog, New Relic, or cloud-native monitoring tools.
- Strong troubleshooting and root cause analysis skills.
- Knowledge of distributed systems.
Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.
Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.
Kandi Srinivasa
Delivery Manager, Integra Technologies
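The alert rules this role would configure (Grafana/Prometheus style) typically fire when a metric exceeds a threshold over a window of recent observations. A toy rolling error-rate monitor in plain Python shows the mechanic; the window size and 10% threshold are invented, and real systems evaluate this over time-series queries rather than in-process counters:

```python
from collections import deque

# Rolling error-rate alert: track the last `window` requests and fire when
# the failure fraction exceeds `threshold`. A sketch of the alerting idea,
# not any specific monitoring tool's API.

class ErrorRateAlert:
    def __init__(self, window: int = 100, threshold: float = 0.10):
        self.samples = deque(maxlen=window)   # 0 = ok, 1 = error
        self.threshold = threshold

    def observe(self, ok: bool) -> bool:
        """Record one request; return True if the alert should fire."""
        self.samples.append(0 if ok else 1)
        return sum(self.samples) / len(self.samples) > self.threshold

alert = ErrorRateAlert(window=10)
fired = [alert.observe(ok) for ok in [True] * 9 + [False, False]]
# stays quiet until the error fraction in the window crosses 10%
```

Prometheus expresses the same idea declaratively, e.g. a rule over `rate(errors_total[5m]) / rate(requests_total[5m])`, with Grafana dashboards visualizing the same ratio.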

Posted 2 months ago

Apply

3.0 - 5.0 years

12 - 14 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Key Responsibilities:
- Design, develop, and maintain data pipelines and ETL workflows on the AWS platform
- Work with AWS services like S3, Glue, Lambda, Redshift, EMR, and Athena for data ingestion, transformation, and analytics
- Collaborate with Data Scientists, Analysts, and Business teams to understand data requirements
- Optimize data workflows for performance, scalability, and reliability
- Troubleshoot data issues, monitor jobs, and ensure data quality and integrity
- Write efficient SQL queries and automate data processing tasks
- Implement data security and compliance best practices
- Maintain technical documentation and data pipeline monitoring dashboards
Required Skills:
- 3 to 5 years of hands-on experience as a Data Engineer on AWS Cloud
- Strong expertise with AWS data services: S3, Glue, Redshift, Athena, EMR, Lambda
- Proficient in SQL, Python, or Scala for data processing and scripting
- Experience with ETL tools and frameworks on AWS
- Understanding of data warehousing concepts and architecture
- Familiarity with CI/CD for data pipelines is a plus
- Strong problem-solving and communication skills
- Ability to work in an Agile environment and handle multiple priorities
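"Write efficient SQL queries and automate data processing tasks" in practice means driving SQL from Python. A small sketch using the stdlib `sqlite3` module standing in for Athena or Redshift (their Python clients differ, but the pattern of parameterized inserts plus an aggregate query is the same); the sales table and figures are invented for illustration:

```python
import sqlite3

# Automating a SQL aggregation from Python: create a table, load rows with
# parameterized statements, then run a GROUP BY and return the result as a
# dict a downstream task could consume.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("south", 100.0), ("south", 50.0), ("north", 70.0)])

def totals_by_region(conn) -> dict:
    q = "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
    return dict(conn.execute(q).fetchall())

result = totals_by_region(conn)
```

Parameterized `executemany` avoids SQL injection and lets the database batch the inserts; the same discipline carries over to Redshift or Athena drivers.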

Posted 2 months ago

Apply

1.0 - 3.0 years

1 - 5 Lacs

Noida, Mohali

Work from Office

Job Title: Talent Acquisition Associate (Locums Recruiter)
Location: Noida and Mohali
Job Description: We are looking for a Medical Scribe based in Noida who is eager to transition into a career in US Healthcare Recruitment. If you're passionate about recruitment and have a keen interest in healthcare, this is the opportunity for you!
What will you be doing day-to-day?
- Generate leads through sourcing initiatives, social media advertising, phone interviews, and a high-volume outbound call strategy
- Search the Applicant Tracking System, social media, and other platforms to find caregivers for our open physician and doctor roles
- Establish relationships with Healthcare Providers and maintain a pipeline of providers to encourage a long-term working relationship
- Negotiate contract terms with candidates
- Partner with Account Managers to ensure candidate viability and arrange client interviews
Benefits:
- 5 Days Working (Fixed Shift)
- Recurring Incentives
- Free Night Meal
- Fast Career Growth
Regards,
Priyanka Verma
Cynet Corp
A: 21000 Atlantic Blvd, #700, Sterling, VA 20166
M: +91-9015097461 | E: priyanka.v@cynetcorp.com

Posted 2 months ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Bengaluru

Work from Office

Duration: 8 Months
Job Type: Contract
Work Type: Onsite
Top responsibilities:
- Manage AWS resources including EC2, RDS, Redshift, Kinesis, EMR, Lambda, Glue, Apache Airflow, etc.
- Build and deliver high-quality data architecture and pipelines to support business analysts, data scientists, and customer reporting needs.
- Interface with other technology teams to extract, transform, and load data from a wide variety of data sources.
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
Leadership Principles: Ownership, Customer Obsession, Dive Deep, and Deliver Results
Mandatory requirements:
- 3+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with SQL & SQL tuning
- Basic to mid-level proficiency in scripting with Python
Education or Certification Requirements: Any Graduation

Posted 2 months ago

Apply

6.0 - 11.0 years

15 - 30 Lacs

Bengaluru

Work from Office

Interested candidates can share their updated CV at: heena.ruchwani@gspann.com
Join GSPANN Technologies as a Senior AWS Data Engineer and play a critical role in designing, building, and optimizing scalable data pipelines in the cloud. We're looking for an experienced engineer who can turn complex data into actionable insights using the AWS ecosystem.
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines on AWS.
- Work with large datasets to perform ETL/ELT transformations using tools like AWS Glue, EMR, and Lambda.
- Optimize and monitor data workflows, ensuring reliability and performance.
- Collaborate with data analysts, architects, and other engineers to build data solutions that support business needs.
- Implement and manage data lakes, data warehouses, and streaming architectures.
- Ensure data quality, governance, and security standards are met across platforms.
- Participate in code reviews, documentation, and mentoring of junior data engineers.
Required Skills & Qualifications:
- 5+ years of experience in data engineering, with strong hands-on work in the AWS cloud ecosystem.
- Proficiency in Python, PySpark, and SQL.
- Strong experience with AWS services: AWS Glue, Lambda, EMR, S3, Athena, Redshift, Kinesis, etc.
- Expertise in data pipeline development and workflow orchestration (e.g., Airflow, Step Functions).
- Solid understanding of data warehousing and data lake architecture.
- Experience with CI/CD, version control (GitHub), and DevOps practices for data environments.
- Familiarity with Snowflake, Databricks, or Looker is a plus.
- Excellent communication and problem-solving skills.

Posted 2 months ago

Apply

8.0 - 10.0 years

8 - 12 Lacs

Pune

Work from Office

Role Purpose: The role incumbent is focused on implementation of roadmaps for business process analysis, data analysis, diagnosis of gaps, business requirements & functional definitions, best-practices application, and meeting facilitation, and contributes to project planning. Consultants are expected to contribute to solution building for the client and practice. The role holder can handle higher scale and complexity compared to a Consultant profile and is more proactive in client interactions.
Do:
- Assumes responsibility as the main client contact, leading the engagement with 10-20% support from Consulting & Client Partners.
- Develops, assesses, and validates a client's business strategy, including industry and competitive positioning and strategic direction.
- Develops solutions and services to suit the client's business strategy.
- Estimates scope and liability for delivery of the end product/solution.
- Seeks opportunities to develop revenue in existing and new areas.
- Leads an engagement and oversees others' contributions at a customer end, such that customer expectations are met or exceeded.
- Drives proposal creation and presales activities for the engagement and new accounts.
- Contributes towards the development of practice policies, procedures, frameworks, etc.
- Guides less experienced team members in delivering solutions.
- Leads efforts towards building go-to-market / off-the-shelf / point solutions and process methodologies for reuse.
- Creates reusable IP from managed projects.
Mandatory Skills: Telecom BSS NextGen Ops
Experience: 8-10 Years

Posted 2 months ago

Apply

4.0 - 6.0 years

2 - 6 Lacs

Hyderabad, Pune, Gurugram

Work from Office

Job Title: Sr AWS Data Engineer
Experience: 4-6 Years
Location: Pune, Hyderabad, Gurgaon, Bangalore [Hybrid]
Skills: PySpark, Python, SQL, AWS Services - S3, Athena, Glue, EMR/Spark, Redshift, Lambda, Step Functions, IAM, CloudWatch.

Posted 2 months ago

Apply

5.0 - 7.0 years

2 - 5 Lacs

Pune

Work from Office

Job Title: Data Engineer. Experience: 5-7 years. Location: Pune.

Roles & Responsibilities:
- Create and maintain optimal data pipeline architecture.
- Build data pipelines that transform raw, unstructured data into formats that data analysts can use for analysis.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and delivery of data from a wide variety of data sources using SQL and AWS 'Big Data' technologies.
- Work with stakeholders including the Executive, Product, and program teams to assist with data-related technical issues and support their data infrastructure needs.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Develop and maintain scalable data pipelines and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using HQL and 'Big Data' technologies.
- Implement processes and systems to validate data and monitor data quality, ensuring production data is always accurate and available for the key stakeholders and business processes that depend on it.
- Write unit/integration tests, contribute to the engineering wiki, and document work.
- Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.

Who You Are:
- You're passionate about data and building efficient data pipelines.
- You have excellent listening skills and are empathetic to others.
- You believe in simple and elegant solutions and give paramount importance to quality.
- You have a track record of building fast, reliable, and high-quality data pipelines.
- You are passionate, with a good understanding of data, and focus on having fun while delivering incredible business results.

Must-have skills:
- A Data Engineer with 5+ years of relevant experience who is excited to apply their current skills and grow their knowledge base.
- A degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
- Experience with big data tools: Hadoop, Spark, Kafka, Hive, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with data pipeline and workflow management tools.
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
- Experience with object-oriented/object-function scripting languages: Python, Java, Scala, etc.
- Experience with Airflow/Oozie.
- Experience in AWS/Spark/Python development.
- Experience in Git, JIRA, Jenkins, and shell scripting.
- Familiarity with Agile methodology, test-driven development, source control management, and automated testing.
- Build processes supporting data transformation, data structures, metadata, dependencies, and workload management.
- Experience supporting and working with cross-functional teams in a dynamic environment.

Nice-to-have skills:
- Experience with stream-processing systems (Storm, Spark Streaming, etc.) is a plus.
- Experience with Snowflake.
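The "validate data, monitor data quality" responsibility above often starts life as a simple row-level gate before data lands in production tables. A minimal pure-Python sketch (all field names are hypothetical; at scale this is done in Spark, dbt tests, or a dedicated data-quality tool):

```python
def validate_rows(rows, required, unique_key):
    """Row-level data-quality gate: keep rows that have all
    required fields and a unique key; count what was rejected.

    Returns (passing_rows, error_summary)."""
    seen, good = set(), []
    errors = {"missing_field": 0, "duplicate_key": 0}
    for row in rows:
        if any(row.get(f) in (None, "") for f in required):
            errors["missing_field"] += 1
            continue
        key = row[unique_key]
        if key in seen:
            errors["duplicate_key"] += 1
            continue
        seen.add(key)
        good.append(row)
    return good, errors
```

The error summary is the piece that feeds monitoring: emitted as a metric per pipeline run, it lets an on-call engineer see quality regressions without inspecting the data itself.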

Posted 2 months ago

Apply

6.0 - 9.0 years

10 - 20 Lacs

Hyderabad

Work from Office

This requirement is to source profiles with 6-9 years of overall experience, including a minimum of 4 years in data engineering. Look for combinations of Informatica IICS, Python, PySpark, SQL, Step Functions, Lambda, and EMR at a high level (if not Informatica, profiles with Talend can be submitted). Location: Hyderabad.

Key responsibilities and accountabilities:
- Design, build, and maintain complex ELT/ETL jobs that deliver business value.
- Extract, transform, and load data from various sources including databases, APIs, and flat files using IICS or Python/SQL.
- Translate high-level business requirements into technical specs.
- Conduct unit testing, integration testing, and system testing of data integration solutions to ensure accuracy and quality.
- Ingest data from disparate sources into the data lake and data warehouse.
- Cleanse and enrich data and apply adequate data quality controls.
- Provide technical expertise and guidance to team members on Informatica IICS/IDMC and data engineering best practices to guide the future development of MassMutual's Data Platform.
- Develop re-usable tools to help streamline the delivery of new projects.
- Collaborate closely with other developers and provide mentorship.
- Evaluate and recommend tools, technologies, processes, and reference architectures.
- Work in an Agile development environment, attending daily stand-up meetings and delivering incremental improvements.
- Participate in code reviews, ensure all solutions are aligned to architectural and requirement specifications, and provide feedback on code quality, design, and performance.

Knowledge, skills and abilities: Please refer to 'Education and Experience'.

Education and experience:
- Bachelor's degree in computer science, engineering, or a related field; master's degree preferred.
- Data: 5+ years of experience with data analytics and data warehousing; sound knowledge of data warehousing concepts.
- SQL: 5+ years of hands-on experience with SQL and query optimization for data pipelines.
- ELT/ETL: 5+ years of experience in Informatica and 3+ years of experience in IICS/IDMC.
- Migration: experience with Informatica on-prem to IICS/IDMC migration.
- Cloud: 5+ years of experience working in an AWS cloud environment.
- Python: 5+ years of hands-on development experience with Python.
- Workflow: 4+ years of experience with orchestration and scheduling tools (e.g., Apache Airflow).
- Advanced data processing: experience using data processing technologies such as Apache Spark or Kafka.
- Troubleshooting: experience with troubleshooting and root cause analysis to determine and remediate potential issues.
- Communication: excellent communication, problem-solving, organizational, and analytical skills; able to work independently and to provide leadership to small teams of developers.
- Reporting: experience with data reporting (e.g., MicroStrategy, Tableau, Looker) and data cataloging tools (e.g., Alation).
- Experience in the design and implementation of ETL solutions with effective design and optimized performance, and ETL development following industry-standard recommendations for job recovery, failover, logging, and alerting mechanisms.

Application Requirements: No special requirements.
Support Hours: India GCC - US (EST) hours overlap for 2-3 hours.

Posted 2 months ago

Apply

10.0 - 15.0 years

12 - 17 Lacs

Bengaluru

Work from Office

Grade: 7

Purpose of your role: This role sits within the ISS Data Platform Team. The Data Platform team is responsible for building and maintaining the platform that enables the ISS business to operate. This role is appropriate for a Lead Data Engineer capable of taking ownership of, and delivering, a subsection of the wider data platform.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and architectures to support data ingestion, integration, and analytics.
- Be accountable for technical delivery and take ownership of solutions.
- Lead a team of senior and junior developers, providing mentorship and guidance.
- Collaborate with enterprise architects, business analysts, and stakeholders to understand data requirements, validate designs, and communicate progress.
- Drive technical innovation within the department to increase code reusability, code quality, and developer productivity.
- Challenge the status quo by bringing the very latest data engineering practices and techniques.

Essential Skills and Experience

Core Technical Skills:
- Expert in leveraging cloud-based data platform capabilities (Snowflake, Databricks) to create an enterprise lakehouse.
- Advanced expertise with the AWS ecosystem and experience using a variety of core AWS data services such as Lambda, EMR, MSK, Glue, and S3.
- Experience designing event-based or streaming data architectures using Kafka.
- Advanced expertise in Python and SQL. Open to expertise in Java/Scala, but enterprise experience with Python is required.
- Expert in designing, building, and using CI/CD pipelines to deploy infrastructure (Terraform) and pipelines with test automation.
- Data security & performance optimization: experience implementing data access controls to meet regulatory requirements.
- Experience using both RDBMS (Oracle, Postgres, MSSQL) and NoSQL (Dynamo, OpenSearch, Redis) offerings.
- Experience implementing CDC ingestion.
- Experience using orchestration tools (Airflow, Control-M, etc.).

Bonus Technical Skills:
- Strong experience in containerisation and deploying applications to Kubernetes.
- Strong experience in API development using Python-based frameworks like FastAPI.

Key Soft Skills:
- Problem-solving: leadership experience in problem-solving and technical decision-making.
- Communication: strong in strategic communication and stakeholder engagement.
- Project management: experienced in overseeing project lifecycles, working with Project Managers to manage resources.
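CDC (change data capture) ingestion, mentioned in the skills above, conceptually reduces to folding a keyed stream of insert/update/delete events into the current state of a table. A toy pure-Python illustration of that merge; the event-tuple shape here is an assumption for the sketch, not Debezium's or any vendor's actual wire format:

```python
def apply_cdc(snapshot: dict, events):
    """Fold a CDC event stream into a keyed snapshot.

    Each event is (op, key, payload) where op is one of
    'insert', 'update', 'delete'. Inserts and updates upsert
    the payload; deletes drop the key. Returns the snapshot."""
    for op, key, payload in events:
        if op in ("insert", "update"):
            snapshot[key] = payload          # upsert latest image
        elif op == "delete":
            snapshot.pop(key, None)          # tolerate double-deletes
    return snapshot
```

Real CDC pipelines (Kafka + a lake table format such as Iceberg or Delta) add ordering guarantees, schema evolution, and late-event handling on top, but the core upsert/delete semantics are the same.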

Posted 2 months ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Dear Candidate,

We are hiring a Cloud Architect to design and oversee scalable, secure, and cost-efficient cloud solutions. Great for architects who bridge technical vision with business needs.

Key Responsibilities:
- Design cloud-native solutions using AWS, Azure, or GCP.
- Lead cloud migration and transformation projects.
- Define cloud governance, cost control, and security strategies.
- Collaborate with DevOps and engineering teams on implementation.

Required Skills & Qualifications:
- Deep expertise in cloud architecture and multi-cloud environments.
- Experience with containers, serverless, and microservices.
- Proficiency in Terraform, CloudFormation, or equivalent.
- Bonus: cloud certification (AWS/Azure/GCP Architect).

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and a preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager
Integra Technologies

Posted 2 months ago

Apply

7.0 - 10.0 years

10 - 15 Lacs

Bengaluru

Work from Office

Role Overview: We are seeking an experienced Data Engineer with 7-10 years of experience to design, develop, and optimize data pipelines while integrating machine learning (ML) capabilities into production workflows. The ideal candidate will have a strong background in data engineering, big data technologies, cloud platforms, and ML model deployment. This role requires expertise in building scalable data architectures, processing large datasets, and supporting machine learning operations (MLOps) to enable data-driven decision-making.

Key Responsibilities

Data Engineering & Pipeline Development:
- Design, develop, and maintain scalable, robust, and efficient data pipelines for batch and real-time data processing.
- Build and optimize ETL/ELT workflows to extract, transform, and load structured and unstructured data from multiple sources.
- Work with distributed data processing frameworks like Apache Spark, Hadoop, or Dask for large-scale data processing.
- Ensure data integrity, quality, and security across the data pipelines.
- Implement data governance, cataloging, and lineage tracking using appropriate tools.

Machine Learning Integration:
- Collaborate with data scientists to deploy, monitor, and optimize ML models in production.
- Design and implement feature engineering pipelines to improve model performance.
- Build and maintain MLOps workflows, including model versioning, retraining, and performance tracking.
- Optimize ML model inference for low-latency and high-throughput applications.
- Work with ML frameworks such as TensorFlow, PyTorch, and Scikit-learn, and deployment tools like Kubeflow, MLflow, or SageMaker.

Cloud & Big Data Technologies:
- Architect and manage cloud-based data solutions using AWS, Azure, or GCP.
- Utilize serverless computing (AWS Lambda, Azure Functions) and containerization (Docker, Kubernetes) for scalable deployment.
- Work with data lakehouses (Delta Lake, Iceberg, Hudi) for efficient storage and retrieval.

Database & Storage Management:
- Design and optimize relational (PostgreSQL, MySQL, SQL Server) and NoSQL (MongoDB, Cassandra, DynamoDB) databases.
- Manage and optimize data warehouses (Snowflake, BigQuery, Redshift, Databricks) for analytical workloads.
- Implement data partitioning, indexing, and query optimizations for performance improvements.

Collaboration & Best Practices:
- Work closely with data scientists, software engineers, and DevOps teams to develop scalable and reusable data solutions.
- Implement CI/CD pipelines for automated testing, deployment, and monitoring of data workflows.
- Follow best practices in software engineering, data modeling, and documentation.
- Continuously improve the data infrastructure by researching and adopting new technologies.

Required Skills & Qualifications

Technical Skills:
- Programming languages: Python, SQL, Scala, Java
- Big data technologies: Apache Spark, Hadoop, Dask, Kafka
- Cloud platforms: AWS (Glue, S3, EMR, Lambda), Azure (Data Factory, Synapse), GCP (BigQuery, Dataflow)
- Data warehousing: Snowflake, Redshift, BigQuery, Databricks
- Databases: PostgreSQL, MySQL, MongoDB, Cassandra
- ETL/ELT tools: Airflow, dbt, Talend, Informatica
- Machine learning tools: MLflow, Kubeflow, TensorFlow, PyTorch, Scikit-learn
- MLOps & model deployment: Docker, Kubernetes, SageMaker, Vertex AI
- DevOps & CI/CD: Git, Jenkins, Terraform, CloudFormation

Soft Skills:
- Strong analytical and problem-solving abilities.
- Excellent collaboration and communication skills.
- Ability to work in an agile and cross-functional team environment.
- Strong documentation and technical writing skills.

Preferred Qualifications:
- Experience with real-time streaming solutions like Apache Flink or Spark Streaming.
- Hands-on experience with vector databases and embeddings for ML-powered applications.
- Knowledge of data security, privacy, and compliance frameworks (GDPR, HIPAA).
- Experience with GraphQL and REST API development for data services.
- Understanding of LLMs and AI-driven data analytics.
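The "feature engineering pipelines" responsibility above typically includes column-level transforms applied identically at training and inference time. A minimal stdlib sketch of one such transform, z-score standardization; a production pipeline would do this in Spark or pandas and persist the fitted mean/deviation with the model:

```python
from statistics import mean, pstdev

def zscore_features(values):
    """Standardize a numeric feature column to zero mean and
    unit variance (z-score). Degenerate columns (constant
    values) map to all zeros rather than dividing by zero."""
    mu, sigma = mean(values), pstdev(values)
    if sigma == 0:
        return [0.0 for _ in values]
    return [(v - mu) / sigma for v in values]
```

The key MLOps point this illustrates: the statistics (`mu`, `sigma`) are fit on training data and must be reused verbatim at inference, which is why feature pipelines are versioned alongside the model.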

Posted 2 months ago

Apply

6.0 - 10.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Job Role: Big Data Engineer. Work Location: Bangalore (CV Raman Nagar). Experience: 7+ years. Notice Period: Immediate - 30 days. Mandatory Skills: Big Data, Python, SQL, Spark/PySpark, AWS Cloud.

JD and required skills & responsibilities:
- Actively participate in all phases of the software development lifecycle, including requirements gathering, functional and technical design, development, testing, roll-out, and support.
- Solve complex business problems by utilizing a disciplined development methodology.
- Produce scalable, flexible, efficient, and supportable solutions using appropriate technologies.
- Analyse the source and target system data. Map the transformation that meets the requirements.
- Interact with the client and onsite coordinators during different phases of a project.
- Design and implement product features in collaboration with business and technology stakeholders.
- Anticipate, identify, and solve issues concerning data management to improve data quality.
- Clean, prepare, and optimize data at scale for ingestion and consumption.
- Support the implementation of new data management projects and re-structure the current data architecture.
- Implement automated workflows and routines using workflow scheduling tools.
- Understand and use continuous integration, test-driven development, and production deployment frameworks.
- Participate in design, code, test plans, and dataset implementation performed by other data engineers in support of maintaining data engineering standards.
- Analyze and profile data for the purpose of designing scalable solutions.
- Troubleshoot straightforward data issues and perform root cause analysis to proactively resolve product issues.

Required Skills:
- 5+ years of relevant experience developing data and analytics solutions.
- Experience building data lake solutions leveraging one or more of the following: AWS, EMR, S3, Hive, and PySpark.
- Experience with relational SQL.
- Experience with scripting languages such as Python.
- Experience with source control tools such as GitHub and the related dev process.
- Experience with workflow scheduling tools such as Airflow.
- In-depth knowledge of AWS Cloud (S3, EMR, Databricks).
- Has a passion for data solutions.
- Has a strong problem-solving and analytical mindset.
- Working experience in the design, development, and testing of data pipelines.
- Experience working with Agile teams.
- Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders.
- Able to quickly pick up new programming languages, technologies, and frameworks.
- Bachelor's degree in computer science.
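"Automated workflows using workflow scheduling tools" in practice usually means Airflow, and under the hood a scheduler's core job is simply running tasks in dependency order. A minimal sketch of that ordering idea (Kahn's topological sort) with hypothetical task names; real schedulers add retries, backfills, and parallelism on top:

```python
from collections import deque

def run_dag(tasks, deps):
    """Run callables in an order that respects dependencies.

    tasks: {name: zero-arg callable}
    deps:  {name: [names it depends on]}
    Returns the execution order (Kahn's algorithm)."""
    indeg = {t: 0 for t in tasks}
    children = {t: [] for t in tasks}
    for child, parents in deps.items():
        for p in parents:
            indeg[child] += 1
            children[p].append(child)
    ready = deque(t for t, d in indeg.items() if d == 0)
    order = []
    while ready:
        t = ready.popleft()
        tasks[t]()                     # execute the task body
        order.append(t)
        for c in children[t]:
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    return order
```

An Airflow DAG expresses the same structure declaratively (`extract >> transform >> load`); the sketch just makes the ordering mechanics visible.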

Posted 2 months ago

Apply

1.0 - 5.0 years

3 - 7 Lacs

Noida

Work from Office

Looking for dynamic sales personnel to sell our ERP and EMR solutions.

Requirements:
- 5+ years in sales, especially selling software solutions, preferably ERP/EMR or similar.
- Experience in the manufacturing sector (for ERP) and hospitals/clinics (for EMR) will be given preference.
- Must be ready to travel across NCR and the north-western states.
- Good communication and computer skills.
- Ability to close deals.

Posted 2 months ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Dear Candidate,

Looking for a Cloud Data Engineer to build cloud-based data pipelines and analytics platforms.

Key Responsibilities:
- Develop ETL workflows using cloud data services.
- Manage data storage, lakes, and warehouses.
- Ensure data quality and pipeline reliability.

Required Skills & Qualifications:
- Experience with BigQuery, Redshift, or Azure Synapse.
- Proficiency in SQL, Python, or Spark.
- Familiarity with data lake architecture and batch/streaming.

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and a preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager
Integra Technologies

Posted 2 months ago

Apply

4.0 - 9.0 years

9 - 19 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Greetings from HCL! Currently hiring for "RIS PACS".

JD: Familiarity working with radiologists.
Skills: RIS, VNA, Enterprise Imaging, HL7, DICOM, PACS, Dictation/Speech Recognition software, Image Exchange software, Intelerad PACS, Fuji EIS, PowerScribe 360. Good knowledge of RIS/PACS/DICOM/HL7 standards.
Experience: 3-12 years
Location: Bangalore / Chennai / Noida / Pune / Hyderabad
Notice period: Immediate to 30 days only
CTC: Can be discussed

Interested candidates, please share the below details along with an updated resume:
Candidate Name -
Contact Number -
Email ID -
Total Experience -
Relevant Experience -
Skill -
Current Company -
Preferred Location -
Notice Period -
Current CTC -
Expected CTC -

Interested candidates, please drop a mail to "kushmathattanda.baby@hcltech.com"

Regards,
Kushma
kushmathattanda.baby@hcltech.com

Posted 2 months ago

Apply