
521 EMR Jobs - Page 15

Set up a Job Alert
JobPe aggregates listings so they are easy to find in one place; you submit your application directly on the original job portal.

18.0 - 23.0 years

15 - 20 Lacs

Pune

Work from Office

Project Role: Solution Architect
Project Role Description: Translate client requirements into differentiated, deliverable solutions using in-depth knowledge of a technology, function, or platform. Collaborate with the Sales Pursuit and Delivery Teams to develop a winnable and deliverable solution that underpins the client value proposition and business case.
Must-have skills: Enterprise Architecture Strategy. Good-to-have skills: Enterprise Architecture Framework. Minimum 18 years of experience is required. Educational Qualification: Bachelor's or Master's degree in Computer Science Engineering or a related field.
Summary: As a Solution Architect, you will engage in a dynamic and collaborative environment where you will translate client requirements into innovative and practical solutions. Your typical day will involve working closely with various teams, including Sales Pursuit and Delivery, to ensure that the solutions you develop are not only feasible but also align with the client's business objectives. You will leverage your extensive knowledge of technology and platforms to create value-driven propositions that meet client needs effectively.
Roles & Responsibilities: Expected to be a Subject Matter Expert with deep knowledge and experience. Should have influencing and advisory skills. Engage with multiple teams and be responsible for team decisions. Expected to provide solutions to problems that apply across multiple teams, as well as to business area problems. Facilitate workshops and discussions to gather requirements and ensure alignment among stakeholders. Continuously assess and refine architectural solutions to enhance performance and scalability.
Professional & Technical Skills: Must-have: Proficiency in Enterprise Architecture Strategy. Good-to-have: Experience with an Enterprise Architecture Framework. Strong understanding of system integration and interoperability. Experience in developing architectural blueprints and roadmaps. Ability to analyze and optimize existing systems for improved efficiency.
Additional Information: The candidate should have a minimum of 18 years of experience in Enterprise Architecture Strategy. This position is based at our Pune office. A Bachelor's or Master's degree in Computer Science Engineering or a related field is required.

Posted 1 month ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Hyderabad

Work from Office

One Azure backend expert (strong SC or Specialist Senior). Should have hands-on experience working with ADLS, ADF and Azure SQL DW. Should have a minimum of 3 years' working experience delivering Azure projects.
Must Have: 3 to 8 years of experience designing, developing, and deploying ETL processes on Databricks to support data integration and transformation. Optimize and tune Databricks jobs for performance and scalability. Experience with Scala and/or Python programming languages. Proficiency in SQL for querying and managing data. Expertise in ETL (Extract, Transform, Load) processes. Knowledge of data modeling and data warehousing concepts. Implement best practices for data pipelines, including monitoring, logging, and error handling. Excellent problem-solving skills and attention to detail. Excellent written and verbal communication skills. Strong analytical and problem-solving abilities. Experience with version control systems (e.g., Git) to manage and track changes to the codebase. Document technical designs, processes, and procedures related to Databricks development. Stay current with Databricks platform updates and recommend improvements to existing processes.
Good to Have: Agile delivery experience. Experience with cloud services, particularly Azure (Azure Databricks), AWS (AWS Glue, EMR), or Google Cloud Platform (GCP). Knowledge of Agile and Scrum software development methodologies. Understanding of data lake architectures. Familiarity with tools like Apache NiFi, Talend, or Informatica. Skills in designing and implementing data models.
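For context, a minimal PySpark sketch of the read-transform-load pattern a Databricks ETL job like this typically implements. The paths, column names, and target table are illustrative assumptions, not part of the posting:

```python
# Minimal PySpark ETL sketch for a Databricks-style job.
# Paths, column names, and the target table are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSV landed in the lake (e.g. an ADLS Gen2 mount or abfss:// path)
raw = (spark.read
       .option("header", True)
       .option("inferSchema", True)
       .csv("/mnt/raw/orders/"))

# Transform: deduplicate, drop bad rows, derive business columns
orders = (raw
          .dropDuplicates(["order_id"])
          .filter(F.col("amount").isNotNull())
          .withColumn("order_date", F.to_date("order_ts"))
          .withColumn("amount", F.col("amount").cast("decimal(18,2)")))

# Load: write a partitioned Delta table for downstream consumption
(orders.write
 .format("delta")
 .mode("overwrite")
 .partitionBy("order_date")
 .saveAsTable("analytics.orders_clean"))
```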

Posted 1 month ago

Apply

6.0 - 10.0 years

12 - 20 Lacs

Hyderabad

Hybrid

AWS (EMR, S3, Glue, Airflow, RDS, DynamoDB, or similar); CI/CD (Jenkins or another tool); relational database experience (any); NoSQL database experience (any); microservices, domain services, API gateways or similar; containers (Docker, Kubernetes, or similar)

Posted 1 month ago

Apply

2.0 - 6.0 years

1 - 6 Lacs

Noida, New Delhi, Delhi / NCR

Work from Office

Need minimum 2 years' experience as an AR Caller / Insurance Verification. Undergraduates and graduates can both apply. WFO - one-side drop - Noida. Notice period: 0-15 days acceptable. AR Caller - up to 7 LPA; EV Caller - up to 6.5 LPA. Contact - 9717279212 (Harleen). Required Candidate profile - Skills required: Excellent communication. EV Caller - insurance verification, benefits investigation, etc. AR Caller - AR follow-ups, denials, medical billing, etc. Should be comfortable with a walk-in.

Posted 1 month ago

Apply

4.0 - 8.0 years

12 - 18 Lacs

Hyderabad, Chennai, Coimbatore

Hybrid

We are seeking a skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have experience in designing, developing, and maintaining scalable data pipelines and architectures using Hadoop, PySpark, ETL processes, and cloud technologies. Responsibilities: Design, develop, and maintain data pipelines for processing large-scale datasets. Build efficient ETL workflows to transform and integrate data from multiple sources. Develop and optimize Hadoop and PySpark applications for data processing. Ensure data quality, governance, and security standards are met across systems. Implement and manage cloud-based data solutions (AWS, Azure, or GCP). Collaborate with data scientists and analysts to support business intelligence initiatives. Troubleshoot performance issues and optimize query executions in big data environments. Stay updated with industry trends and advancements in big data and cloud technologies. Required Skills: Strong programming skills in Python, Scala, or Java. Hands-on experience with the Hadoop ecosystem (HDFS, Hive, Spark, etc.). Expertise in PySpark for distributed data processing. Proficiency in ETL tools and workflows (SSIS, Apache NiFi, or custom pipelines). Experience with cloud platforms (AWS, Azure, GCP) and their data-related services. Knowledge of SQL and NoSQL databases. Familiarity with data warehousing concepts and data modeling techniques. Strong analytical and problem-solving skills. Interested candidates can reach us at +91 7305206696 / saranyadevib@talentien.com
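As an illustration of the Hadoop/PySpark enrichment work described above, a hedged sketch that joins raw events with a Hive reference table and writes partitioned Parquet to HDFS; the database, table, and path names are assumptions:

```python
# Illustrative PySpark job: enrich raw JSON events with a Hive dimension and land Parquet on HDFS.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("events_enrichment")
         .enableHiveSupport()
         .getOrCreate())

events = spark.read.json("hdfs:///data/raw/events/")     # semi-structured source (assumed path)
customers = spark.table("ref_db.customers")              # curated Hive dimension (assumed table)

enriched = (events
            .withColumn("event_date", F.to_date("event_ts"))
            .join(customers, on="customer_id", how="left")
            .select("event_id", "customer_id", "segment", "event_type", "event_date"))

# Append partitioned output so downstream Hive/Spark readers can prune by date
(enriched.write
 .mode("append")
 .partitionBy("event_date")
 .parquet("hdfs:///data/curated/events_enriched/"))
```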

Posted 1 month ago

Apply

5.0 - 8.0 years

15 - 27 Lacs

Bengaluru

Work from Office

Strong experience with Python, SQL, PySpark, AWS Glue. Good to have: shell scripting, Kafka. Good knowledge of DevOps pipeline usage (Jenkins, Bitbucket, EKS, Lightspeed). Experience with AWS tools (S3, EC2, Athena, Redshift, Glue, EMR, Lambda, RDS, Kinesis, DynamoDB, QuickSight, etc.). Orchestration using Airflow. Good to have: streaming technologies and processing engines - Kinesis, Kafka, Pub/Sub and Spark Streaming. Good debugging skills. Should have a strong hands-on design and engineering background in AWS across a wide range of AWS services, with the ability to demonstrate working on large engagements. Strong experience with, and implementation of, data lake, data warehousing and data lakehouse architectures. Ensure data accuracy, integrity, privacy, security, and compliance through quality control procedures. Monitor data systems performance and implement optimization strategies. Leverage data controls to maintain data privacy, security, compliance, and quality for allocated areas of ownership. Demonstrable knowledge of applying data engineering best practices (coding practices to DS, unit testing, version control, code review). Experience in the Insurance domain preferred.
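For the Airflow-orchestration part of this role, a minimal DAG sketch: wait for a file to land in S3, then trigger an existing Glue job. The DAG id, bucket, key pattern, and Glue job name are assumptions, and the operators are taken from the apache-airflow-providers-amazon package:

```python
# Hedged sketch of an Airflow DAG orchestrating an AWS Glue ETL job after an S3 landing event.
from datetime import datetime
from airflow import DAG
from airflow.providers.amazon.aws.sensors.s3 import S3KeySensor
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

with DAG(
    dag_id="daily_sales_pipeline",          # assumed name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    wait_for_file = S3KeySensor(
        task_id="wait_for_raw_file",
        bucket_name="raw-landing-bucket",    # assumed bucket
        bucket_key="sales/{{ ds }}/*.csv",
        wildcard_match=True,
    )

    run_glue_etl = GlueJobOperator(
        task_id="run_sales_glue_job",
        job_name="sales_curation_job",       # assumed existing Glue job
        region_name="us-east-1",
    )

    wait_for_file >> run_glue_etl
```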

Posted 1 month ago

Apply

2.0 - 6.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together Primary Responsibilities: Gather and analyze requirements for clinical data conversion projects Collaborate with clients and vendors to define project scope, timelines, and deliverables Prepare and transform clinical data for conversion activities Address and resolve data-related issues reported by clients Develop and maintain documentation and specifications for data conversion processes Monitor project progress and ensure timely completion of milestones Troubleshoot common database issues and provide technical support Ensure compliance with US healthcare regulations and standards Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: Familiarity with US healthcare systems and regulations Knowledge of standard EHR/EMR clinical data workflows Understanding of healthcare clinical dictionaries Proficiency in EHR database architecture and data extraction/transformation using MS SQL Server Solid knowledge of stored procedures, triggers, and functions Proven excellent problem-solving and troubleshooting skills Solid communication and collaboration abilities

Posted 1 month ago

Apply

5.0 - 10.0 years

10 - 15 Lacs

Chennai, Bengaluru

Work from Office

Job requisition ID: JR1027452
Overall Responsibilities: Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy. Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP. Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements. Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes. Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline. Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem. Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes. Collaboration: Work closely with other data engineers, analysts, product managers, and other stakeholders to understand data requirements and support various data-driven initiatives. Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.
Category-wise Technical Skills: PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques. Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase. Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala). Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools. Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks. Scripting and Automation: Strong scripting skills in Linux.
Experience: 5-12 years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform. Proven track record of implementing data engineering best practices. Experience in data ingestion, transformation, and optimization on the Cloudera Data Platform.
Day-to-Day Activities: Design, develop, and maintain ETL pipelines using PySpark on CDP. Implement and manage data ingestion processes from various sources. Process, cleanse, and transform large datasets using PySpark. Conduct performance tuning and optimization of ETL processes. Implement data quality checks and validation routines. Automate data workflows using orchestration tools. Monitor pipeline performance and troubleshoot issues. Collaborate with team members to understand data requirements. Maintain documentation of data engineering processes and configurations.
Qualifications: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field. Relevant certifications in PySpark and Cloudera technologies are a plus.
Soft Skills: Strong analytical and problem-solving skills. Excellent verbal and written communication abilities. Ability to work independently and collaboratively in a team environment. Attention to detail and commitment to data quality.
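For the data-quality and validation routines this posting mentions, a small hedged sketch of a PySpark check that a pipeline might run before loading a batch; the thresholds, key column, and example column names are assumptions:

```python
# Sketch of a lightweight batch data-quality validation routine in PySpark.
from pyspark.sql import DataFrame, functions as F

def validate_batch(df: DataFrame, key_col: str, required_cols: list[str]) -> dict:
    """Return basic quality metrics and fail fast on hard violations."""
    total = df.count()
    metrics = {
        "row_count": total,
        "duplicate_keys": total - df.select(key_col).distinct().count(),
    }
    # Count nulls in each column that must always be populated
    for col in required_cols:
        metrics[f"nulls_{col}"] = df.filter(F.col(col).isNull()).count()

    if total == 0:
        raise ValueError("Empty batch received - aborting load")
    if metrics["duplicate_keys"] > 0:
        raise ValueError(f"{metrics['duplicate_keys']} duplicate keys in {key_col}")
    return metrics

# Example (assumed DataFrame and columns):
# metrics = validate_batch(orders_df, "order_id", ["customer_id", "amount"])
```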

Posted 1 month ago

Apply

8.0 - 13.0 years

18 - 33 Lacs

Bengaluru

Hybrid

Warm greetings from SP Staffing!! Role: AWS Data Engineer. Experience Required: 8 to 15 yrs. Work Location: Bangalore. Required Skills: Technical knowledge of data engineering solutions and practices; implementation of data pipelines using tools like EMR, AWS Glue, AWS Lambda, AWS Step Functions, API Gateway, Athena; proficiency in Python and Spark, with a focus on ETL data processing and data engineering practices. Interested candidates can send resumes to nandhini.spstaffing@gmail.com

Posted 1 month ago

Apply

7.0 - 12.0 years

15 - 27 Lacs

Bengaluru

Hybrid

Labcorp is hiring a Senior Data Engineer. This person will be an integrated member of the Labcorp Data and Analytics team and work within the IT team, playing a crucial role in designing, developing and maintaining data solutions using Databricks, Fabric, Spark, PySpark and Python. Responsible for reviewing business requests and translating them into technical solutions and technical specifications. In addition, work with team members to mentor fellow developers and grow their knowledge and expertise. Work in a fast-paced and high-volume processing environment, where quality and attention to detail are vital.
RESPONSIBILITIES: Design and implement end-to-end data engineering solutions by leveraging the full suite of Databricks and Fabric tools, including data ingestion, transformation, and modeling. Design, develop and maintain end-to-end data pipelines using Spark, ensuring scalability, reliability, and cost-optimized solutions. Conduct performance tuning and troubleshooting to identify and resolve any issues. Implement data governance and security best practices, including role-based access control, encryption, and auditing. Work in a fast-paced environment and perform effectively in an agile development setting.
REQUIREMENTS: 8+ years of experience in designing and implementing data solutions, with at least 4+ years of experience in data engineering. Extensive experience with Databricks and Fabric, including a deep understanding of their architecture, data modeling, and real-time analytics. Minimum 6+ years of experience in Spark, PySpark and Python. Must have strong experience in SQL, Spark SQL, data modeling and RDBMS concepts. Strong knowledge of Data Fabric services, particularly Data Engineering, Data Warehouse, Data Factory, and Real-Time Intelligence. Strong problem-solving skills, with the ability to multi-task. Familiarity with security best practices in cloud environments, Active Directory, encryption, and data privacy compliance. Communicate effectively, both orally and in writing. Experience in Agile development, Scrum and Application Lifecycle Management (ALM). Preference given to current or former Labcorp employees.
EDUCATION: Bachelor's in Engineering or MCA.
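On the performance-tuning responsibility mentioned above, a hedged Spark sketch of two common techniques: broadcasting a small dimension to avoid shuffling the large side of a join, and coalescing output partitions to avoid small-file writes. Table names, the join key, and the partition count are assumptions:

```python
# Illustrative Spark tuning sketch: broadcast join plus output coalesce.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("tuning_example").getOrCreate()

facts = spark.table("lab_results")     # large fact table (assumed)
sites = spark.table("dim_sites")       # small dimension table (assumed)

# Broadcast the small side so each executor joins locally instead of shuffling the fact table
joined = facts.join(F.broadcast(sites), on="site_id", how="left")

# Coalesce to a sensible partition count before writing to avoid many tiny files
(joined
 .coalesce(64)
 .write
 .mode("overwrite")
 .parquet("/mnt/curated/lab_results_by_site/"))
```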

Posted 1 month ago

Apply

5.0 - 8.0 years

10 - 17 Lacs

Pune

Remote

We're Hiring! | Senior Data Engineer (Remote). Location: Remote | Shift: US CST Time | Department: Data Engineering. Are you a data powerhouse who thrives on solving complex data challenges? Do you love working with Python, AWS, and cutting-edge data tools? If yes, Atidiv wants YOU! We're looking for a Senior Data Engineer to build and scale data pipelines, transform how we manage data lakes and warehouses, and power real-time data experiences across our products.
What You'll Do: Architect and develop robust, scalable data pipelines using Python & PySpark. Drive real-time & batch data ingestion from diverse data sources. Build and manage data lakes and data warehouses using AWS (S3, Glue, Redshift, EMR, Lambda, Kinesis). Write high-performance SQL queries and optimize ETL/ELT jobs. Collaborate with data scientists, analysts, and engineers to ensure high data quality and availability. Implement monitoring, logging & alerting for workflows. Ensure top-tier data security, compliance & governance.
What We're Looking For: 5+ years of hands-on experience in Data Engineering. Strong skills in Python, dbt, SQL, and working with Snowflake. Proven experience with Airflow, Kafka/Kinesis, and the AWS ecosystem. Deep understanding of CI/CD practices. Passion for clean code, automation, and scalable systems.
Why Join Atidiv? 100% Remote | Flexible Work Culture. Opportunity to work with cutting-edge technologies. Collaborative, supportive team that values innovation and ownership. Work on high-impact, global projects. Ready to transform data into impact? Send your resume to: nitish.pati@atidiv.com
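For the real-time ingestion side of this role, a minimal boto3 sketch of publishing JSON events to a Kinesis stream that downstream jobs would consume. The stream name, region, and event shape are illustrative assumptions:

```python
# Hedged sketch: push a JSON event onto a Kinesis stream with boto3.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def publish_event(event: dict, stream_name: str = "clickstream-events") -> str:
    """Send one event to Kinesis, partitioned by user_id for ordered per-user delivery."""
    response = kinesis.put_record(
        StreamName=stream_name,
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=str(event["user_id"]),
    )
    return response["SequenceNumber"]

# Example usage (assumed event fields):
publish_event({"user_id": 42, "action": "add_to_cart", "sku": "ABC-123"})
```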

Posted 1 month ago

Apply

4.0 - 9.0 years

3 - 7 Lacs

Navi Mumbai

Work from Office

Our corporate activities are growing rapidly, and we are currently seeking a full-time, office-based PACS Admin to join our Imaging team in Mumbai. This position will work on a team to accomplish tasks and projects that are instrumental to the company's success. If you want an exciting career where you use your previous expertise and can develop and grow your career even further, then this is the opportunity for you.
Overview: Medpace is a full-service clinical contract research organization (CRO). We provide Phase I-IV clinical development services to the biotechnology, pharmaceutical and medical device industries. Our mission is to accelerate the global development of safe and effective medical therapeutics through a scientific and disciplined approach. We leverage local regulatory and therapeutic expertise across all major areas including oncology, cardiology, metabolic disease, endocrinology, central nervous system, anti-viral and anti-infective. Headquartered in Cincinnati, Ohio, Medpace employs more than 5,000 people across 40+ countries.
Responsibilities: Serve as the project lead for implementing imaging and ECG applications. Administer and support PACS functionalities including image workflow management, image data archiving, DICOM communication and other related PACS activities. Ensure medical imaging system design, interface functionality, and clinical processes are coordinated and functioning effectively. Perform medical imaging and ECG commercial off-the-shelf application maintenance and testing. Deliver hands-on training for medical imaging applications to internal and external users. Participate in the testing and implementation of clinical applications where medical imaging applications integrate with those clinical applications. Work with end users (internal and external) as a subject matter expert on medical imaging applications to ensure users can access workstations and images. Promote medical imaging application security and confidentiality and help ensure compliance. Coordinate with Medpace IT for any system requirement, security and maintenance as needed. Provide DICOM standard guidelines and de-identification best practices to the operations and system development teams.
Qualifications: Bachelor's degree in information technology or equivalent, and 4+ years of related experience (healthcare IT is a plus). Basic knowledge of the DICOM standard and DICOM communication. Competent in installation and troubleshooting of software. Capable and willing to continuously and rapidly self-learn new technology.
People. Purpose. Passion. Make a Difference Tomorrow. Join Us Today. The work we've done over the past 30+ years has positively impacted the lives of countless patients and families who face hundreds of diseases across all key therapeutic areas. The work we do today will improve the lives of people living with illness and disease in the future.
Medpace Perks: Flexible work environment; competitive compensation and benefits package; competitive PTO packages; structured career paths with opportunities for professional growth; company-sponsored employee appreciation events; employee health and wellness initiatives.
Awards: Recognized by Forbes as one of America's Most Successful Midsize Companies in 2021, 2022, 2023 and 2024. Continually recognized with CRO Leadership Awards from Life Science Leader magazine based on expertise, quality, capabilities, reliability, and compatibility.
What to Expect Next: A Medpace team member will review your qualifications and, if interested, you will be contacted with details for next steps.
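In the spirit of the DICOM de-identification guidance mentioned in this posting, a small hedged sketch using the pydicom library. The tag list below is a minimal illustration, not a complete de-identification profile, and the file paths are assumptions:

```python
# Illustrative sketch of basic DICOM de-identification with pydicom.
import pydicom

# Minimal set of direct-identifier keywords to blank (illustrative, not exhaustive)
PHI_TAGS = ["PatientName", "PatientID", "PatientBirthDate", "InstitutionName"]

def deidentify(path_in: str, path_out: str) -> None:
    ds = pydicom.dcmread(path_in)
    for tag in PHI_TAGS:
        if tag in ds:
            ds.data_element(tag).value = ""   # blank out direct identifiers
    ds.remove_private_tags()                  # strip vendor private tags
    ds.save_as(path_out)

# Example (assumed paths):
# deidentify("study/IMG0001.dcm", "study_deid/IMG0001.dcm")
```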

Posted 1 month ago

Apply

5.0 - 8.0 years

6 - 11 Lacs

Mumbai, Hyderabad

Work from Office

About the Role: Grade Level (for internal use): 10. The Role: Senior Software Developer. The Team: You will be part of a global technology team and will be responsible for analysis, design and development of the Multi Asset solution. The Impact: You will be working on one of the core technology platforms responsible for the intraday and end-of-day calculation as well as dissemination of index values. What's in it for you: You will have the opportunity to work on enhancements to the existing index calculation system as well as implement new methodologies as required.
Responsibilities: Design and development of applications for S&P Multi Asset indices. Participate in multiple software development processes including development, testing, debugging, documentation and support. Develop software applications based on iterative business specifications. Work on new initiatives and support existing index applications. Perform application and system performance tuning. Build applications with object-oriented concepts and apply design patterns. Integrate in-house applications with various vendor software platforms. Check application code changes into the source repository. Perform unit testing of application code and fix errors. Interface with databases to extract information and build reports. Effectively interact with customers, business users and IT staff.
What we're looking for - Basic Qualifications: Bachelor's degree in Computer Science, Information Systems or Engineering is required, or, in lieu, a demonstrated equivalence in work experience. 5 to 8 years of IT experience in application development and support. Experience with user interface design and development using Angular (preferably Angular 18), HTML5 and CSS. Experience in REST services. Exposure to Java, J2EE, JMS. Experience with the Spring framework. Experience in ActiveMQ or another related messaging provider. Experience in Apache Spark or EMR. Experience in an Oracle Database environment with SQL and PL/SQL programming. Good to have: experience in Python. Experience with the UNIX/Linux operating system and good knowledge of basic commands. Understanding of cloud providers AWS, Azure. Experience using system tools, source control systems like Git/SVN, utilities and third-party products. Experience working with large datasets in Equity, Commodities, Forex, Futures and Options asset classes. Experience with Index/Benchmarks or Asset Management or trading platforms is a plus. Excellent communication and interpersonal skills are essential, with strong verbal and writing proficiencies.
About S&P Dow Jones Indices: At S&P Dow Jones Indices, we provide iconic and innovative index solutions backed by unparalleled expertise across the asset-class spectrum. By bringing transparency to the global capital markets, we empower investors everywhere to make decisions with conviction. We're the largest global resource for index-based concepts, data and research, and home to iconic financial market indicators, such as the S&P 500 and the Dow Jones Industrial Average. More assets are invested in products based upon our indices than any other index provider in the world. With over USD 7.4 trillion in passively managed assets linked to our indices and over USD 11.3 trillion benchmarked to our indices, our solutions are widely considered indispensable in tracking market performance, evaluating portfolios and developing investment strategies. S&P Dow Jones Indices is a division of S&P Global (NYSE: SPGI).
S&P Global is the world's foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. With every one of our offerings, we help many of the world's leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit www.spglobal.com/spdji.
What's In It For You - Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.
Our People, Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.
Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Health & Wellness: health care coverage designed for the mind and body. Continuous Learning: access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: it's not just about you; S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: from retail discounts to referral incentive awards, small perks can make a big difference. For more information on benefits by country visit https://spgbenefits.com/benefit-summaries

Posted 1 month ago

Apply

8.0 - 10.0 years

4 - 8 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Req ID: 312221. We are currently seeking an nCino BA to join our team in Noida, Uttar Pradesh (IN-UP), India (IN).
Summary of role: We are seeking a knowledgeable and experienced nCino Business Consultant/SME to join our team. In this role, you will play a main part in implementing nCino's banking software solutions for one of our clients, with a focus on the Origination space. The ideal candidate will have a strong background in financial services and good expertise in leveraging nCino's platform to streamline processes and optimize operations for our client.
We're looking for someone who will: Be the primary point of contact for various stakeholders, including but not limited to business stakeholders, delivery squads, and design and development teams. Collaborate with business stakeholders to understand their business requirements and objectives. Ensure that the requirements are clearly defined, well understood, properly documented in the form of epics/user stories and signed off by all relevant stakeholders. Provide business/functional expertise in nCino functionality with reference to the client requirements, with a primary focus on the loan origination space. Collaborate with internal teams (e.g., developers, other delivery squads, Product Owner(s)) to ensure successful implementation and integration of nCino with other systems. Support design/development teams in configuring and customizing nCino's platform to meet client needs and enhance user experience. Prepare training materials/operating manuals for business users on the overall business flow in the new system/architecture where the functionalities are developed/delivered. Support SMEs/Business Analysts of other delivery squads by providing timely inputs on nCino deliverables/functionalities whenever these are foreseen to have an impact on other system functionalities/deliverables.
You will need to show us that: You demonstrate business/functional expertise in the corporate lending space. You have worked on the nCino platform, delivering major functionalities in the lending domain with major contributions in the Origination space. You will use your experience to make informed decisions, prioritize requirements and validate them with business stakeholders. You can take the right decisions on which requirements to include in each sprint or release, based on the value they will deliver to business needs. You can work closely with the Product Owner, design team and the delivery squad to resolve design challenges and provide workaround solutions wherever needed. You are a professional with strong banking domain expertise who will connect easily with the business needs/goals and support delivery of the same on the nCino platform.
Experience and Skills required: 8-10 years of overall experience with a minimum of 3+ years as SME/BA in nCino implementations. Well versed with Agile methodologies and ceremonies. Good experience in managing business stakeholders and their requirements. Support design/delivery teams from requirement finalisation till delivery of the solution. Track record of delivering complex requirements in projects involving nCino. Exceptional communication skills (both verbal and written). Prior banking experience in the Ireland/UK markets will be an added advantage. Location - Bengaluru, Chennai, Hyderabad, Mumbai, Noida

Posted 1 month ago

Apply

2.0 - 6.0 years

11 - 16 Lacs

Bengaluru

Work from Office

Req ID: 324087. We are currently seeking a Digital Solution Arch. Strategic Advisor to join our team in Bengaluru, Karnataka (IN-KA), India (IN). Mandatory Qualifications: Deep domain expertise in Healthcare (Providers, Payers, PBMs, HealthTech). Strong exposure to HL7, FHIR, EHR/EMR platforms (Epic, Cerner, Meditech), payer platforms, CRM and engagement platforms. Application modernization experience (legacy to cloud-native, platform reengineering). Presales leadership in solutioning multi-million dollar healthcare technology deals. Excellent communication and executive presentation skills. Preferred Qualifications: Certifications like AWS Certified Solutions Architect - Professional, Azure Solutions Architect, or Healthcare IT certifications. Knowledge of Value-Based Care models, Healthcare AI/ML applications, and Health Information Exchanges (HIEs).

Posted 1 month ago

Apply

2.0 - 5.0 years

4 - 8 Lacs

Gurugram

Work from Office

We are looking for a dynamic and self-motivated professional with hands-on experience in EMR and clinical implementations across multispecialty hospitals. The role involves end-to-end project execution, from requirement gathering to user training, UAT, go-live, and post-implementation support. The ideal candidate should be capable of working independently, managing multiple priorities in a fast-paced setting, and be open to extensive travel. Role & Responsibilities: Hands-on EMR and clinical implementations in multispecialty hospitals. Drive implementation of the product in hospitals, starting from requirement gathering to User Acceptance Testing, go-live, and post-go-live support. Provide user training and support hospitals to ensure sign-off at each milestone as per the project plan. Ability to work independently and balance multiple priorities in a fast-paced environment. Should be open to extensive travel. Experience - 2 to 5 years. Education - BDS/BHMS/MBBS/Postgraduate in Hospital Management. Location - Gurgaon

Posted 1 month ago

Apply

3.0 - 6.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Skills: Microsoft Azure, Hadoop, Spark, Databricks, Airflow, Kafka, PySpark.
Requirements: Experience working with distributed technology tools for developing batch and streaming pipelines using SQL, Spark, Python, Airflow, Scala, Kafka. Experience in cloud computing, e.g., AWS, GCP, Azure, etc. Able to quickly pick up new programming languages, technologies, and frameworks. Strong skills building positive relationships across Product and Engineering. Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders. Experience with creating/configuring Jenkins pipelines for a smooth CI/CD process for managed Spark jobs, building Docker images, etc. Working knowledge of data warehousing, data modelling, governance and data architecture. Experience working with data platforms, including EMR, Airflow, Databricks (Data Engineering and Delta Lake components). Experience working in Agile and Scrum development processes. Experience in EMR/EC2, Databricks etc. Experience working with data warehousing tools, including SQL databases, Presto, and Snowflake. Experience architecting data products in streaming, serverless and microservices architectures and platforms.
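To illustrate the streaming-pipeline skill set listed above, a hedged Spark Structured Streaming sketch that reads from Kafka and lands curated Parquet on S3. Broker addresses, the topic, the schema, and the checkpoint path are assumptions, and the job would need the spark-sql-kafka connector on its classpath:

```python
# Sketch of a Spark Structured Streaming pipeline: Kafka in, partition-friendly Parquet out.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka_stream").getOrCreate()

schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("status", StringType()),
])

stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")   # assumed brokers
          .option("subscribe", "orders")                       # assumed topic
          .option("startingOffsets", "latest")
          .load())

# Kafka delivers bytes; parse the JSON payload into typed columns
orders = (stream
          .select(F.from_json(F.col("value").cast("string"), schema).alias("o"))
          .select("o.*")
          .filter(F.col("status") == "COMPLETED"))

query = (orders.writeStream
         .format("parquet")
         .option("path", "s3a://curated-bucket/orders/")
         .option("checkpointLocation", "s3a://curated-bucket/_checkpoints/orders/")
         .outputMode("append")
         .start())
query.awaitTermination()
```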

Posted 1 month ago

Apply

3.0 - 6.0 years

2 - 6 Lacs

Chennai

Work from Office

AWS Lambda, Glue, Kafka/Kinesis, RDBMS (Oracle, MySQL, Redshift, PostgreSQL, Snowflake), API Gateway, CloudFormation/Terraform, Step Functions, CloudWatch, Python, PySpark. Job role & responsibilities: Looking for a Software Engineer/Senior Software Engineer with hands-on experience in ETL projects and extensive knowledge of building data processing systems with Python, PySpark and cloud technologies (AWS). Experience in development on the AWS Cloud (S3, Redshift, Aurora, Glue, Lambda, Hive, Kinesis, Spark, Hadoop/EMR). Required Skills: Amazon Kinesis, Amazon Aurora, data warehousing, SQL, AWS Lambda, Spark, AWS QuickSight. Advanced Python skills. Data engineering, ETL and ELT skills. Experience with cloud platforms (AWS or GCP or Azure). Mandatory skills: data warehousing, ETL, SQL, Python, AWS Lambda, Glue, AWS Redshift.
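As a small illustration of the Lambda-plus-Kinesis pattern named above, a hedged Python handler that decodes a Kinesis event batch and stages the records to S3 for a later Glue or Redshift load. The bucket name and record shape are assumptions:

```python
# Hedged sketch: Lambda handler consuming a Kinesis batch and staging it to S3.
import base64
import json
import uuid
import boto3

s3 = boto3.client("s3")
STAGING_BUCKET = "etl-staging-bucket"   # assumed bucket name

def handler(event, context):
    records = []
    for record in event.get("Records", []):
        payload = base64.b64decode(record["kinesis"]["data"])   # Kinesis data arrives base64-encoded
        records.append(json.loads(payload))

    if records:
        key = f"incoming/{uuid.uuid4()}.json"
        # Write the batch as newline-delimited JSON for downstream ETL
        s3.put_object(
            Bucket=STAGING_BUCKET,
            Key=key,
            Body="\n".join(json.dumps(r) for r in records).encode("utf-8"),
        )
    return {"processed": len(records)}
```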

Posted 1 month ago

Apply

3.0 - 5.0 years

4 - 8 Lacs

Pune

Work from Office

Capgemini Invent: Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design to help CxOs envision and build what's next for their businesses.
Your Role: Has data pipeline implementation experience with any of these cloud providers - AWS, Azure, GCP. Experience with cloud storage, cloud databases, cloud data warehousing and data lake solutions like Snowflake, BigQuery, AWS Redshift, ADLS, S3. Has good knowledge of cloud compute services and load balancing. Has good knowledge of cloud identity management, authentication and authorization. Proficiency in using cloud utility functions such as AWS Lambda, AWS Step Functions, Cloud Run, Cloud Functions, Azure Functions. Experience in using cloud data integration services for structured, semi-structured and unstructured data, such as Azure Databricks, Azure Data Factory, Azure Synapse Analytics, AWS Glue, AWS EMR, Dataflow, Dataproc.
Your Profile: Good knowledge of infra capacity sizing and costing of cloud services to drive an optimized solution architecture, leading to optimal infra investment vs performance and scaling. Able to contribute to making architectural choices using various cloud services and solution methodologies. Expertise in programming using Python. Very good knowledge of cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on cloud. Must understand networking, security, design principles and best practices in cloud.
What you will love about working here: We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.
About Capgemini: Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.

Posted 1 month ago

Apply

8.0 - 12.0 years

4 - 8 Lacs

Pune

Work from Office

Roles & Responsibilities: Total 8-10 years of working experience. 8-10 years of experience with big data tools like Spark, Kafka, Hadoop etc. Design and deliver consumer-centric, high-performance systems. You would be dealing with huge volumes of data sets arriving through batch and streaming platforms. You will be responsible for building and delivering data pipelines that process, transform, integrate and enrich data to meet various demands from the business. Mentor the team on infrastructure, networking, data migration, monitoring and troubleshooting aspects. Focus on automation using Infrastructure as Code (IaC), Jenkins, DevOps etc. Design, build, test and deploy streaming pipelines for data processing in real time and at scale. Experience with stream-processing systems like Storm, Spark Streaming, Flink etc. Experience with object-oriented/object function scripting languages: Scala, Java, etc. Develop software systems using test-driven development, employing CI/CD practices. Partner with other engineers and team members to develop software that meets business needs. Follow Agile methodology for software development and technical documentation. Good to have banking/finance domain knowledge. Strong written and oral communication, presentation and interpersonal skills. Exceptional analytical, conceptual, and problem-solving abilities. Able to prioritize and execute tasks in a high-pressure environment. Experience working in a team-oriented, collaborative environment. 8-10 years of hands-on coding experience. Proficient in Java, with a good knowledge of its ecosystems. Experience writing Spark code using the Scala language. Experience with big data tools like Sqoop, Hive, Pig, Hue. Solid understanding of object-oriented programming and HDFS concepts. Familiar with various design and architectural patterns. Experience with big data tools: Hadoop, Spark, Kafka, Flink, Hive, Sqoop etc. Experience with relational SQL and NoSQL databases like MySQL, PostgreSQL, MongoDB and Cassandra. Experience with data pipeline tools like Airflow, etc. Experience with AWS cloud services: EC2, S3, EMR, RDS, Redshift, BigQuery. Experience with stream-processing systems: Storm, Spark Streaming, Flink etc. Experience with object-oriented/object function scripting languages: Python, Java, Scala, etc. Expertise in designing/developing platform components like caching, messaging, event processing, automation, transformation and tooling frameworks. Location: Pune/Mumbai/Bangalore/Chennai

Posted 1 month ago

Apply

8.0 - 13.0 years

1 - 4 Lacs

Pune

Work from Office

Roles & Responsibilities: Provides expert-level development, system analysis, design and implementation of applications using AWS services, specifically using Python for Lambda. Translates technical specifications and/or design models into code for new or enhancement projects (for internal or external clients). Develops code that reuses objects, is well-structured, includes sufficient comments and is easy to maintain. Provides follow-up production support when needed. Submits change control requests and documents. Participates in design, code and test inspections throughout the life cycle to identify issues and ensure methodology compliance. Participates in systems analysis activities, including system requirements analysis and definition, e.g. prototyping. Participates in other meetings, such as those for use case creation and analysis. Performs unit testing and writes appropriate unit test plans to ensure requirements are satisfied. Assists in integration, systems acceptance and other related testing as needed. Ensures developed code is optimized to meet client performance specifications associated with page rendering time by completing page performance tests.
Technical Skills Required: Experience in building large-scale batch and data pipelines with data processing frameworks on the AWS cloud platform using PySpark (on EMR) and Glue ETL. Deep experience in developing data processing and data manipulation tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment and loading into target data destinations. Experience in deploying and operationalizing code using CI/CD tools Bitbucket and Bamboo. Strong AWS cloud computing experience. Extensive experience in Lambda, S3, EMR, Redshift. Should have worked on Data Warehouse/Database technologies for at least 8 years. Any AWS certification will be an added advantage.
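For the Glue ETL work this posting describes (read from a source, remap and enrich, load to a target), a hedged skeleton of a Glue job script. The database, table, and S3 path names are assumptions; the awsglue modules are available inside the Glue job runtime rather than via pip:

```python
# Skeleton AWS Glue ETL script: catalog read -> column remapping -> S3 Parquet write.
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the source table registered in the Glue Data Catalog (assumed names)
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="transactions"
)

# Rename and retype columns on the way through
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[("txn_id", "string", "transaction_id", "string"),
              ("amt", "double", "amount", "double")],
)

# Write curated output as Parquet to the target zone (assumed path)
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://curated-zone/transactions/"},
    format="parquet",
)
job.commit()
```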

Posted 1 month ago

Apply

4.0 - 9.0 years

12 - 16 Lacs

Kochi

Work from Office

As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the AWS Cloud Data Platform. Responsibilities: Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark Framework using Python or Scala and Big Data technologies for various use cases built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka and cloud computing. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark / Python or Scala. Minimum 3 years of experience on Cloud Data Platforms on AWS. Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB. Good to excellent SQL skills. Exposure to streaming solutions and message brokers like Kafka. Preferred technical and professional experience: Certification in AWS and Databricks, or Cloudera Spark certified developers.

Posted 1 month ago

Apply

5.0 - 7.0 years

12 - 16 Lacs

Bengaluru

Work from Office

As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the AWS Cloud Data Platform. Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark Framework using Python or Scala and Big Data technologies for various use cases built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka and cloud computing. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Total 5-7+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and data engineering skills. Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark / Python or Scala. Minimum 3 years of experience on Cloud Data Platforms on AWS. Exposure to streaming solutions and message brokers like Kafka. Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB. Good to excellent SQL skills. Preferred technical and professional experience: Certification in AWS and Databricks, or Cloudera Spark certified developers. AWS S3, Redshift, and EMR for data storage and distributed processing. AWS Lambda, AWS Step Functions, and AWS Glue to build serverless, event-driven data workflows and orchestrate ETL processes.
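To illustrate the serverless, event-driven orchestration pattern named at the end of this posting, a hedged sketch of an S3-triggered Lambda that starts a Step Functions execution driving downstream Glue/EMR steps. The state machine ARN and environment variable are illustrative assumptions:

```python
# Minimal sketch: S3 "object created" event -> Lambda -> Step Functions execution.
import json
import os
import boto3

sfn = boto3.client("stepfunctions")
STATE_MACHINE_ARN = os.environ.get(
    "STATE_MACHINE_ARN",
    "arn:aws:states:us-east-1:123456789012:stateMachine:etl-pipeline",  # placeholder ARN
)

def handler(event, context):
    executions = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Pass the landed object to the state machine as its input payload
        response = sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps({"bucket": bucket, "key": key}),
        )
        executions.append(response["executionArn"])
    return {"started": executions}
```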

Posted 1 month ago

Apply

5.0 - 8.0 years

15 - 18 Lacs

Hyderabad, Bengaluru

Hybrid

Cloud and AWS Expertise: In-depth knowledge of AWS services related to data engineering: EC2, S3, RDS, DynamoDB, Redshift, Glue, Lambda, Step Functions, Kinesis, Iceberg, EMR, and Athena. Strong understanding of cloud architecture and best practices for high availability and fault tolerance. Data Engineering Concepts: Expertise in ETL/ELT processes, data modeling, and data warehousing. Knowledge of data lakes, data warehouses, and big data processing frameworks like Apache Hadoop and Spark. Proficiency in handling structured and unstructured data. Programming and Scripting: Proficiency in Python, PySpark and SQL for data manipulation and pipeline development. Expertise in working with data warehousing solutions like Redshift.
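Since Athena appears in the service list above, a hedged boto3 sketch of querying an S3-backed table through Athena; the database, table, and results location are assumptions, and production code should bound the polling loop:

```python
# Hedged sketch: run an Athena query over an S3-backed table and fetch the result rows.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

def run_query(sql: str, database: str = "data_lake",
              output: str = "s3://athena-results-bucket/queries/") -> list:
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output},
    )["QueryExecutionId"]

    # Poll until the query finishes (simplified; real code should cap retries)
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")

    # First row returned is the column header row
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    return [[col.get("VarCharValue") for col in row["Data"]] for row in rows]

# Example (assumed table): run_query("SELECT event_type, count(*) FROM events GROUP BY 1")
```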

Posted 1 month ago

Apply

8.0 - 13.0 years

9 - 14 Lacs

Bengaluru

Work from Office

8+ years of combined experience across backend and data platform engineering roles. Worked on large-scale distributed systems. 5+ years of experience building data platforms with (one of) Apache Spark, Flink or similar frameworks. 7+ years of experience programming with Java. Experience building large-scale data/event pipelines. Experience with relational SQL and NoSQL databases, including Postgres/MySQL, Cassandra, MongoDB. Demonstrated experience with EKS, EMR, S3, IAM, KDA, Athena, Lambda, networking, ElastiCache and other AWS services.

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies