8.0 - 13.0 years
7 - 14 Lacs
Pune, Mumbai (All Areas)
Hybrid
Job Title: Lead Data Engineer
Location: Mumbai / Pune
Experience: 8+ years

Job Summary:
We are seeking a technically strong and delivery-focused Lead Engineer to support and enhance enterprise-grade data and application products under the Durables model. The ideal candidate will act as the primary technical interface for the client, ensuring high system availability, performance, and continuous improvement. This role requires a hands-on technologist with strong team management experience, cloud (AWS) expertise, and excellent communication skills to handle client interactions and drive technical decisions.

Key Responsibilities:

Support & Enhancement Leadership
- Act as the primary technical lead for support and enhancement of assigned products in the Durables portfolio.
- Ensure incident resolution, problem management, and enhancement delivery within agreed SLAs.
- Perform root cause analysis (RCA) and provide technical solutions to recurring issues.
- Design data engineering solutions end to end, producing scalable and modular designs.
- Experience working in Agile implementations.

Technical Ownership
- Provide technical direction and architectural guidance for improvements, optimizations, and issue resolutions.
- Drive best practices in code performance tuning, ETL processing, and cloud-native data management.
- Lead the modernization of legacy data pipelines and applications by leveraging AWS services (Glue, Lambda, Redshift, S3, EMR, Athena, etc.).
- Leverage PySpark, SQL, Python, and other tools to manage big data processing pipelines efficiently.

Client Engagement
- Maintain high visibility with client stakeholders and act as a trusted technical advisor.
- Proactively identify and suggest improvements or innovation areas to improve business outcomes.
- Participate in daily stand-ups, retrospectives, and client presentations; communicate technical concepts clearly.

Team & Delivery Management
- Lead a cross-functional team of engineers, ensuring effective task allocation, mentorship, and upskilling.
- Monitor team performance, support capacity planning, and ensure timely, high-quality deliveries.
- Ensure adherence to governance, documentation, change management practices, and high availability.

Process & Quality Assurance
- Implement and ensure compliance with engineering best practices, including CI/CD, version control, and automated testing.
- Define support procedures and documentation standards, and ensure knowledge transition and retention within the team.
- Identify risk areas and dependencies, and propose mitigation strategies.

Required Skills & Qualifications:
- 8+ years of experience in data engineering / application support and development.
- 4+ years of strong hands-on expertise in the AWS ecosystem (Glue, Lambda, Redshift, S3, Athena, EMR, CloudWatch).
- Proficiency in PySpark, SQL, Python, and handling big data pipelines.
- Strong application debugging skills across batch and near-real-time systems.
- Good knowledge of incident lifecycle management, RCA, and performance optimization.
- Proven experience leading engineering teams (3+ years), preferably in support/enhancement environments.
- Excellent communication skills with proven client-facing capabilities.
- Strong documentation and process-adherence mindset.
- Experience with tools like JIRA, Confluence, Git, Jenkins, or any CI/CD pipeline.

Good to Have:
- Experience in on-prem to AWS migration projects.
- Familiarity with legacy technologies and their interaction with modern cloud stacks.
- Good knowledge of designing Hive tables with partitioning for performance (see the sketch below).
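Purely as an illustration of the Hive partitioning point above, here is a minimal PySpark sketch of writing a date-partitioned, Hive-style table. This is a sketch under assumptions, not the role's actual codebase: the bucket, database, table, and column names are hypothetical.

```python
from pyspark.sql import SparkSession

# Hypothetical example: persist order events as a Hive-style table
# partitioned by event_date so queries that filter on date prune partitions.
spark = (
    SparkSession.builder
    .appName("partitioned-table-sketch")
    .enableHiveSupport()
    .getOrCreate()
)

orders = spark.read.parquet("s3://example-bucket/raw/orders/")  # hypothetical path

(
    orders
    .repartition("event_date")       # group rows by partition value before writing
    .write
    .partitionBy("event_date")       # physical partition column on disk
    .mode("overwrite")
    .saveAsTable("analytics.orders_partitioned")  # hypothetical database.table
)
```

Partitioning on a low-cardinality column that queries commonly filter on (such as a date) is the usual design choice here, since it lets the engine skip whole directories at read time.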
Posted 2 weeks ago
1.0 - 3.0 years
15 - 30 Lacs
Bengaluru
Work from Office
About the Role
Does digging deep for data and turning it into useful, impactful insights get you excited? Then you could be our next SDE II Data - Real-Time Streaming. In this role, you'll oversee your entire team's work, ensuring that each individual is working towards achieving their personal goals and Meesho's organisational goals. Moreover, you'll keep an eye on all engineering projects and ensure the team is not straying from the right track. You'll also be tasked with directing programming activities, evaluating system performance, and designing new programs and features for smooth functioning.

What you will do
- Build a platform for ingesting and processing multiple terabytes of data daily
- Curate, build, and transform raw data into scalable information
- Create prototypes and proofs-of-concept for iterative development
- Reduce technical debt with quality coding
- Keep a close watch on various projects and monitor their progress
- Collaborate smoothly with the sales and engineering teams
- Provide management mentorship that sets the tone for holistic growth
- Ensure everyone is on the same page and taking ownership of the project

What you will need
- Bachelor's/Master's degree in Computer Science
- At least 1 to 3 years of professional experience
- Exceptional coding skills using Java, Scala, or Python
- Working knowledge of Redis, MySQL, and messaging systems like Kafka
- Knowledge of RxJava, Java Spring Boot, and microservices architecture
- Hands-on experience with distributed systems architecture handling high throughput
- Experience building streaming and real-time solutions using Apache Flink, Spark Streaming, or Samza (see the sketch below)
- Familiarity with software engineering best practices across all stages of software development
- Expertise in data system internals; strong problem-solving and analytical skills
- Familiarity with Big Data systems (Spark/EMR, Hive/Impala, Delta Lake, Presto, Airflow, data lineage) is an advantage
- Familiarity with data modeling and end-to-end data pipelining is a plus
- Experience as a contributor/committer to the Big Data stack is a plus
- Brownie points for knowledge of OLAP data cubes and BI tools
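For flavor only, here is a minimal Spark Structured Streaming job of the kind the streaming requirement above describes. It is a sketch under assumptions: the broker address, topic, and S3 paths are hypothetical, and the job needs the spark-sql-kafka connector package on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# Requires the org.apache.spark:spark-sql-kafka-0-10 package at submit time.
spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

# Read a stream of events from a hypothetical Kafka topic.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")  # hypothetical broker
    .option("subscribe", "order-events")                 # hypothetical topic
    .load()
    .select(col("key").cast("string"), col("value").cast("string"))
)

# Continuously append the decoded events to a data lake path.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3://example-bucket/streams/orders/")      # hypothetical
    .option("checkpointLocation", "s3://example-bucket/ckpt/")  # hypothetical
    .start()
)
query.awaitTermination()
```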
Posted 2 weeks ago
2.0 - 5.0 years
3 - 6 Lacs
Chennai
Work from Office
Job Title: Healthcare Implementation and Support Specialist

Job Summary:
We are seeking an experienced Healthcare Implementation and Support Specialist to join our team. The successful candidate will provide technical assistance, support, and implementation services to healthcare customers, ensuring successful adoption and utilization of our healthcare software solutions.

Key Responsibilities:
- Implement healthcare software solutions, including setup, configuration, and testing
- Provide technical support and assistance to healthcare customers via phone, email, or on-site visits
- Troubleshoot and resolve technical issues; assist with data integration and workflow optimization
- Collaborate with internal teams to resolve complex issues and improve customer satisfaction
- Develop and maintain knowledge of healthcare industry regulations, standards, and best practices (e.g., HIPAA, ICD-10, CPT)
- Conduct training sessions and create documentation to support customer onboarding and adoption
- Identify opportunities to improve implementation processes and suggest improvements

Requirements:
- 3+ years of experience in healthcare IT implementation and support
- Minimum 2-5 years of experience in HIS/EMR implementation or support
- Bachelor's degree in Computer Science, Health Informatics, or a related field
- Strong understanding of healthcare operations, clinical workflows, and medical terminology
- Excellent problem-solving skills, attention to detail, and communication skills
- Experience with electronic health records (EHRs), practice management systems (PMS), and other healthcare software solutions
- Ability to work in a fast-paced environment, prioritize multiple tasks, and meet deadlines
Posted 2 weeks ago
7.0 - 12.0 years
0 - 0 Lacs
Chennai
Work from Office
Who we are looking for:
The data engineering team's mission is to provide high availability and high resiliency as a core service to our ACV applications. The team is responsible for ETLs using different ingestion and transformation techniques. We are responsible for a range of critical tasks aimed at ensuring the smooth, efficient functioning and high availability of ACV's data platforms. We are a crucial bridge between Infrastructure Operations, Data Infrastructure, Analytics, and Development teams, providing valuable feedback and insights to continuously improve platform reliability, functionality, and overall performance.

We are seeking a talented data professional as a Senior Data Engineer to join our Data Engineering team. This role requires a strong focus and experience in software development, multi-cloud technologies, and in-memory data stores, along with a strong desire to learn complex systems and new technologies. It requires a sound foundation in database and infrastructure architecture, deep technical knowledge, software development skills, excellent communication skills, and an action-based philosophy to solve hard software engineering problems.

What you will do:
As a Data Engineer at ACV Auctions you HAVE FUN!! You will design, develop, write, and modify code. You will be responsible for the development of ETLs, application architecture, and optimizing databases and SQL queries. You will work alongside other data engineers and data scientists in the design and development of solutions to ACV's most complex software problems. It is expected that you will be able to operate in a high-performing team, that you can balance high-quality delivery with customer focus, and that you will have a record of delivering and guiding team members in a fast-paced environment.

- Design, develop, and maintain scalable ETL pipelines using Python and SQL to ingest, process, and transform data from diverse sources.
- Write clean, efficient, and well-documented code in Python and SQL.
- Utilize Git for version control and collaborate effectively with other engineers.
- Implement and manage data orchestration workflows using industry-standard orchestration tools (e.g., Apache Airflow, Prefect); see the sketch below.
- Apply a strong understanding of major data structures (arrays, dictionaries, strings, trees, nodes, graphs, linked lists) to optimize data processing and storage.
- Support multi-cloud application development.
- Contribute to, influence, and set standards for all technical aspects of a product or service, including but not limited to testing, debugging, performance, and languages.
- Support development stages for application development and data science teams, with an emphasis on MySQL and Postgres database development.
- Influence company-wide engineering standards for tooling, languages, and build systems.
- Leverage monitoring tools to ensure high performance and availability; work with operations and engineering to improve as required.
- Ensure that data development meets company standards for readability, reliability, and performance.
- Collaborate with internal teams on transactional and analytical schema design.
- Conduct code reviews, develop high-quality documentation, and build robust test suites.
- Respond to and troubleshoot highly complex problems quickly, efficiently, and effectively.
- Mentor junior data engineers.
- Assist with or lead technical discussions and innovation, including engineering tech talks.
- Assist in engineering innovations, including discovery of new technologies, implementation strategies, and architectural improvements.
- Participate in on-call rotation.

What you will need:
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience)
- Ability to read, write, speak, and understand English
- 4+ years of experience programming in Python
- 3+ years of experience with ETL workflow implementation (Airflow, Python)
- 3+ years of work with continuous integration and build tools
- 3+ years of experience with cloud platforms, preferably AWS or GCP
- Knowledge of database architecture, infrastructure, performance tuning, and optimization techniques
- Deep knowledge of day-to-day tools and how they work, including deployments, Kubernetes, monitoring systems, and testing tools
- Proficiency with relational databases (RDB) and SQL, with the ability to contribute to schema definitions
- Self-sufficient debugger who can identify and solve complex problems in code
- Deep understanding of major data structures (arrays, dictionaries, strings)
- Experience with Domain-Driven Design
- Experience with containers and Kubernetes
- Experience with database monitoring and diagnostic tools, preferably Datadog
- Hands-on skills and the ability to drill deep into complex system design and implementation
- Proficiency in SQL query writing and optimization
- Experience with database security principles and best practices
- Experience with in-memory data processing
- Experience working with data warehousing concepts and technologies, including dimensional modeling and ETL frameworks
- Strong communication and collaboration skills, with the ability to work effectively in a fast-paced global team environment

Experience working with:
- SQL data-layer development; OLTP schema design
- Using and integrating with cloud services, specifically AWS RDS, Aurora, S3, and GCP
- GitHub, Jenkins, Python, Docker, Kubernetes

Nice to Have Qualifications:
- Experience with Airflow, Docker, Visual Studio, PyCharm, Redis, Kubernetes, Fivetran, Spark, Dataflow, Dataproc, EMR
- Hands-on experience with Kafka or other event-streaming technologies
- Hands-on experience with microservice architecture
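As an illustrative sketch of the orchestration work mentioned above (not this team's actual pipeline), here is a minimal Airflow DAG. The DAG id and task callables are hypothetical stand-ins; the `schedule` keyword assumes Airflow 2.4 or later.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical extract/load callables standing in for real pipeline steps.
def extract():
    print("pull rows from a source system")

def load():
    print("write transformed rows to the warehouse")

with DAG(
    dag_id="example_etl",            # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task        # run extract before load
```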
Posted 2 weeks ago
3.0 - 6.0 years
5 - 9 Lacs
Bengaluru
Work from Office
About the Opportunity
Application date: 31 July 2025

Strategic Impact
As a Senior Data Engineer, you will directly contribute to our key organizational objectives:

Accelerated Innovation
- Enable rapid development and deployment of data-driven products through scalable, cloud-native architectures
- Empower analytics and data science teams with self-service, real-time, high-quality data access
- Shorten time-to-insight by automating data ingestion, transformation, and delivery pipelines

Cost Optimization
- Reduce infrastructure costs by leveraging serverless, pay-as-you-go, and managed cloud services (e.g., AWS Glue, Databricks, Snowflake)
- Minimize manual intervention through orchestration, monitoring, and automated recovery of data workflows
- Optimize storage and compute usage with efficient data partitioning, compression, and lifecycle management

Risk Mitigation
- Improve data governance, lineage, and compliance through metadata management and automated policy enforcement
- Increase data quality and reliability with robust validation, monitoring, and alerting frameworks
- Enhance system resilience and scalability by adopting distributed, fault-tolerant architectures

Business Enablement
- Foster cross-functional collaboration by building and maintaining well-documented, discoverable data assets (e.g., data lakes, data warehouses, APIs)
- Support advanced analytics, machine learning, and AI initiatives by ensuring timely, trusted, and accessible data
- Drive business agility by enabling rapid experimentation and iteration on new data products and features

Key Responsibilities
- Design, develop, and maintain scalable data pipelines and architectures to support data ingestion, integration, and analytics
- Be accountable for technical delivery and take ownership of solutions
- Lead a team of senior and junior developers, providing mentorship and guidance
- Collaborate with enterprise architects, business analysts, and stakeholders to understand data requirements, validate designs, and communicate progress
- Drive technical innovation within the department to increase code reusability, code quality, and developer productivity
- Challenge the status quo by bringing the very latest data engineering practices and techniques

About You

Core Technical Skills
- Expert in leveraging cloud-based data platform capabilities (Snowflake, Databricks) to create an enterprise lakehouse.
- Advanced expertise with the AWS ecosystem and experience using a variety of core AWS data services such as Lambda, EMR, MSK, Glue, and S3.
- Experience designing event-based or streaming data architectures using Kafka.
- Advanced expertise in Python and SQL. Open to expertise in Java/Scala, but enterprise experience of Python is required.
- Expert in designing, building, and using CI/CD pipelines to deploy infrastructure (Terraform) and pipelines with test automation.

Data Security & Performance Optimization
- Experience implementing data access controls to meet regulatory requirements.
- Experience using both RDBMS (Oracle, Postgres, MSSQL) and NoSQL (DynamoDB, OpenSearch, Redis) offerings.
- Experience implementing CDC ingestion.
- Experience using orchestration tools (Airflow, Control-M, etc.).
- Significant experience in software engineering practices using GitHub, code verification, validation, and use of copilots.

Bonus Technical Skills
- Strong experience in containerisation and deploying applications to Kubernetes.
- Strong experience in API development using Python-based frameworks like FastAPI (see the sketch below).

Key Soft Skills
- Problem-Solving: Leadership experience in problem-solving and technical decision-making.
- Communication: Strong in strategic communication and stakeholder engagement.
- Project Management: Experienced in overseeing project lifecycles, working with Project Managers to manage resources.
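Purely as an illustration of the FastAPI experience mentioned above, a minimal sketch of a typed read endpoint; the route, model, and field names are hypothetical.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Hypothetical response model for a data-product record.
class Order(BaseModel):
    order_id: str
    amount: float

# Hypothetical read endpoint returning a stubbed order record.
@app.get("/orders/{order_id}", response_model=Order)
def get_order(order_id: str) -> Order:
    return Order(order_id=order_id, amount=0.0)

# Run locally with: uvicorn app:app --reload
```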
Posted 2 weeks ago
4.0 - 6.0 years
6 - 16 Lacs
Noida, Gurugram
Hybrid
Job Description:
- Solicit, review, and analyze business requirements
- Write business and technical requirements
- Communicate and validate requirements with stakeholders
- Validate that the solution meets business needs
- Work with application users to develop test scripts and facilitate testing to validate application functionality and configuration
- Participate in organizational projects and/or manage small/medium projects related to assigned applications
- Translate customer needs into quality system solutions and ensure effective operational outcomes
- Focus on the business value proposition; apply understanding of 'As Is' and 'To Be' processes to develop solutions
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).

Role Focus Areas:

Core Expertise Required:
- Provider Management
- Utilization Management
- Care Management

Domain Knowledge:
- Value-Based Care
- Clinical & Care Management
- Familiarity with Medical Terminology
- Experience with EMR (Electronic Medical Records) and Claims Processing

Technical/Clinical Understanding:
- Admission & Discharge Processes
- CPT Codes, Procedure Codes, Diagnosis Codes

Job Qualifications:
- Undergraduate degree or equivalent experience.
- Minimum 5 years of experience in business analysis in healthcare, including providing overall support, maintenance, configuration, troubleshooting, and system upgrades for healthcare applications.
- Good experience with EMR/RCM systems.
- Demonstrated success supporting EMR/RCM/UM, CM, and DM systems across requirements, UAT, and deployment.
- Experience working with stakeholders, gathering requirements, and taking action based on their business needs.
- Proven ability to work independently without direct supervision.
- Proven ability to effectively manage time and competing priorities.
- Proven ability to work with cross-functional teams.

Core AI Understanding:
- AI/ML Fundamentals: Understanding of supervised, unsupervised, and reinforcement learning.
- Model Lifecycle Awareness: Familiarity with model training, evaluation, deployment, and monitoring.
- Data Literacy: Ability to interpret data, understand data quality issues, and collaborate with data scientists.

AI Product Strategy:
- AI Use Case Identification: Ability to identify and validate AI opportunities aligned with business goals.
- Feasibility Assessment: Understanding of what's technically possible with current AI capabilities.
- AI/ML Roadmapping: Planning features and releases that depend on model development cycles.

Collaboration with Technical Teams:
- Cross-functional Communication: Ability to translate business needs into technical requirements and vice versa.
- Experimentation & A/B Testing: Understanding of how to run and interpret experiments involving AI models.
- MLOps Awareness: Familiarity with CI/CD for ML, model versioning, and monitoring tools.

AI Tools & Platforms:
- Prompt Engineering (for LLMs): Crafting effective prompts for tools like ChatGPT, Copilot, or Claude.

Responsible AI & Ethics:
- Bias & Fairness: Understanding of how bias can enter models and how to mitigate it.
- Explainability: Familiarity with tools like SHAP, LIME, or model cards.
- Regulatory Awareness: Knowledge of AI-related compliance (e.g., HIPAA, the AI Act).

AI-Enhanced Product Management:
- AI in SDLC: Using AI tools for user story generation, backlog grooming, and documentation.
- AI for User Insights: Leveraging NLP for sentiment analysis, user feedback clustering, etc.
- AI-Driven Personalization: Understanding recommendation systems, dynamic content delivery, etc.
Posted 2 weeks ago
5.0 - 8.0 years
7 - 10 Lacs
Hyderabad, Ahmedabad
Work from Office
Grade Level (for internal use): 10

The Team: We seek a highly motivated, enthusiastic, and skilled engineer for our Industry Data Solutions Team. We strive to deliver sector-specific, data-rich, and hyper-targeted solutions for evolving business needs. You will be expected to participate in the design review process, write high-quality code, and work with a dedicated team of QA Analysts and Infrastructure Teams.

The Impact: Enterprise Data Organization is seeking a Software Developer to handle software design, development, and maintenance for data processing applications. This person would be part of a development team that manages and supports the internal and external applications supporting the business portfolio. This role expects a candidate to handle any data processing or big data application development. Our teams are made up of people who learn how to work effectively together while working with the larger group of developers on our platform.

What's in it for you:
- Opportunity to contribute to the development of a world-class Platform Engineering team.
- Engage in a highly technical, hands-on role designed to elevate team capabilities and foster continuous skill enhancement.
- Be part of a fast-paced, agile environment that processes massive volumes of data, ideal for advancing your software development and data engineering expertise while working with a modern tech stack.
- Contribute to the development and support of Tier-1, business-critical applications that are central to operations.
- Gain exposure to and work with cutting-edge technologies, including AWS Cloud and Databricks.
- Grow your career within a globally distributed team, with clear opportunities for advancement and skill development.

Responsibilities:
- Design and develop applications, components, and common services based on development models, languages, and tools, including unit testing, performance testing, monitoring, and implementation
- Support business and technology teams as necessary during design, development, and delivery to ensure scalable and robust solutions
- Build data-intensive applications and services to support and enhance fundamental financials in appropriate technologies (C#, .NET Core, Databricks, Spark, Python, Scala, NiFi, SQL)
- Build data models, achieve performance tuning, and apply data architecture concepts
- Develop applications adhering to secure coding practices and industry-standard coding guidelines, ensuring compliance with security best practices (e.g., OWASP) and internal governance policies
- Implement and maintain CI/CD pipelines to streamline build, test, and deployment processes; develop comprehensive unit test cases and ensure code quality
- Provide operations support to resolve issues proactively and with utmost urgency
- Effectively manage time and multiple tasks
- Communicate effectively, especially in writing, with the business and other technical groups

Basic Qualifications:
- Bachelor's/Master's degree in Computer Science, Information Systems, or equivalent
- Minimum 5 to 8 years of strong hands-on development experience in C#, .NET Core, cloud-native development, and MS SQL Server backend development
- Proficiency with object-oriented programming
- Nice to have: knowledge of Grafana, Kibana, Big Data, Kafka, GitHub, EMR, Terraform, AI/ML
- Advanced SQL programming skills
- Highly recommended skill set in Databricks, Spark, and Scala technologies
- Understanding of database performance tuning in large datasets
- Ability to manage multiple priorities efficiently and effectively within specific timeframes
- Excellent logical, analytical, and communication skills are essential, with strong verbal and writing proficiencies
- Knowledge of fundamentals, or of the financial industry, highly preferred
- Experience in conducting application design and code reviews

Proficiency with the following technologies:
- Object-oriented programming
- Programming languages (C#, .NET Core)
- Cloud computing
- Database systems (SQL, MS SQL)
- Nice to have: NoSQL (Databricks, Spark, Scala, Python), scripting (Bash, Scala, Perl, PowerShell)

Preferred Qualifications:
- Hands-on experience with cloud computing platforms including AWS, Azure, or Google Cloud Platform (GCP).
- Proficient in working with Snowflake and Databricks for cloud-based data analytics and processing.
Posted 2 weeks ago
1.0 - 4.0 years
1 - 5 Lacs
Thiruvananthapuram
Work from Office
Responsibilities:
- Maintains a working knowledge of CPT-4, ICD-10-CM, and ICD-10-PCS coding principles, governmental regulations, UHDDS (Uniform Hospital Discharge Data Set) guidelines, AHA Coding Clinic updates, and third-party requirements regarding coding and documentation guidelines
- Knowledge of the physician query process and the ability to write physician queries in compliance with OIG and UHDDS regulations
- Knowledge of MS-DRG (Medicare Severity Diagnosis Related Groups), MDC (Major Diagnostic Categories), AP-DRG (All Patient DRGs), and APR-DRG (All Patient Refined DRGs), with hands-on experience in handling MS-DRG
- Knowledge of CC (complication or comorbidity) and MCC (major complication or comorbidity) when used as a secondary diagnosis
- Understanding of and exposure to the Clinical Documentation Improvement (CDI) program to work in tandem with MS-DRG
- Hands-on experience in any of the encoder tools specific to hospital coding, such as 3M or TruCode, is preferred
- The coders assigned to the project would review inpatient and observation medical records, then determine and assign accurate diagnosis (ICD-10-CM) codes and procedure (ICD-10-PCS and/or CPT) codes with appropriate modifiers, in addition to reporting any deviations in a timely manner
- Maintains a high level of productivity and quality
- Achieves the set targets and cooperates with the respective team in meeting the set turnaround time while keeping an elevated level of accuracy
- The coders would also be screened for reasonable comprehension and analytical skills, which are considered a prerequisite for reviewing medical documentation and delivering accurate coding
- The coders are expected to deliver an internal accuracy of 95% and meet turnaround time requirements, in addition to meeting productivity standards set internally per specialty
- Maintains a high degree of professional and ethical standards
- Focuses on continuous improvement by working on projects that enable customers to arrest revenue leakage while remaining in compliance with the standards
- Focuses on updating coding skills and knowledge by participating in coding team meetings and educational conferences, including refresher and ongoing training programs conducted periodically within the organization

Job Requirements:
To be considered for this position, applicants need to meet the following qualification criteria:
- Graduates in life sciences with 1-4 years of experience in medical coding
- Candidates holding CCS/CIC with hospital coding experience are preferable
- The coders will undergo certifications sponsored by AAPC and AHIMA as they mature with the process; Access Healthcare has partnered with AAPC to run in-house certification training for its coders and to sponsor the examinations
- Good knowledge of medical coding and billing systems, medical terminologies, regulatory requirements, auditing concepts, and principles
Posted 2 weeks ago
5.0 - 10.0 years
15 - 25 Lacs
Hyderabad
Remote
Role & Responsibilities
We are seeking a talented and motivated Big Data Developer to design, develop, and maintain large-scale data processing applications. You will work with modern Big Data technologies, leveraging PySpark and Java/Scala, to deliver scalable, high-performance data solutions on AWS. The ideal candidate is skilled in big data frameworks, cloud services, and modern CI/CD practices.

- Design and develop scalable data processing pipelines using PySpark and Java/Scala.
- Build and optimize data workflows for batch and real-time data processing.
- Integrate and manage data solutions on AWS services such as EMR, S3, Glue, Airflow, RDS, and DynamoDB.
- Implement containerized applications using Docker, Kubernetes, or similar technologies.
- Develop and maintain APIs and microservices/domain services as part of the data ecosystem.
- Participate in continuous integration and continuous deployment (CI/CD) processes using Jenkins or similar tools.
- Optimize and tune the performance of Big Data applications and databases (both relational and NoSQL).
- Collaborate with data architects, data engineers, and business stakeholders to deliver end-to-end data solutions.
- Ensure best practices in data security, quality, and governance are followed.

Must-Have Skills
- Proficiency with Big Data frameworks and programming using PySpark and Java/Scala
- Experience designing and building data pipelines for large-scale data processing (see the sketch below)
- Solid knowledge of distributed data systems and best practices in performance optimization

Preferred Skills
- Experience with AWS services (EMR, S3, Glue, Airflow, RDS, DynamoDB, or similar)
- Familiarity with container orchestration tools (Docker, Kubernetes, or similar)
- Knowledge of CI/CD pipelines (e.g., Jenkins or similar tools)
- Hands-on experience with relational databases and SQL
- Experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB)
- Exposure to microservices or API gateway frameworks

Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 5+ years of experience in Big Data development
- Strong analytical, problem-solving, and communication skills
- Experience working in an Agile environment is a plus
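For illustration only, a minimal AWS Glue (PySpark) job skeleton of the kind such pipelines are often built from. This is a sketch under assumptions: the catalog database, table name, dropped field, and S3 path are hypothetical, and the script assumes the standard Glue job runtime.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve arguments and create contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a hypothetical table registered in the Glue Data Catalog.
source = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_orders"
)

# Trivial transform: drop a column, then write Parquet to a hypothetical S3 path.
cleaned = source.drop_fields(["internal_note"])
glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)

job.commit()
```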
Posted 2 weeks ago
6.0 - 11.0 years
3 - 8 Lacs
Pune
Remote
Role & Responsibilities

What You'll Do
- Build the underlying data platform and maintain data processing pipelines using best-in-class technologies.
- Focus on R&D to challenge the status quo and build the next-generation data mesh that is efficient and cost-effective.
- Translate complex technical and functional requirements into detailed designs.

Who You Are
- Strong programming skills (Python, Java, and Scala)
- Experience writing SQL, structuring data, and data storage practices
- Experience with data modeling
- Knowledge of data warehousing concepts
- Experienced building data pipelines and microservices
- Experience with Spark, Airflow, and other streaming technologies to process incredible volumes of streaming data
- A willingness to accept failure, learn, and try again
- An open mind to try solutions that may seem impossible at first
- Strong understanding of data structures, algorithms, multi-threaded programming, and distributed computing concepts
- Experience working on Amazon Web Services (EMR, Kinesis, RDS, S3, SQS, and the like)

Preferred Candidate Profile
- At least 6+ years of professional experience as a software engineer or data engineer
- Education: Bachelor's degree or higher in Computer Science, Data Science, Engineering, or a related technical field
Posted 2 weeks ago
5.0 - 10.0 years
10 - 20 Lacs
Bengaluru
Work from Office
Hiring for a FAANG company.

Note: This position is part of a program designed to support women professionals returning to the workforce after a career break (9+ months career gap).

About the Role
Join a high-impact global business team that is building cutting-edge B2B technology solutions. As part of a structured returnship program, this role is ideal for experienced professionals re-entering the workforce after a career break. You'll work on mission-critical data infrastructure in one of the world's largest cloud-based environments, helping transform enterprise procurement through intelligent architecture and scalable analytics. This role merges consumer-grade experience with enterprise-grade features to serve businesses worldwide. You'll collaborate across engineering, sales, marketing, and product teams to deliver scalable solutions that drive measurable value.

Key Responsibilities:
- Design, build, and manage scalable data infrastructure using modern cloud technologies
- Develop and maintain robust ETL pipelines and data warehouse solutions (see the load sketch below)
- Partner with stakeholders to define data needs and translate them into actionable solutions
- Curate and manage large-scale datasets from multiple platforms and systems
- Ensure high standards for data quality, lineage, security, and governance
- Enable data access for internal and external users through secure infrastructure
- Drive insights and decision-making by supporting sales, marketing, and outreach teams with real-time and historical data
- Work in a high-energy, fast-paced environment that values curiosity, autonomy, and impact

Who You Are:
- 5+ years of experience in data engineering or related technical roles
- Proficient in SQL and familiar with relational database management
- Skilled in building and optimizing ETL pipelines
- Strong understanding of data modeling and warehousing
- Comfortable working with large-scale data systems and distributed computing
- Able to work independently, collaborate with cross-functional teams, and communicate clearly
- Passionate about solving complex problems through data

Preferred Qualifications:
- Hands-on experience with cloud technologies including Redshift, S3, AWS Glue, EMR, Lambda, Kinesis, and Firehose
- Familiarity with non-relational databases (e.g., object storage, document stores, key-value stores, column-family DBs)
- Understanding of cloud access control systems such as IAM roles and permissions

Returnship Benefits:
- Dedicated onboarding and mentorship support
- Flexible work arrangements
- Opportunity to work on meaningful, global-scale projects while rebuilding your career momentum
- Supportive team culture that encourages continuous learning and professional development

Top 10 Must-Have Skills:
1. SQL
2. ETL Development
3. Data Modeling
4. Cloud Data Warehousing (e.g., Redshift or equivalent)
5. Experience with AWS or similar cloud platforms
6. Working with Large-Scale Datasets
7. Data Governance & Security Awareness
8. Business Communication & Stakeholder Collaboration
9. Automation with Python/Scala (for ETL pipelines)
10. Familiarity with Non-Relational Databases
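As a hedged illustration of the warehouse-loading work above, a small Python sketch that bulk-loads Parquet files from S3 into Redshift with the COPY command. All connection details, the table, the bucket path, and the IAM role ARN are hypothetical placeholders; in practice credentials would come from a secrets store.

```python
import psycopg2

# Hypothetical connection details; do not hard-code real credentials.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="...",  # placeholder only
)

# Bulk-load Parquet files from S3 into a staging table using Redshift COPY.
copy_sql = """
    COPY staging.orders
    FROM 's3://example-bucket/curated/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-role'
    FORMAT AS PARQUET;
"""
with conn, conn.cursor() as cur:
    cur.execute(copy_sql)  # committed when the connection context exits
```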
Posted 3 weeks ago
4.0 - 7.0 years
3 - 6 Lacs
Noida
Work from Office
We are looking for a skilled AWS Data Engineer with 4 to 7 years of experience in data engineering, preferably in the employment firm or recruitment services industry. The ideal candidate should have a strong background in computer science, information systems, or computer engineering.

Roles and Responsibility
- Design and develop solutions based on technical specifications.
- Translate functional and technical requirements into detailed designs.
- Work with partners for regular updates, requirement understanding, and design discussions.
- Lead a team, providing technical/functional support, conducting code reviews, and optimizing code/workflows.
- Collaborate with cross-functional teams to achieve project goals.
- Develop and maintain large-scale data pipelines using the AWS Cloud platform services stack.

Job Requirements
- Strong knowledge of Python/PySpark programming languages.
- Experience with AWS Cloud platform services such as S3, EC2, EMR, Lambda, RDS, DynamoDB, Kinesis, SageMaker, Athena, etc.
- Basic SQL knowledge and exposure to data warehousing concepts such as data warehouses, data lakes, dimensions, etc.
- Excellent communication skills and the ability to work in a fast-paced environment.
- Ability to lead a team and provide technical/functional support.
- Strong problem-solving skills and attention to detail.
- A B.E./Master's degree in Computer Science, Information Systems, or Computer Engineering is required.

The company offers a dynamic and supportive work environment, with opportunities for professional growth and development. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 3 weeks ago
3.0 - 6.0 years
7 - 11 Lacs
Bengaluru
Work from Office
We are looking for a skilled Data Engineer with 3 to 6 years of experience in processing data pipelines using Databricks, PySpark, and SQL on cloud distributions like AWS. The ideal candidate should have hands-on experience with Databricks, Spark, SQL, and the AWS Cloud platform, especially S3, EMR, Databricks, Cloudera, etc.

Roles and Responsibility
- Design and develop large-scale data pipelines using Databricks, Spark, and SQL.
- Optimize data operations using Databricks and Python.
- Develop solutions to meet business needs, reflecting a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit.
- Evaluate alternative risks and solutions before taking action.
- Utilize all available resources efficiently.
- Collaborate with cross-functional teams to achieve business goals.

Job Requirements
- Experience working on projects involving data engineering and processing.
- Proficiency in large-scale data operations using Databricks and overall comfort with Python.
- Familiarity with AWS compute, storage, and IAM concepts.
- Experience with an S3 data lake as the storage tier.
- ETL background with Talend or AWS Glue is a plus.
- Cloud warehouse experience with Snowflake is a huge plus.
- Strong analytical and problem-solving skills.
- Relevant experience with ETL methods and retrieving data from dimensional data models and data warehouses.
- Strong experience with relational databases and data access methods, especially SQL.
- Excellent collaboration and cross-functional leadership skills.
- Excellent communication skills, both written and verbal.
- Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment.
- Ability to leverage data assets to respond to complex questions that require timely answers.
- Working knowledge of migrating relational and dimensional databases to the AWS Cloud platform.
Posted 3 weeks ago
4.0 - 6.0 years
3 - 7 Lacs
Noida
Work from Office
Company: Apptad Technologies Pvt Ltd.
Industry: Employment Firms/Recruitment Services Firms
Experience: 4 to 6 years
Title: Python Developer
Reference: 6566420

Job Description:
- Experience with AWS: Python, AWS CloudFormation, Step Functions, Glue, Lambda, S3, SNS, SQS, IAM, Athena, EventBridge, and API Gateway
- Experience in Python development
- Expertise in multiple applications and functionalities
- Domain skills with a quick learning inclination
- Good SQL knowledge and understanding of databases
- Familiarity with MS Office and SharePoint
- High aptitude and excellent problem-solving skills
- Strong analytical skills
- Interpersonal skills and the ability to influence stakeholders
Posted 3 weeks ago
8.0 - 12.0 years
15 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Lead design, development, and deployment of cloud-native and hybrid solutions on AWS and GCP. Ensure robust infrastructure using services like GKE, GCE, Cloud Functions, Cloud Run (GCP) and EC2, Lambda, ECS, S3, etc. (AWS).
Posted 3 weeks ago
6.0 - 11.0 years
25 - 40 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
- Design, build, and deploy cloud-native and hybrid solutions on AWS and GCP
- Experience in Glue, Athena, PySpark & Step Functions, Lambda, SQL, ETL, DWH, Python, EC2, EBS/EFS, CloudFront, Cloud Functions, Cloud Run (GCP), GKE, GCE, ECS, S3, etc.
Posted 3 weeks ago
5.0 - 10.0 years
20 - 25 Lacs
Gurugram
Work from Office
Required:
- Prior experience with writing and debugging Python
- Prior experience with building data pipelines
- Prior experience with data lakes in an AWS environment
- Prior experience with data warehouse technologies in an AWS environment
- Prior experience with AWS EMR
- Prior experience with PySpark
- Candidate should have prior experience with AWS and Azure; additional cloud-based tools experience is important (see skills section)

Desired:
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), and working familiarity with a variety of databases
- Experience with Python and with libraries such as pandas and NumPy
- Experience with PySpark
- Experience building and optimizing big data pipelines, architectures, and data sets
Posted 3 weeks ago
6.0 - 10.0 years
30 - 35 Lacs
Bengaluru
Work from Office
We are seeking an experienced PySpark Developer / Data Engineer to design, develop, and optimize big data processing pipelines using Apache Spark and Python (PySpark). The ideal candidate should have expertise in distributed computing, ETL workflows, data lake architectures, and cloud-based big data solutions.

Key Responsibilities:
- Develop and optimize ETL/ELT data pipelines using PySpark on distributed computing platforms (Hadoop, Databricks, EMR, HDInsight).
- Work with structured and unstructured data to perform data transformation, cleansing, and aggregation.
- Implement data lake and data warehouse solutions on AWS (S3, Glue, Redshift), Azure (ADLS, Synapse), or GCP (BigQuery, Dataflow).
- Optimize PySpark jobs through performance tuning, partitioning, and caching strategies (see the sketch below).
- Design and implement real-time and batch data processing solutions.
- Integrate data pipelines with Kafka, Delta Lake, Iceberg, or Hudi for streaming and incremental updates.
- Ensure data security, governance, and compliance with industry best practices.
- Work with data scientists and analysts to prepare and process large-scale datasets for machine learning models.
- Collaborate with DevOps teams to deploy, monitor, and scale PySpark jobs using CI/CD pipelines, Kubernetes, and containerization.
- Perform unit testing and validation to ensure data integrity and reliability.

Required Skills & Qualifications:
- 6+ years of experience in big data processing, ETL, and data engineering.
- Strong hands-on experience with PySpark (Apache Spark with Python).
- Expertise in SQL, the DataFrame API, and RDD transformations.
- Experience with big data platforms (Hadoop, Hive, HDFS, Spark SQL).
- Knowledge of cloud data processing services (AWS Glue, EMR, Databricks, Azure Synapse, GCP Dataflow).
- Proficiency in writing optimized queries, partitioning, and indexing for performance tuning.
- Experience with workflow orchestration tools like Airflow, Oozie, or Prefect.
- Familiarity with containerization and deployment using Docker, Kubernetes, and CI/CD pipelines.
- Strong understanding of data governance, security, and compliance (GDPR, HIPAA, CCPA, etc.).
- Excellent problem-solving, debugging, and performance optimization skills.
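Purely as an illustration of the partitioning and caching strategies mentioned above, a small PySpark sketch; the paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

# Hypothetical input: a large fact table of click events.
clicks = spark.read.parquet("s3://example-bucket/raw/clicks/")

# Repartition by the aggregation key so shuffles are balanced, and cache
# because the same DataFrame feeds two separate actions below.
by_user = clicks.repartition(200, "user_id").cache()

daily_counts = by_user.groupBy("user_id", F.to_date("ts").alias("day")).count()
top_users = by_user.groupBy("user_id").count().orderBy(F.desc("count")).limit(100)

# Write date-partitioned output so downstream readers can prune partitions.
daily_counts.write.partitionBy("day").mode("overwrite").parquet(
    "s3://example-bucket/curated/daily_counts/"
)
top_users.write.mode("overwrite").parquet("s3://example-bucket/curated/top_users/")
```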
Posted 3 weeks ago
7.0 - 11.0 years
30 - 35 Lacs
Bengaluru
Work from Office
1. The resource should have knowledge of Data Warehouse and Data Lake concepts
2. Should be able to build data pipelines using PySpark
3. Should be strong in SQL skills
4. Should have exposure to the AWS environment and services like S3, EC2, EMR, Athena, Redshift, etc.
5. Good to have programming skills in Python
Posted 3 weeks ago
6.0 - 10.0 years
30 - 35 Lacs
Kochi, Hyderabad, Coimbatore
Work from Office
1. The resource should have knowledge of Data Warehouse and Data Lake concepts
2. Should be able to build data pipelines using PySpark
3. Should be strong in SQL skills
4. Should have exposure to the AWS environment and services like S3, EC2, EMR, Athena, Redshift, etc.
5. Good to have programming skills in Python
Posted 3 weeks ago
7.0 - 12.0 years
30 - 45 Lacs
Noida, Pune, Gurugram
Hybrid
Role: Lead Data Engineer
Experience: 7-12 years

Must-Have:
- 7+ years of relevant experience in Data Engineering and delivery.
- 7+ years of relevant work experience in Big Data concepts.
- Worked on cloud implementations.
- Experience in Snowflake, SQL, and AWS (Glue, EMR, S3, Aurora, RDS, AWS architecture).
- Good experience with AWS cloud and microservices: AWS Glue, S3, Python, and PySpark.
- Good aptitude, strong problem-solving abilities, analytical skills, and the ability to take ownership as appropriate.
- Should be able to do coding, debugging, performance tuning, and deployment of apps to the production environment.
- Experience working in Agile methodology.
- Ability to learn new technologies quickly and help the team do the same.
- Excellent communication and coordination skills.

Good to Have:
- Experience with DevOps tools (Jenkins, Git, etc.) and practices, continuous integration, and continuous delivery (CI/CD) pipelines.
- Spark, Python, SQL (exposure to Snowflake), Big Data concepts, AWS Glue.
- Worked on cloud implementations (migration, development, etc.).

Role & Responsibilities:
- Be accountable for the delivery of the project within the defined timelines with good quality.
- Work with the clients and offshore leads to understand requirements, come up with high-level designs, and complete development and unit testing activities.
- Keep all stakeholders updated about task status and any risks or issues.
- Keep all stakeholders updated about project status and any risks or issues.
- Work closely with management wherever and whenever required to ensure smooth execution and delivery of the project.
- Guide the team technically and give the team direction on how to plan, design, implement, and deliver projects.

Education: BE/B.Tech from a reputed institute.
Posted 3 weeks ago
11.0 - 13.0 years
35 - 50 Lacs
Bengaluru
Work from Office
Principal AWS Data Engineer
Location: Bangalore
Experience: 9-12 years

Job Summary:
In this key leadership role, you will lead the development of foundational components for a Lakehouse architecture on AWS and drive the migration of existing data processing workflows to the new Lakehouse solution. You will work across the Data Engineering organisation to design and implement scalable data infrastructure and processes using technologies such as Python, PySpark, EMR Serverless, Iceberg, Glue, and the Glue Data Catalog. The main goal of this position is to ensure successful migration and establish robust data quality governance across the new platform, enabling reliable and efficient data processing. Success in this role requires deep technical expertise, exceptional problem-solving skills, and the ability to lead and mentor within an agile team.

Must-Have Tech Skills:
- Prior Principal Engineer experience, leading team best practices in design, development, and implementation, mentoring team members, and fostering a culture of continuous learning and innovation
- Extensive experience in software architecture and solution design, including microservices, distributed systems, and cloud-native architectures
- Expert in Python and Spark, with a deep focus on ETL data processing and data engineering practices
- Deep technical knowledge of AWS data services and engineering practices, with demonstrable experience implementing data pipelines using tools like EMR, AWS Glue, AWS Lambda, AWS Step Functions, API Gateway, and Athena
- Experience delivering Lakehouse solutions/architectures (see the sketch below)

Nice-to-Have Tech Skills:
- Knowledge of additional programming languages and development tools to provide flexibility and adaptability across varied data engineering projects
- A master's degree or relevant certifications (e.g., AWS Certified Solutions Architect, Certified Data Analytics) is advantageous

Key Accountabilities:
- Lead complex projects autonomously, fostering an inclusive and open culture within development teams.
- Mentor team members and lead technical discussions.
- Provide strategic guidance on best practices in design, development, and implementation.
- Lead the development of high-quality, efficient code and develop the tools and applications needed to address complex business needs.
- Collaborate closely with architects, Product Owners, and dev team members to decompose solutions into Epics, leading the design and planning of these components.
- Drive the migration of existing data processing workflows to a Lakehouse architecture, leveraging Iceberg capabilities.
- Serve as an internal subject matter expert in software development, advising stakeholders on best practices in design, development, and implementation.

Key Skills:
- Deep technical knowledge of data engineering solutions and practices.
- Expert in AWS services and cloud solutions, particularly as they pertain to data engineering practices.
- Extensive experience in software architecture and solution design.
- Specialized expertise in Python and Spark.
- Ability to provide technical direction, set high standards for code quality, and optimize performance in data-intensive environments.
- Skilled in leveraging automation tools and Continuous Integration/Continuous Deployment (CI/CD) pipelines to streamline development, testing, and deployment.
- Exceptional communicator who can translate complex technical concepts for diverse stakeholders, including engineers, product managers, and senior executives.
- Provides thought leadership within the engineering team, setting high standards for quality, efficiency, and collaboration.
- Experienced in mentoring engineers, guiding them in advanced coding practices, architecture, and strategic problem-solving to enhance team capabilities.

Educational Background:
A bachelor's degree in Computer Science, Software Engineering, or a related field is essential.

Bonus Skills:
Financial Services expertise preferred, working with Equity and Fixed Income asset classes and a working knowledge of indices.
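As a hedged illustration of the Iceberg-on-AWS pattern described above (not this team's actual setup), a PySpark snippet that configures an Iceberg catalog backed by the Glue Data Catalog and writes a partitioned table. The catalog name, warehouse bucket, namespace, table, and column are hypothetical, and the job assumes the Iceberg Spark runtime and AWS bundle jars are on the classpath.

```python
from pyspark.sql import SparkSession

# Configure an Iceberg catalog ("lake") backed by the AWS Glue Data Catalog.
# Assumes iceberg-spark-runtime and the Iceberg AWS bundle are available.
spark = (
    SparkSession.builder
    .appName("iceberg-sketch")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.catalog-impl",
            "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.lake.warehouse",
            "s3://example-bucket/warehouse/")  # hypothetical warehouse path
    .config("spark.sql.catalog.lake.io-impl",
            "org.apache.iceberg.aws.s3.S3FileIO")
    .getOrCreate()
)

df = spark.read.parquet("s3://example-bucket/raw/trades/")  # hypothetical input

# Create or replace an Iceberg table, partitioned by trade date,
# via the DataFrameWriterV2 API. The "markets" namespace is hypothetical.
df.writeTo("lake.markets.trades").using("iceberg") \
  .partitionedBy(df.trade_date).createOrReplace()
```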
Posted 3 weeks ago
6.0 - 7.0 years
27 - 42 Lacs
Bengaluru
Work from Office
Job Summary:
Experience: 5-8 years
Location: Bangalore

Contribute to building state-of-the-art data platforms in AWS, leveraging Python and Spark. Be part of a dynamic team building data solutions in a supportive and hybrid work environment. This role is ideal for an experienced data engineer looking to step into a leadership position while remaining hands-on with cutting-edge technologies. You will design, implement, and optimize ETL workflows using Python and Spark, contributing to our robust data Lakehouse architecture on AWS. Success in this role requires technical expertise, strong problem-solving skills, and the ability to collaborate effectively within an agile team.

Must-Have Tech Skills:
- Demonstrable experience as a senior data engineer.
- Expert in Python and Spark, with a deep focus on ETL data processing and data engineering practices.
- Experience implementing data pipelines using tools like EMR, AWS Glue, AWS Lambda, AWS Step Functions, API Gateway, and Athena.
- Experience with data services in a Lakehouse architecture.
- Good background and proven experience of data modelling for data platforms.

Nice-to-Have Tech Skills:
- A master's degree or relevant certifications (e.g., AWS Certified Solutions Architect, Certified Data Analytics) is advantageous.

Key Accountabilities:
- Provide guidance on best practices in design, development, and implementation, ensuring solutions meet business requirements and technical standards.
- Work closely with architects, Product Owners, and dev team members to decompose solutions into Epics, leading the design and planning of these components.
- Drive the migration of existing data processing workflows to the Lakehouse architecture, leveraging Iceberg capabilities.
- Communicate complex technical information clearly, tailoring messages to the appropriate audience to ensure alignment.

Key Skills:
- Deep technical knowledge of data engineering solutions and practices.
- Implementation of data pipelines using AWS data services and Lakehouse capabilities.
- Highly proficient in Python and Spark and familiar with a variety of development technologies.
- Skilled in decomposing solutions into components (Epics, stories) to streamline development.
- Proficient in creating clear, comprehensive documentation.
- Proficient in quality assurance practices, including code reviews, automated testing, and best practices for data validation.
- Previous Financial Services experience delivering data solutions against financial and market reference data.
- Solid grasp of Data Governance and Data Management concepts, including metadata management, master data management, and data quality.

Educational Background:
A bachelor's degree in Computer Science, Software Engineering, or a related field is essential.

Bonus Skills:
A working knowledge of indices, index construction, and asset management principles.
Posted 3 weeks ago
8.0 - 10.0 years
27 - 42 Lacs
Bengaluru
Work from Office
Job Summary:
Experience: 4-8 years
Location: Bangalore

The Data Engineer will contribute to building state-of-the-art data Lakehouse platforms in AWS, leveraging Python and Spark. You will be part of a dynamic team building innovative and scalable data solutions in a supportive and hybrid work environment. You will design, implement, and optimize workflows using Python and Spark, contributing to our robust data Lakehouse architecture on AWS. Success in this role requires previous experience building data products using AWS services, familiarity with Python and Spark, problem-solving skills, and the ability to collaborate effectively within an agile team.

Must-Have Tech Skills:
- Demonstrable previous experience as a data engineer.
- Technical knowledge of data engineering solutions and practices.
- Implementation of data pipelines using tools like EMR, AWS Glue, AWS Lambda, AWS Step Functions, API Gateway, and Athena.
- Proficient in Python and Spark, with a focus on ETL data processing and data engineering practices.

Nice-to-Have Tech Skills:
- Familiarity with data services in a Lakehouse architecture.
- Familiarity with technical design practices, allowing for the creation of scalable, reliable data products that meet both technical and business requirements.
- A master's degree or relevant certifications (e.g., AWS Certified Solutions Architect, Certified Data Analytics) is advantageous.

Key Accountabilities:
- Write high-quality code, ensuring solutions meet business requirements and technical standards.
- Work with architects, Product Owners, and development leads to decompose solutions into Epics, assisting with the design and planning of these components.
- Create clear, comprehensive technical documentation that supports knowledge sharing and compliance.
- Experience in decomposing solutions into components (Epics, stories) to streamline development.
- Actively contribute to technical discussions, supporting a culture of continuous learning and innovation.

Key Skills:
- Proficient in Python and familiar with a variety of development technologies.
- Previous experience implementing data pipelines, including the use of ETL tools to streamline data ingestion, transformation, and loading.
- Solid understanding of AWS services and cloud solutions, particularly as they pertain to data engineering practices.
- Familiar with AWS solutions including IAM, Step Functions, Glue, Lambda, RDS, SQS, API Gateway, and Athena.
- Proficient in quality assurance practices, including code reviews, automated testing, and best practices for data validation.
- Experienced in Agile development, including sprint planning, reviews, and retrospectives.

Educational Background:
A bachelor's degree in Computer Science, Software Engineering, or a related field is essential.

Bonus Skills:
- Financial Services expertise preferred, working with Equity and Fixed Income asset classes and a working knowledge of indices.
- Familiar with implementing and optimizing CI/CD pipelines; understands the processes that enable rapid, reliable releases, minimizing manual effort and supporting agile development cycles.
Posted 3 weeks ago
5.0 - 8.0 years
6 - 10 Lacs
Noida
Work from Office
Position Summary
As a staff engineer you will be part of the development team and apply your expert technical knowledge, broad knowledge of software engineering best practices, problem solving, critical thinking, and creativity to build and maintain software products that achieve technical, business, and customer experience goals, and you will inspire other engineers to do the same. You will be responsible for working with different stakeholders to accomplish business and software engineering goals.

Key Duties & Responsibilities
- Estimate and develop scalable solutions using .NET technologies in a highly collaborative agile environment, with strong experience in C#, ASP.NET Core, and Web API.
- Maintain relevant documentation around the solutions.
- Conduct code reviews and ensure SOLID principles and standard design patterns are applied to system architectures and implementations.
- Evaluate, understand, and recommend new technologies, languages, or development practices that have benefits for implementation.
- Collaborate with the Agile practitioners to help avoid distractions for the team, so that the team is focused on delivering its sprint commitments.
- Drive adoption of modern engineering practices such as continuous integration, continuous deployment, code reviews, TDD, functional and non-functional testing, test automation, and performance engineering to deliver high-quality, high-value software.
- Foster a culture and mindset of continuous learning to develop agility, using the three pillars of transparency, inspection, and adaptation across levels and geographies.
- Mentor other members of the development team.
- Lead sessions with scrum team members to structure solution source code and design implementation approaches, optimizing for code that follows engineering best practices and maximizes maintainability, testability, and performance.
- Relevant exposure to agile ways of working, preferably Scrum and Kanban.

Skills and Knowledge
- B.E/B.Tech/MCA or equivalent professional degree.
- 5-8 years of experience designing and developing n-tier web applications using the .NET Framework, .NET Core, ASP.NET, WCF, C#, MVC 4/5 web development, RESTful API services, Web API, and JSON.
- Well versed in C#, modern UI technologies, and database/ORM technologies.
- Must have a solid understanding of modern architectural and design patterns.
- Comprehensive knowledge of automation testing and modern testing practices, e.g., TDD and BDD.
- Strong exposure to one or more implementations of CI/CD using Jenkins and Docker containerization.
- Strong exposure to Agile software development methodologies and enabling tools such as Jira and Confluence.
- Excellent communicator with a demonstrable ability to influence decisions.
- Knowledge of healthcare revenue cycle management, HL7, EMR systems, HIPAA, and FHIR would be preferred.
- Good to have knowledge of Azure Cloud.
- Good working understanding of application architecture concepts like microservices, Domain-Driven Design, broker pattern/message bus, event-driven, CQRS, ports & adapters/hexagonal/onion, and SOA would be preferred.

Key Competency Profile
- Spot new opportunities by anticipating change and planning accordingly.
- Find ways to better serve customers and patients.
- Be accountable for customer service of the highest quality.
- Create connections across teams by valuing differences and including others.
- Own your development by implementing and sharing your learnings.
- Motivate each other to perform at our highest level.
- Help people improve by learning from successes and failures.
- Work the right way by acting with integrity and living our values every day.
- Succeed by proactively identifying problems and solutions for yourself and others.

Working in an evolving healthcare setting, we use our shared expertise to deliver innovative solutions. Our fast-growing team has opportunities to learn and grow through rewarding interactions, collaboration, and the freedom to explore professional interests. Our associates are given valuable opportunities to contribute, to innovate, and to create meaningful work that makes an impact in the communities we serve around the world. We also offer a culture of excellence that drives customer success and improves patient care. We believe in giving back to the community and offer a competitive benefits package. To learn more, visit r1rcm.com.
Posted 3 weeks ago