
1401 Databricks Jobs - Page 15

Set up a Job Alert
JobPe aggregates job listings so they are easy to browse in one place; applications are submitted directly on the original job portal.

8.0 - 12.0 years

25 - 27 Lacs

Bengaluru

Work from Office

About the Role
As DevOps Engineer - IV, you will design systems capable of serving as the brains of complex distributed products. You will also closely mentor younger engineers on the team and contribute to team building. Overall, you will be a strong technologist at Meesho who cares about code modularity, scalability, and re-usability.

What you will do
- Develop reusable infrastructure code and testing frameworks for infrastructure.
- Develop tools and frameworks that allow Meesho engineers to provision and manage infrastructure access controls.
- Design and develop solutions for cloud security, secrets management, and key rotation.
- Design a centralized logging and metrics platform that can handle Meesho's scale.
- Take on new infrastructure requirements and develop infrastructure code.
- Work with service teams to help them onboard the container platform.
- Scale the Meesho platform to handle millions of requests concurrently.
- Drive solutions to reduce MTTR and MTTD, enabling high availability and disaster recovery.

What you will need
Must have:
- Bachelor's or Master's in Computer Science.
- 8-12 years of in-depth, hands-on professional experience in the DevOps/Systems Engineering domain.
- Proficiency in systems, Linux, open source, infrastructure engineering, and DevOps fundamentals.
- Proficiency with container platforms such as Docker, Kubernetes, EKS/GKE, etc.
- Exceptional design and architectural skills.
- Experience building large-scale distributed systems.
- Experience with scalable, transactional (B2C) systems.
- Expertise in capacity planning, design, cost and effort estimation, and cost optimisation.
- Ability to deliver the best operations tooling and practices, including CI/CD.
- In-depth understanding of the SDLC.
- Ability to write infrastructure as code for public or private clouds.
- Ability to implement modern cloud integration architecture.
- Knowledge of configuration and infrastructure management (Terraform) or CI tools (any).
- Knowledge of a coding language: Python or Go (proficiency in any one).
- Ability to architect and implement end-to-end monitoring of solutions in the cloud.
- Ability to design for failover, high availability, MTTR, MTTD, RTO, RPO, and so on.

Good to have:
- Hands-on experience with data processing frameworks (e.g. Spark, Databricks).
- Familiarity with Big Data technologies.
- Experience with DataOps concepts and tools (e.g. Airflow, Zeplin).
- Expertise in security hardening of cloud infrastructure and application/web servers against known/unknown vulnerabilities.
- Understanding of compliance and security.
- Ability to assess business needs and requirements to ensure appropriate approaches.
- Ability to define and report on business and process metrics.
- Ability to balance governance, ownership, and freedom against reliability.
- Ability to develop and motivate individual contributors on the team.

Posted 2 weeks ago

Apply

4.0 - 6.0 years

20 - 35 Lacs

Noida, Hyderabad, Bengaluru

Hybrid

Hi All, greetings for the day!

We are currently hiring a Data Engineer (Python, PySpark, and Azure Databricks) for Emids (MNC) at the Bangalore location.

Role: Data Engineer
Experience: 5 to 8 years
Location: Bangalore, Noida, and Hyderabad (hybrid; 2 days a week in the office is a must)
Notice Period: Immediate to 15 days (preference for immediate joiners)
Note: Candidates must have experience in Python, Kafka Streams, PySpark, and Azure Databricks.

Role Overview:
We are looking for a highly skilled Data Engineer with expertise in Kafka, Python, and Azure Databricks (preferred) to drive our healthcare data engineering projects. The ideal candidate will have deep experience in real-time data streaming, cloud-based data platforms, and large-scale data processing. This role requires strong technical leadership, problem-solving abilities, and the ability to collaborate with cross-functional teams.

Key Responsibilities:
- Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks (a rough sketch of such a pipeline follows this listing).
- Architect scalable data streaming and processing solutions to support healthcare data workflows.
- Develop, optimize, and maintain ETL/ELT pipelines for structured and unstructured healthcare data.
- Ensure data integrity, security, and compliance with healthcare regulations (HIPAA, HITRUST, etc.).
- Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into technical solutions.
- Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows.
- Mentor junior engineers, conduct code reviews, and ensure best practices in data engineering.
- Stay updated with the latest cloud technologies, big data frameworks, and industry trends.

Required Skills & Qualifications:
- 4+ years of experience in data engineering, with strong proficiency in Kafka and Python.
- Expertise in Kafka Streams, Kafka Connect, and Schema Registry for real-time data processing.
- Experience with Azure Databricks (or willingness to learn and adopt it quickly).
- Hands-on experience with cloud platforms (Azure preferred; AWS or GCP is a plus).
- Proficiency in SQL, NoSQL databases, and data modeling for big data processing.
- Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for data applications.
- Experience working with healthcare data (EHR, claims, HL7, FHIR, etc.) is a plus.
- Strong analytical skills, a problem-solving mindset, and the ability to lead complex data projects.
- Excellent communication and stakeholder management skills.

Note: This is not a contract position; it is a permanent position with Emids.

Interested candidates can share their updated profile with the details below to Ravi.chekka@emids.com.
NAME:
CCTC:
ECTC:
Notice Period:
Offers in Hand:
Email ID:
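For illustration only, the kind of real-time pipeline this listing describes might look roughly like the following PySpark Structured Streaming sketch. The broker address, topic name, schema, and storage paths are hypothetical placeholders, not details from the listing.

```python
# Illustrative sketch only: a minimal PySpark Structured Streaming job that reads a
# Kafka topic and lands records in a Delta table. All names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("claims-stream-demo").getOrCreate()

schema = StructType([
    StructField("claim_id", StringType()),
    StructField("patient_id", StringType()),
    StructField("status", StringType()),
    StructField("updated_at", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
       .option("subscribe", "claims-events")                # placeholder topic
       .load())

parsed = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), schema).alias("event"))
          .select("event.*"))

(parsed.writeStream
 .format("delta")
 .option("checkpointLocation", "/mnt/checkpoints/claims")  # placeholder path
 .outputMode("append")
 .start("/mnt/delta/claims"))                              # placeholder path
```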

Posted 2 weeks ago

Apply

6.0 - 11.0 years

8 - 16 Lacs

Hyderabad, Pune, Chennai

Hybrid

Data Engineer with strong experience in Azure Databricks and Python.

Must have: Databricks, Python, Azure.
Good to have: ADF.
The candidate must be proficient in Databricks.

Posted 2 weeks ago

Apply

7.0 - 12.0 years

1 - 2 Lacs

Hyderabad

Remote

Role & Responsibilities
We are looking for a highly experienced Senior Cloud Data Engineer to lead the design, development, and optimization of our cloud-based data infrastructure. This role requires deep technical expertise in AWS services, data engineering best practices, and infrastructure automation. You will be instrumental in shaping our data architecture and enabling data-driven decision-making across the organization.

Key Responsibilities:
- Design, build, and maintain scalable and secure data pipelines using AWS Glue, Redshift, and Python (a simple example is sketched after this listing).
- Develop and optimize SQL queries and stored procedures for complex data transformations and migrations.
- Automate infrastructure provisioning and deployment using Terraform, ensuring repeatability and compliance.
- Architect and implement data lake and data warehouse solutions on AWS.
- Collaborate with cross-functional teams including data scientists, analysts, and DevOps to deliver high-quality data solutions.
- Monitor, troubleshoot, and optimize data workflows for performance, reliability, and cost-efficiency.
- Implement data quality checks, validation frameworks, and monitoring tools.
- Ensure data security, privacy, and compliance with industry standards and regulations.
- Lead code reviews, mentor junior engineers, and promote best practices in data engineering.
- Participate in capacity planning, cost optimization, and performance tuning of cloud data infrastructure.
- Evaluate and integrate new tools and technologies to improve data engineering capabilities.
- Document technical designs, processes, and operational procedures.
- Support business intelligence and analytics teams by ensuring timely and accurate data availability.

Required Skills & Experience:
- 10+ years of experience in data engineering or cloud data architecture.
- Strong expertise in AWS Redshift, including schema design, performance tuning, and workload management.
- Proficiency in SQL and stored procedures for ETL and data migration tasks.
- Hands-on experience with Terraform for infrastructure as code (IaC) in AWS environments.
- Deep knowledge of AWS Glue for ETL orchestration and job development.
- Advanced programming skills in Python, especially for data processing and automation.
- Solid understanding of data warehousing, data lakes, and cloud-native data architectures.

Preferred candidate profile:
- AWS certifications (e.g., AWS Certified Data Analytics - Specialty, AWS Certified Solutions Architect).
- Experience with CI/CD pipelines and DevOps practices.
- Familiarity with additional AWS services like S3, Lambda, CloudWatch, Step Functions, and IAM.
- Knowledge of data governance, lineage, and cataloging tools (e.g., AWS Glue Data Catalog, Apache Atlas).
- Experience with real-time data processing frameworks (e.g., Kinesis, Kafka, Spark Streaming).
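As a rough illustration of the Glue-based pipeline work described above, a minimal AWS Glue PySpark job might look like the sketch below. The catalog database, table, column mappings, and S3 path are hypothetical placeholders.

```python
# Illustrative sketch only: a small AWS Glue job that reads a catalog table,
# maps a couple of columns, and writes Parquet to S3. Names are placeholders.
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a source table registered in the Glue Data Catalog (placeholder names).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_raw", table_name="orders")

# Keep and type a subset of columns.
mapped = ApplyMapping.apply(
    frame=orders,
    mappings=[("order_id", "string", "order_id", "string"),
              ("amount", "double", "amount", "double")])

# Write curated output to S3 for later load into Redshift (placeholder bucket).
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet")

job.commit()
```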

Posted 2 weeks ago

Apply

6.0 - 11.0 years

15 - 20 Lacs

Pune, Bengaluru

Hybrid

Knowledge/Experience
- Proven knowledge of physical and logical data modelling in a data warehouse environment, including the successful creation of conformed dimensional models from a range of legacy source systems alongside modern SaaS/cloud business applications.
- Experience in a similar role within insurance (ideally health insurance) or a similarly complex and regulated industry, and able to demonstrate a sound working business knowledge of its operation.
- Experienced at capturing technical and business metadata, including being able to elicit and create sound definitions for entities and attributes.
- Practiced and able to query data from source or raw data and reverse engineer an underlying data model and data definitions.
- Experienced in writing scripts for data transformation using SQL, DDL, DML, and PySpark.
- Good knowledge of and exposure to software development lifecycles and good engineering practices.
- Can demonstrate a good working knowledge of data modelling patterns and when to use them.

Technical skills
- Entity relationship, dimensional, and NoSQL modelling as appropriate to data warehousing, business intelligence, and analytical approaches, using IE or other common notations.
- SQL, DDL, DML, and PySpark scripting.
- ERWIN and Visio data modelling/UML tools.
- Ideally, Azure Data Factory, Azure DevOps, and Databricks.

Posted 2 weeks ago

Apply

5.0 - 7.0 years

9 - 14 Lacs

Noida

Work from Office

We are seeking a skilled AWS Databricks Platform Administrator to manage and optimize our Databricks environment. The ideal candidate will have strong expertise in user access management and user persona development, and the ability to collaborate with architects to implement configuration changes. This role involves ensuring the security, performance, and reliability of the Databricks platform while supporting users and maintaining compliance with organizational policies.

- Good experience with the SDLC; Databricks platform administration is a must.
- Must have security and access control experience, including user provisioning.
- Services integration experience.
- Should be able to work with enterprise architects.
- Good to have: API experience.

Required Skills & Qualifications
- 5-7 years of experience as a Databricks Administrator or in a similar role.
- Strong experience with AWS services (IAM, S3, EC2, Lambda, Glue, etc.).
- Expertise in Databricks administration, workspace management, and security configurations.
- Hands-on experience with AD groups, user access management, RBAC, and IAM policies.
- Experience in developing and managing user personas within enterprise environments.
- Strong understanding of network security, authentication, and data governance.
- Proficiency in Python, SQL, and Spark for troubleshooting and automation.
- Familiarity with Terraform, CloudFormation, or Infrastructure as Code (IaC) is a plus.
- Knowledge of CI/CD pipelines and DevOps best practices is desirable.
- Excellent communication and documentation skills.

Preferred Certifications (nice to have)
- AWS Certified Solutions Architect (Associate/Professional)
- Databricks Certified Data Engineer / Administrator
- Certified Information Systems Security Professional (CISSP)

Mandatory Competencies: Data Science - Databricks; Cloud - AWS; Cloud - Azure; Cloud - AWS Lambda; Data on Cloud - AWS S3; Python - Python; Database - SQL; Big Data - SPARK; Beh - Communication and collaboration.

Posted 2 weeks ago

Apply

10.0 - 15.0 years

8 - 18 Lacs

Nashik

Work from Office

Proven experience as a Data Architect, preferably in a healthcare setting. Experience with data modeling tools, database management systems (e.g., SQL, NoSQL), and ETL processes. Experience with cloud-based databases, data warehousing, and data lakes.

Required Candidate Profile
In-depth knowledge of healthcare data standards such as HL7, ICD-10, CPT, and SNOMED. Experience developing and maintaining data architecture, ensuring data quality, and supporting data-driven decision-making.

Posted 2 weeks ago

Apply

10.0 - 15.0 years

8 - 18 Lacs

Nagpur

Work from Office

Proven experience as a Data Architect, preferably in a healthcare setting. Experience with data modeling tools, database management systems (e.g., SQL, NoSQL), and ETL processes. Experience with cloud-based databases, data warehousing, and data lakes.

Required Candidate Profile
In-depth knowledge of healthcare data standards such as HL7, ICD-10, CPT, and SNOMED. Experience developing and maintaining data architecture, ensuring data quality, and supporting data-driven decision-making.

Posted 2 weeks ago

Apply

10.0 - 15.0 years

8 - 18 Lacs

Sindhudurg

Work from Office

Proven experience as a Data Architect, preferably in a healthcare setting. Experience with data modeling tools, database management systems (e.g., SQL, NoSQL), and ETL processes. Experience with cloud-based databases, data warehousing, and data lakes.

Required Candidate Profile
In-depth knowledge of healthcare data standards such as HL7, ICD-10, CPT, and SNOMED. Experience developing and maintaining data architecture, ensuring data quality, and supporting data-driven decision-making.

Posted 2 weeks ago

Apply

10.0 - 15.0 years

8 - 18 Lacs

Pune

Work from Office

Proven experience as a Data Architect, preferably in a healthcare setting. Experience with data modeling tools, database management systems (e.g., SQL, NoSQL), and ETL processes. Experience with cloud-based databases, data warehousing, and data lakes.

Required Candidate Profile
In-depth knowledge of healthcare data standards such as HL7, ICD-10, CPT, and SNOMED. Experience developing and maintaining data architecture, ensuring data quality, and supporting data-driven decision-making.

Posted 2 weeks ago

Apply

3.0 - 6.0 years

5 - 9 Lacs

Bengaluru

Work from Office

About the Opportunity
Job Type: Application, 31 July 2025

Strategic Impact
As a Senior Data Engineer, you will directly contribute to our key organizational objectives:

Accelerated Innovation
- Enable rapid development and deployment of data-driven products through scalable, cloud-native architectures.
- Empower analytics and data science teams with self-service, real-time, and high-quality data access.
- Shorten time-to-insight by automating data ingestion, transformation, and delivery pipelines.

Cost Optimization
- Reduce infrastructure costs by leveraging serverless, pay-as-you-go, and managed cloud services (e.g., AWS Glue, Databricks, Snowflake).
- Minimize manual intervention through orchestration, monitoring, and automated recovery of data workflows.
- Optimize storage and compute usage with efficient data partitioning, compression, and lifecycle management.

Risk Mitigation
- Improve data governance, lineage, and compliance through metadata management and automated policy enforcement.
- Increase data quality and reliability with robust validation, monitoring, and alerting frameworks.
- Enhance system resilience and scalability by adopting distributed, fault-tolerant architectures.

Business Enablement
- Foster cross-functional collaboration by building and maintaining well-documented, discoverable data assets (e.g., data lakes, data warehouses, APIs).
- Support advanced analytics, machine learning, and AI initiatives by ensuring timely, trusted, and accessible data.
- Drive business agility by enabling rapid experimentation and iteration on new data products and features.

Key Responsibilities
- Design, develop, and maintain scalable data pipelines and architectures to support data ingestion, integration, and analytics.
- Be accountable for technical delivery and take ownership of solutions.
- Lead a team of senior and junior developers, providing mentorship and guidance.
- Collaborate with enterprise architects, business analysts, and stakeholders to understand data requirements, validate designs, and communicate progress.
- Drive technical innovation within the department to increase code reusability, code quality, and developer productivity.
- Challenge the status quo by bringing the very latest data engineering practices and techniques.

About you: Core Technical Skills
- Expert in leveraging cloud-based data platform capabilities (Snowflake, Databricks) to create an enterprise lakehouse.
- Advanced expertise with the AWS ecosystem and experience using a variety of core AWS data services such as Lambda, EMR, MSK, Glue, and S3.
- Experience designing event-based or streaming data architectures using Kafka.
- Advanced expertise in Python and SQL; open to expertise in Java/Scala, but enterprise experience of Python is required.
- Expert in designing, building, and using CI/CD pipelines to deploy infrastructure (Terraform) and pipelines with test automation.
- Data security and performance optimization: experience implementing data access controls to meet regulatory requirements.
- Experience using both RDBMS (Oracle, Postgres, MSSQL) and NoSQL (Dynamo, OpenSearch, Redis) offerings.
- Experience implementing CDC ingestion.
- Experience using orchestration tools (Airflow, Control-M, etc.).
- Significant experience in software engineering practices using GitHub, code verification, validation, and use of copilots.

Bonus Technical Skills
- Strong experience in containerisation and deploying applications to Kubernetes.
- Strong experience in API development using Python-based frameworks like FastAPI.

Key Soft Skills
- Problem-solving: leadership experience in problem-solving and technical decision-making.
- Communication: strong in strategic communication and stakeholder engagement.
- Project management: experienced in overseeing project lifecycles, working with Project Managers to manage resources.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Hyderabad

Work from Office

Lead Data Engineer - Data Management

Company Overview
Accordion works at the intersection of sponsors and management teams throughout every stage of the investment lifecycle, providing hands-on, execution-focused support to elevate data and analytics capabilities. So, what does it mean to work at Accordion? It means joining 1,000+ analytics, data science, finance & technology experts in a high-growth, agile, and entrepreneurial environment while transforming how portfolio companies drive value. It also means making your mark on Accordion's future by embracing a culture rooted in collaboration and a firm-wide commitment to building something great, together. Headquartered in New York City with 10 offices worldwide, Accordion invites you to join our journey.

Data & Analytics (Accordion | Data & Analytics)
Accordion's Data & Analytics (D&A) team delivers cutting-edge, intelligent solutions to a global clientele, leveraging a blend of domain knowledge, sophisticated technology tools, and deep analytics capabilities to tackle complex business challenges. We partner with Private Equity clients and their Portfolio Companies across diverse sectors, including Retail, CPG, Healthcare, Media & Entertainment, Technology, and Logistics. The D&A team delivers data and analytical solutions designed to streamline reporting capabilities and enhance business insights across vast and complex data sets ranging from Sales, Operations, Marketing, Pricing, Customer Strategies, and more.

Location: Hyderabad

Role Overview
Accordion is looking for a Lead Data Engineer, who will be responsible for the design, development, configuration/deployment, and maintenance of the above technology stack. They must have an in-depth understanding of the various tools and technologies in this domain to design and implement robust and scalable solutions that address clients' current and future requirements at optimal cost. The Lead Data Engineer should be able to evaluate existing architectures and recommend ways to upgrade and improve their performance, both for on-premises and cloud-based solutions. A successful Lead Data Engineer will possess strong working business knowledge and familiarity with multiple tools and techniques, along with industry standards and best practices in Business Intelligence and Data Warehousing environments, as well as strong organizational, critical thinking, and communication skills.

What you will do
- Partner with clients to understand their business and create comprehensive business requirements.
- Develop an end-to-end Business Intelligence framework based on requirements, including recommending the appropriate architecture (on-premises or cloud), analytics, and reporting.
- Work closely with the business and technology teams to guide solution development and implementation.
- Work closely with the business teams to arrive at methodologies to develop KPIs and metrics.
- Work with the Project Manager to develop and execute project plans within the assigned schedule and timeline.
- Develop standard reports and functional dashboards based on business requirements.
- Conduct training programs and knowledge transfer sessions for junior developers when needed.
- Recommend improvements to provide optimum reporting solutions.
- Stay curious about new tools and technologies to provide futuristic solutions for clients.

Ideally, you have
- An undergraduate degree (B.E/B.Tech.); tier-1/tier-2 colleges are preferred.
- More than 5 years of experience in a related field.
- Proven expertise in SSIS, SSAS, and SSRS (MSBI Suite).
- In-depth knowledge of databases (SQL Server, MySQL, Oracle, etc.) and a data warehouse (any one of Azure Synapse, AWS Redshift, Google BigQuery, Snowflake, etc.).
- In-depth knowledge of business intelligence tools (any one of Power BI, Tableau, Qlik, DOMO, Looker, etc.).
- A good understanding of Azure or AWS: Azure (Data Factory & Pipelines, SQL Database & Managed Instances, DevOps, Logic Apps, Analysis Services) or AWS (Glue, Aurora Database, Dynamo Database, Redshift, QuickSight).
- Proven ability to take initiative and be innovative.
- An analytical mind with a problem-solving attitude.

Why explore a career at Accordion
- High growth environment: semi-annual performance management and promotion cycles, coupled with a strong meritocratic culture, enable a fast track to leadership responsibility.
- Cross-domain exposure: interesting and challenging work streams across industries and domains that always keep you excited, motivated, and on your toes.
- Entrepreneurial environment: intellectual freedom to make decisions and own them. We expect you to spread your wings and assume larger responsibilities.
- Fun culture and peer group: a non-bureaucratic and fun working environment, with a strong peer environment that will challenge you and accelerate your learning curve.

Other benefits for full-time employees:
- Health and wellness programs that include employee health insurance covering immediate family members and parents, term life insurance for employees, free health camps for employees, discounted health services (including vision and dental) for employees and family members, free doctor consultations, counsellors, etc.
- Corporate meal card options for ease of use and tax benefits.
- Team lunches, company-sponsored team outings and celebrations.
- Cab reimbursement for women employees beyond a certain time of the day.
- Robust leave policy to support work-life balance, including a specially designed leave structure to support women employees for maternity and related requests.
- Reward and recognition platform to celebrate professional and personal milestones.
- A positive and transparent work environment, including various employee engagement and employee benefit initiatives to support personal and professional learning and development.

Posted 2 weeks ago

Apply

7.0 - 10.0 years

32 - 40 Lacs

Bengaluru

Work from Office

Job Title: Project & Change Lead, AVP
Location: Bangalore, India

Role Description
We are looking for an experienced Business Implementation Change Manager to lead a variety of regional/global change initiatives. Utilizing the tenets of PMI, you will lead and/or support cross-functional initiatives that transform the way we run our operations. If you like to solve complex problems, have a "get things done" attitude, and are looking for a highly visible, dynamic role where your voice is heard and your experience is appreciated, come talk to us!

What we'll offer you
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities
- Responsible for Business Implementation change management planning, execution, and reporting, adhering to governance standards and ensuring transparency around progress status.
- Using data to tell the implementation story; maintain risk management controls; monitor, resolve as appropriate, and communicate initiative risks.
- Collaborate with other departments as required to execute on timelines and meet the strategic goals.
- As part of the larger team, accountable for the delivery and adoption of the global change portfolio, including but not limited to business case development/analysis, reporting, measurement and reporting of adoption success measures, and continuous improvement.
- As required, using data to tell the story, participate in Working Group and Steering Committee meetings to achieve the right level of decision making and progress/transparency, establishing strong partnerships and collaborative relationships with various stakeholder groups to remove constraints to adoption success and carry learnings forward to future projects.
- As required, develop and document end-to-end roles and responsibilities, including process flows, operating procedures, and required controls; gather and document business requirements (user stories), including liaising with end-users and performing analysis of gathered data; train on new features/functions; support hypercare and adoption constraints.
- Heavily involved in the product development journey.

Your skills and experience
- Overall experience of at least 7-10 years providing business implementation management to complex change programs/projects, communicating and driving transformation initiatives using the tenets of PMI in a highly matrixed environment.
- Banking/finance/regulated industry experience, of which at least 2 years should be in the change/transformation space or associated with change/transformation initiatives, is a plus.
- Knowledge of client lifecycle processes and procedures, and experience with KYC data structures/data flows, is preferred.
- Experience working with management reporting is preferred.
- Bachelor's degree.

How we'll support you

About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm

We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative, and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.

Posted 2 weeks ago

Apply

7.0 - 10.0 years

32 - 40 Lacs

Jaipur

Work from Office

Job Title: Project & Change Lead, AVP
Location: Jaipur, India

Role Description
We are looking for an experienced Business Implementation Change Manager to lead a variety of regional/global change initiatives. Utilizing the tenets of PMI, you will lead and/or support cross-functional initiatives that transform the way we run our operations. If you like to solve complex problems, have a "get things done" attitude, and are looking for a highly visible, dynamic role where your voice is heard and your experience is appreciated, come talk to us!

What we'll offer you
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities
- Responsible for Business Implementation change management planning, execution, and reporting, adhering to governance standards and ensuring transparency around progress status.
- Using data to tell the implementation story; maintain risk management controls; monitor, resolve as appropriate, and communicate initiative risks.
- Collaborate with other departments as required to execute on timelines and meet the strategic goals.
- As part of the larger team, accountable for the delivery and adoption of the global change portfolio, including but not limited to business case development/analysis, reporting, measurement and reporting of adoption success measures, and continuous improvement.
- As required, using data to tell the story, participate in Working Group and Steering Committee meetings to achieve the right level of decision making and progress/transparency, establishing strong partnerships and collaborative relationships with various stakeholder groups to remove constraints to adoption success and carry learnings forward to future projects.
- As required, develop and document end-to-end roles and responsibilities, including process flows, operating procedures, and required controls; gather and document business requirements (user stories), including liaising with end-users and performing analysis of gathered data; train on new features/functions; support hypercare and adoption constraints.
- Heavily involved in the product development journey.

Your skills and experience
- Overall experience of at least 7-10 years providing business implementation management to complex change programs/projects, communicating and driving transformation initiatives using the tenets of PMI in a highly matrixed environment.
- Banking/finance/regulated industry experience, of which at least 2 years should be in the change/transformation space or associated with change/transformation initiatives, is a plus.
- Knowledge of client lifecycle processes and procedures, and experience with KYC data structures/data flows, is preferred.
- Experience working with management reporting is preferred.
- Bachelor's degree.

How we'll support you

Posted 2 weeks ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office

We are seeking a skilled and motivated Data Engineer with hands-on experience in Snowflake, Azure Data Factory (ADF), and Fivetran. The ideal candidate will be responsible for building and optimizing data pipelines, ensuring efficient data integration and transformation to support analytics and business intelligence initiatives.

Key Responsibilities:
- Design, develop, and maintain robust data pipelines using Fivetran, ADF, and other ETL tools.
- Build and manage scalable data models and data warehouses on Snowflake.
- Integrate data from various sources into Snowflake using automated workflows.
- Implement data transformation and cleansing processes to ensure data quality and integrity.
- Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements.
- Monitor pipeline performance, troubleshoot issues, and optimize for efficiency.
- Maintain documentation related to data architecture, processes, and workflows.
- Ensure data security and compliance with company policies and industry standards.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- 3+ years of experience in data engineering or a similar role.
- Proficiency with Snowflake, including architecture, SQL scripting, and performance tuning.
- Hands-on experience with Azure Data Factory (ADF) for pipeline orchestration and data integration.
- Experience with Fivetran or similar ELT/ETL automation tools.
- Strong SQL skills and familiarity with data warehousing best practices.
- Knowledge of cloud platforms, preferably Microsoft Azure.
- Familiarity with version control tools (e.g., Git) and CI/CD practices.
- Excellent communication and problem-solving skills.

Preferred Qualifications:
- Experience with Python, dbt, or other data transformation tools.
- Understanding of data governance, data quality, and compliance frameworks.
- Knowledge of additional data tools (e.g., Power BI, Databricks, Kafka) is a plus.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Noida, India

Work from Office

Responsibilities & Duties:
- Collaborate with the Sr. Data Governance Analyst to execute on data quality rules identified in the data catalog.
- Transform business data quality rules into SQL statements and integrate them into the data quality engine (a simple example is sketched after this listing).
- Provide ongoing support and administration of a customized data quality engine.
- Partner with the Analytics team to implement a data quality scorecard and monitor progress of the company's data quality efforts and adherence to data quality rules, standards, and policies.
- Collaborate with data stewards in researching data quality issues, identifying the root cause, understanding the business impact, and recommending corrective actions.
- Participate in data exercises including profiling, mapping, modeling, auditing, testing, etc., as necessary.
- Develop and implement data quality processes and standards, and build cross-organizational awareness of best-practice data quality techniques.
- Facilitate change management, communication, training, and education activities as necessary.
- Strong and excellent communication.

Qualifications & Key Skills
- 3+ years working in a data quality role (logistics industry preferred).
- Strong understanding of data quality best practices and proven experience increasing data quality.
- Technical skills required, including SQL.
- Intellectual curiosity and the ability to easily identify patterns and trends in data.
- Familiarity with data pipelines and data lakehouses (Databricks is preferred).
- Experience designing and/or developing a data quality scorecard (Qlik and Sigma are preferred).
- Knowledge of modern data quality solutions.
- Strong business and technical acumen with experience across data domains.
- Strong analytical skills, organizational skills, and attention to detail.
- Strong verbal and written communication skills.
- Self-motivated and comfortable with ambiguity.
- Proactively seeks opportunities to broaden and deepen knowledge base and proficiencies.

Mandatory Competencies: Data Science - Data Analyst; Database - SQL; Data Science - Databricks; Data Analysis - Data Analysis; Beh - Communication and collaboration.
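As a rough illustration of turning a business rule into a SQL data quality check of the kind described above, the sketch below assumes a Spark/Databricks environment; the table, column, and rule are hypothetical placeholders.

```python
# Illustrative sketch only: evaluating one business data quality rule as SQL and
# producing a pass-rate metric that could feed a data quality scorecard.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical rule: every shipment record must have a positive declared weight.
rule_sql = """
    SELECT COUNT(*) AS violations
    FROM logistics.shipments
    WHERE declared_weight_kg IS NULL OR declared_weight_kg <= 0
"""

violations = spark.sql(rule_sql).collect()[0]["violations"]
total = spark.table("logistics.shipments").count()

# Simple pass-rate metric for the scorecard.
pass_rate = 1.0 if total == 0 else (total - violations) / total
print(f"weight-rule pass rate: {pass_rate:.2%} ({violations} violations)")
```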

Posted 2 weeks ago

Apply

4.0 - 9.0 years

10 - 15 Lacs

Pune

Work from Office

MS Azure Infra (must); PaaS will be a plus. Solutions must meet regulatory standards and manage risk effectively. Hands-on experience using Terraform to design and deploy solutions (at least 5+ years), adhering to best practices to minimize risk and ensure compliance with regulatory requirements.

Primary Skill
AWS Infra along with PaaS will be an added advantage. Certification in Terraform is an added advantage. Certification in Azure and AWS is an added advantage. Can handle large audiences to present HLD, LLD, and ERC. Able to drive solutions/projects independently and lead projects with a focus on risk management and regulatory compliance.

Secondary Skills
Amazon Elastic File System (EFS), Amazon Redshift, Amazon S3, Apache Spark, Ataccama DQ Analyzer, AWS Apache Airflow, AWS Athena, Azure Data Factory, Azure Data Lake Storage Gen2 (ADLS), Azure Databricks, Azure Event Hub, Azure Stream Analytics, Azure Synapse Analytics, BigID, C++, Cloud Storage, Collibra Data Governance (DG), Collibra Data Quality (DQ), Data Lake Storage, Data Vault Modeling, Databricks, DataProc, DDI, Dimensional Data Modeling, EDC AXON, Electronic Medical Record (EMR), Extract, Transform & Load (ETL), Financial Services Logical Data Model (FSLDM), Google Cloud Platform (GCP) BigQuery, Google Cloud Platform (GCP) Bigtable, Google Cloud Platform (GCP) Dataproc, HQL, IBM InfoSphere Information Analyzer, IBM Master Data Management (MDM), Informatica Data Explorer, Informatica Data Quality (IDQ), Informatica Intelligent Data Management Cloud (IDMC), Informatica Intelligent MDM SaaS, Inmon methodology, Java, Kimball Methodology, Metadata Encoding & Transmission Standards (METS), Metasploit, Microsoft Excel, Microsoft Power BI, NewSQL, NoSQL, OpenRefine, OpenVAS, Performance Tuning, Python, R, RDD Optimization, SAS, SQL, Tableau, Tenable Nessus, TIBCO Clarity.

Posted 2 weeks ago

Apply

4.0 - 5.0 years

6 - 7 Lacs

Hyderabad

Work from Office

Who are we?
CDK Global is the largest technical solutions provider for the automotive retail industry, setting the landscape for automotive dealers, original equipment manufacturers (OEMs), and the customers they serve. As a technology company, we have a significant focus on moving our applications to the public cloud and, in the process, are working on multiple transformation/modernization initiatives.

Be Part of Something Bigger
Each year, more than three percent of the U.S. gross domestic product (GDP) is attributed to the auto industry, which flows through our customer, the auto dealer. It's time you joined an evolving marketplace where research and development investment is measured in the tens of billions. It's time you were a part of something bigger. We're expanding our workforce: engineers, architects, developers and more, onboarding early adopters who can optimize, pivot and keep pace with ever-evolving development roadmaps and applications.

Join Our Team
Growth potential, flexibility and material impact on the success and quality of a next-gen, enterprise software product make CDK an excellent choice for those who thrive in challenging, fast-paced engineering environments. The possibilities for impact are endless. We have exceptional opportunities to evolve our industry by driving change through new technology. If you're ready for high impact, you're ready for CDK.

Location: Hyderabad, India

Role:
- Define/maintain/implement CDK's public cloud standards, including secrets management, storage, compute, networking, account management, database and operations.
- Leverage tools like AWS Trusted Advisor, third-party cloud cost management tools and scripting to identify and drive cost optimization, working with application owners to achieve the cost savings.
- Design and implement cloud security controls that create guard rails for application teams to work within, ensuring proper platform security for applications deployed within the CDK cloud environments.
- Design/develop/implement cloud solutions. Leveraging cloud-native services, wrap the appropriate security, automation and service levels around them to support CDK business needs. Examples of solutions this role will be responsible for developing and supporting are business continuity/backup and recovery, identity and access management, and data services including long-term archival, DNS, etc.
- Develop/maintain/implement cloud platform standards (user access and roles, tagging, security/compliance controls, operations management, performance management and configuration management).
- Write, and eventually automate, operational run-books for operations; assist application teams with automating their production support run-books (automate everywhere).
- Assist application teams when they have issues using AWS services where they are not fully up to speed in their use; hands-on development of automation solutions to support application teams.
- Define and maintain minimum application deployment standards (governance, cost management and tech debt).
- Optimize and tune designs based on performance and root cause analysis.
- Analyze existing solutions' alignment to infrastructure standards and provide feedback to evolve and mature both the product solutions and the CDK public cloud standards.

Essential Duties & Skills:
This is a hands-on role requiring in-depth knowledge of public cloud usage and best practices. Some of the areas within AWS where you will be working include:
- Compute: EC2, EKS, RDS, Lambda
- Networking: Load Balancing (ALB/ELB), VPN, Transit Gateways, VPCs, Availability Zones/Regions
- Storage: EBS, S3, Archive Services, AWS Backup
- Security: AWS Config, CloudWatch, CloudTrail, Route53, GuardDuty, Detective, Inspector, Security Hub, Secrets Server, KMS, AWS Shield, Security Groups, AWS Identity and Access Management, etc.
- Cloud Cost Optimization: Cost Optimizer, Trusted Advisor, Cost Explorer, Harness Cloud Cost Management or equivalent cost management tools

Preferred:
- Experience with 3rd-party SaaS solutions like Databricks, Snowflake, Confluent Kafka
- Broad understanding/experience across full-stack infrastructure technologies
- Site Reliability Engineering practices
- GitHub/Artifactory/Bamboo/Terraform
- Database solutions (SQL/NoSQL)
- Containerization solutions (Docker, Kubernetes)
- DevOps processes and tooling
- Message queuing, data streaming and caching solutions
- Networking principles and concepts
- Scripting and development; Python and Java languages preferred
- Server-based operating systems (Windows/Linux) and web services (IIS, Apache)
- Experience designing, optimizing and troubleshooting public cloud platforms associated with large, complex application stacks
- Clear and concise communication; comfortable working with all levels in the organization
- Capable of managing and prioritizing multiple projects with competing resource requirements and timelines

Years of Experience:
- 4-5+ years working in the AWS public cloud environment
- AWS Solution Architect Professional certification preferred
- Experience with infrastructure as code (CloudFormation, Terraform)

Posted 2 weeks ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Bengaluru

Work from Office

What this job involves:
JLL, an international real estate management company, is seeking a Data Engineer to join our JLL Technologies team. We are seeking candidates who are self-starters and can work in a diverse and fast-paced environment as part of our Enterprise Data team. We are looking for a candidate who will be responsible for designing and developing data solutions that are strategic for the business, using the latest technologies: Azure Databricks, Python, PySpark, Spark SQL, Azure Functions, Delta Lake, and Azure DevOps CI/CD.

Responsibilities
- Design, architect, and develop solutions leveraging cloud big data technology to ingest, process and analyze large, disparate data sets to exceed business requirements.
- Design and develop data management and data persistence solutions for application use cases leveraging relational and non-relational databases, enhancing our data processing capabilities.
- Develop POCs to influence platform architects, product managers and software engineers to validate solution proposals and migrate.
- Develop data lake solutions to store structured and unstructured data from internal and external sources, and provide technical guidance to help migrate colleagues to a modern technology platform.
- Contribute and adhere to CI/CD processes and development best practices, and strengthen the discipline in the Data Engineering org.
- Develop systems that ingest, cleanse and normalize diverse datasets, develop data pipelines from various internal and external sources, and build structure for previously unstructured data.
- Using PySpark and Spark SQL, extract, manipulate, and transform data from various sources, such as databases, data lakes, APIs, and files, to prepare it for analysis and modeling (a simple sketch follows this listing).
- Build and optimize ETL workflows using Azure Databricks and PySpark, including developing efficient data processing pipelines, data validation, error handling, and performance tuning.
- Perform unit testing, system integration testing and regression testing, and assist with user acceptance testing.
- Articulate business requirements in a technical solution that can be designed and engineered.
- Consult with the business to develop documentation and communication materials to ensure accurate usage and interpretation of JLL data.
- Implement data security best practices, including data encryption, access controls, and compliance with data protection regulations, ensuring data privacy, confidentiality, and integrity throughout the data engineering processes.
- Perform the data analysis required to troubleshoot data-related issues and assist in the resolution of data issues.

Experience & Education
- Minimum of 4 years of experience as a data developer using Python, PySpark, Spark SQL, SQL Server, and ETL concepts.
- Bachelor's degree in Information Science, Computer Science, Mathematics, Statistics or a quantitative discipline in science, business, or social science.
- Experience with the Azure cloud platform, Databricks, and Azure storage.
- Effective written and verbal communication skills, including technical writing.
- Excellent technical, analytical and organizational skills.

Technical Skills & Competencies
- Experience handling unstructured and semi-structured data, working in a data lake environment, leveraging data streaming and developing data pipelines driven by events/queues.
- Hands-on experience and knowledge of real-time/near-real-time processing, and ready to code.
- Hands-on experience in PySpark, Databricks, and Spark SQL.
- Knowledge of JSON, Parquet and other file formats, and the ability to work effectively with them.
- Knowledge of NoSQL databases such as HBase, Mongo, Cosmos, etc.
- Preferred: cloud experience on Azure or AWS, including Python/Spark, Spark Streaming, Azure SQL Server, Cosmos DB/Mongo DB, Azure Event Hubs, Azure Data Lake Storage, Azure Search, etc.
- Team player; a reliable, self-motivated, and self-disciplined individual capable of executing multiple projects simultaneously within a fast-paced environment, working with cross-functional teams.

You'll join an entrepreneurial, inclusive culture. One where we succeed together, across the desk and around the globe. Where like-minded people work naturally together to achieve great things. Our Total Rewards program reflects our commitment to helping you achieve your ambitions in career, recognition, well-being, benefits and pay. Join us to develop your strengths and enjoy a fulfilling career full of varied experiences. Keep those ambitions in sight and imagine where JLL can take you.
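For illustration, a simple PySpark transformation on Databricks of the kind described above might look like the sketch below; the source path, columns, and target table are hypothetical placeholders.

```python
# Illustrative sketch only: read raw JSON, apply a couple of cleansing steps,
# and write a curated Delta table. All names and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date, trim

spark = SparkSession.builder.getOrCreate()

leases_raw = spark.read.json("/mnt/raw/leases/")            # placeholder source path

leases_clean = (leases_raw
    .withColumn("property_id", trim(col("property_id")))    # normalize identifiers
    .withColumn("start_date", to_date(col("start_date"), "yyyy-MM-dd"))
    .filter(col("property_id").isNotNull()))                 # drop unusable rows

(leases_clean.write
 .format("delta")
 .mode("overwrite")
 .saveAsTable("curated.leases"))                             # placeholder target table
```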

Posted 2 weeks ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Hyderabad, Ahmedabad

Work from Office

Grade Level (for internal use): 10

The Team: We seek a highly motivated, enthusiastic, and skilled engineer for our Industry Data Solutions Team. We strive to deliver sector-specific, data-rich, and hyper-targeted solutions for evolving business needs. You will be expected to participate in the design review process, write high-quality code, and work with a dedicated team of QA Analysts and Infrastructure Teams.

The Impact: Enterprise Data Organization is seeking a Software Developer for the design, development, and maintenance of data processing applications. This person will be part of a development team that manages and supports the internal and external applications supporting the business portfolio. The role expects a candidate who can handle any data processing or big data application development. Our teams are made up of people who learn how to work effectively together while working with the larger group of developers on our platform.

What's in it for you:
- Opportunity to contribute to the development of a world-class Platform Engineering team.
- Engage in a highly technical, hands-on role designed to elevate team capabilities and foster continuous skill enhancement.
- Be part of a fast-paced, agile environment that processes massive volumes of data, ideal for advancing your software development and data engineering expertise while working with a modern tech stack.
- Contribute to the development and support of Tier-1, business-critical applications that are central to operations.
- Gain exposure to and work with cutting-edge technologies, including AWS Cloud and Databricks.
- Grow your career within a globally distributed team, with clear opportunities for advancement and skill development.

Responsibilities:
- Design and develop applications, components, and common services based on development models, languages, and tools, including unit testing, performance testing, monitoring, and implementation.
- Support business and technology teams as necessary during design, development, and delivery to ensure scalable and robust solutions.
- Build data-intensive applications and services to support and enhance fundamental financials in appropriate technologies (C#, .NET Core, Databricks, Spark, Python, Scala, NiFi, SQL).
- Build data models, achieve performance tuning, and apply data architecture concepts.
- Develop applications adhering to secure coding practices and industry-standard coding guidelines, ensuring compliance with security best practices (e.g., OWASP) and internal governance policies.
- Implement and maintain CI/CD pipelines to streamline build, test, and deployment processes; develop comprehensive unit test cases and ensure code quality.
- Provide operations support to resolve issues proactively and with utmost urgency.
- Effectively manage time and multiple tasks.
- Communicate effectively, especially in writing, with the business and other technical groups.

Basic Qualifications:
- Bachelor's/Master's degree in Computer Science, Information Systems or equivalent.
- Minimum 5 to 8 years of strong hands-on development experience in C#, .NET Core, cloud-native, and MS SQL Server backend development.
- Proficiency with object-oriented programming.
- Nice to have: knowledge of Grafana, Kibana, big data, Kafka, GitHub, EMR, Terraform, AI/ML.
- Advanced SQL programming skills.
- Highly recommended skill set in Databricks, Spark, and Scala technologies.
- Understanding of database performance tuning on large datasets.
- Ability to manage multiple priorities efficiently and effectively within specific timeframes.
- Excellent logical, analytical and communication skills, with strong verbal and writing proficiency.
- Knowledge of Fundamentals, or of the financial industry, is highly preferred.
- Experience in conducting application design and code reviews.
- Proficiency with the following technologies: object-oriented programming; programming languages (C#, .NET Core); cloud computing; database systems (SQL, MS SQL); nice to have: NoSQL (Databricks, Spark, Scala, Python) and scripting (Bash, Scala, Perl, PowerShell).

Preferred Qualifications:
- Hands-on experience with cloud computing platforms including AWS, Azure, or Google Cloud Platform (GCP).
- Proficient in working with Snowflake and Databricks for cloud-based data analytics and processing.

Posted 2 weeks ago

Apply

7.0 - 12.0 years

25 - 30 Lacs

Bengaluru

Work from Office

What you will do
- Build robust backend services and APIs using Python (FastAPI, asyncio) for GenAI workflows and LLM-based systems (a minimal sketch follows this listing).
- Develop and maintain GenAI applications using tools like LangChain, LangGraph, and Cohere, integrating them with custom APIs and data sources.
- Contribute to systems that enable intelligent routing of prompts, dynamic tool execution, and seamless model-data integration across multiple sources.
- Write clean, modular code for model integration, semantic search, and multi-step agent workflows.
- Package and deploy applications using Docker and Kubernetes, ensuring scalability and security.
- Collaborate with data engineers, AI scientists, and infra teams to ship end-to-end features quickly, without sacrificing quality.

What you need to succeed
Must have:
- Strong backend development skills in Python.
- Solid understanding of machine learning and generative AI.
- Degree in Computer Science or Engineering.
- Experience deploying services with Docker and Kubernetes in a cloud environment.
- Familiarity with LLM APIs (OpenAI, Cohere) and how to build prompt-based applications around them.
- Comfortable writing and debugging PySpark jobs and working with Delta Lake in Databricks.
- Experience working with Git workflows, CI/CD, and container-based deployments.

Nice to have:
- Experience with LangChain, LangGraph, or other LLM orchestration frameworks.
- Experience building and deploying MCP servers and AI agents.
- Experience with any vector database.
- Experience with MLflow.
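A minimal sketch of a FastAPI service wrapping an LLM call, of the kind this role describes; the endpoint, request model, and call_llm helper are hypothetical placeholders for whichever LLM client is actually used.

```python
# Illustrative sketch only: an async FastAPI endpoint that forwards a prompt to a
# (placeholder) LLM helper and returns the generated text.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PromptRequest(BaseModel):
    prompt: str

class PromptResponse(BaseModel):
    answer: str

async def call_llm(prompt: str) -> str:
    # Placeholder: a real service would call an LLM API (e.g. OpenAI or Cohere)
    # asynchronously here and return the generated text.
    return f"echo: {prompt}"

@app.post("/generate", response_model=PromptResponse)
async def generate(req: PromptRequest) -> PromptResponse:
    answer = await call_llm(req.prompt)
    return PromptResponse(answer=answer)
```

Run with an ASGI server, e.g. `uvicorn main:app --reload`, and POST a JSON body like `{"prompt": "hello"}` to `/generate`.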

Posted 2 weeks ago

Apply

10.0 - 14.0 years

15 - 30 Lacs

Hyderabad, Ahmedabad

Hybrid

Experience: 9+ years
Location: Hyderabad
Job type: Permanent

Role & responsibilities
We are looking for a Lead Data Engineer to design, develop, and maintain data pipelines and ETL workflows for processing large-scale structured and unstructured data. The ideal candidate will have expertise in Azure Data Services (Azure Data Factory, Synapse, Databricks, SQL, SSIS, and Data Lake), along with big data processing, real-time analytics, cloud data integration, and team leading experience.

Key Responsibilities:
1. Data Pipeline Development & ETL/ELT
- Design and build scalable data pipelines using Azure Data Factory, Synapse Pipelines, Databricks, SSIS, and ADF connectors such as Salesforce.
- Implement ETL/ELT workflows for structured and unstructured data processing.
- Optimize data ingestion, transformation, and storage strategies.
2. Cloud Data Architecture & Integration
- Develop data integration solutions for ingesting data from multiple sources (APIs, databases, streaming data).
- Work with Azure Data Lake, Azure Blob Storage, and Delta Lake for data storage and processing.
3. Database Management & Optimization
- Design and maintain cloud databases (Azure Synapse, BigQuery, Cosmos DB).
- Optimize SQL queries and indexing strategies for performance.
- Implement data partitioning, compression, and caching for efficiency.
4. Collaboration & Documentation
- Document data models, pipeline architectures, and data workflows.

Immediate joiners are preferred.

Posted 2 weeks ago

Apply

4.0 - 8.0 years

10 - 20 Lacs

Noida

Remote

Experience: 4-8 years
Job Location: Remote
No. of Positions: Multiple
Qualifications: B.Tech / M.Tech / MCA or higher
Work Timings: 1:30 PM IST to 10:30 PM IST
Functional Area: Data Engineering

Job Description:
We are seeking a skilled Data Engineer with 4 to 8 years of experience to join our team. The ideal candidate will have a strong background in Python programming, along with expertise in AWS or Azure services. The candidate should also possess solid SQL skills and be proficient in web scraping techniques.

Role and responsibilities:
- Develop and maintain data pipelines using Python, PySpark, and SQL to extract, transform, and load data from various sources.
- Implement and optimize data processing workflows on AWS or Azure cloud platforms.
- Utilize Databricks or Azure Data Factory for efficient data storage and processing.
- Develop and maintain web scraping scripts to gather data from online sources (a simple example is sketched after this listing).
- Collaborate with cross-functional teams to design and implement API endpoints for data access.
- Work on UiPath automation projects to streamline data extraction and processing tasks.
- Develop and maintain Django or Flask web applications for internal data management and visualization.
- Leverage Pandas and other data manipulation libraries for data analysis and preprocessing.
- Enhance API development skills for integrating data services with external systems.
- Stay updated with the latest industry trends and technologies, such as Flask, PyTorch, etc., to continuously improve data engineering processes.

Skills, Knowledge, Experience:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 4 to 8 years of experience in data engineering roles.
- Proficiency in the Python programming language.
- Strong understanding of AWS or Azure cloud services.
- Solid SQL skills for querying and manipulating data.
- Previous experience with web scraping techniques and tools.
- Hands-on experience with the Django web framework.
- Knowledge of API development and integration.
- Experience with PySpark for big data processing.
- Proficiency in Pandas for data manipulation and analysis.
- Familiarity with UiPath or Power Automate for automation is advantageous.
- Experience with Databricks.
- Familiarity with Flask and PyTorch is a plus.
- Experience working with USA or European clients is a plus.
- Experience working with multi-vendor, multi-culture, distributed offshore and onshore development teams in a dynamic and complex environment will help in day-to-day work.
- Excellent written and verbal communication skills; the candidate should be able to present their suggestions and explain the technical approach.
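For illustration, a minimal web-scraping script of the kind this listing mentions might look like the sketch below, using requests and BeautifulSoup; the URL and the elements scraped are hypothetical placeholders.

```python
# Illustrative sketch only: fetch a page and extract heading text. The URL and the
# choice of <h2> elements are placeholders for whatever a real job would target.
import requests
from bs4 import BeautifulSoup

def scrape_titles(url: str) -> list[str]:
    """Fetch a page and return the text of every <h2> element found."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    return [h2.get_text(strip=True) for h2 in soup.find_all("h2")]

if __name__ == "__main__":
    for title in scrape_titles("https://example.com/listings"):  # placeholder URL
        print(title)
```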

Posted 2 weeks ago

Apply

2.0 - 5.0 years

12 - 16 Lacs

Pune

Work from Office

Overview
We are looking for a Senior Data Engineer with deep hands-on expertise in PySpark, Databricks, and distributed data architecture. This individual will play a lead role in designing, developing, and optimizing data pipelines critical to our Ratings Modernization, Corrections, and Regulatory implementation programs under PDB 2.0. The ideal candidate will thrive in fast-paced, ambiguous environments and collaborate closely with engineering, product, and governance teams.

Responsibilities
- Design, develop, and maintain robust ETL/ELT pipelines using PySpark and Databricks.
- Own pipeline architecture and drive performance improvements through partitioning, indexing, and Spark optimization (a simple example is sketched after this listing).
- Collaborate with product owners, analysts, and other engineers to gather requirements and resolve complex data issues.
- Perform deep analysis and optimization of SQL queries, functions, and procedures for performance and scalability.
- Ensure high standards of data quality and reliability via robust validation and cleansing processes.
- Lead efforts in Delta Lake and cloud data warehouse architecture, including best practices for data lineage and schema management.
- Troubleshoot and resolve production incidents and pipeline failures quickly and thoroughly.
- Mentor junior team members and guide best practices across the team.

Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related technical field.
- 6+ years of experience in data engineering or related roles.
- Advanced proficiency in Python, PySpark, and SQL.
- Strong experience with Databricks, BigQuery, and modern data lakehouse design.
- Hands-on knowledge of Azure or GCP data services.
- Proven experience in performance tuning and large-scale data processing.
- Strong communication skills and the ability to work independently in uncertain or evolving contexts.

What we offer you
- Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing.
- Flexible working arrangements, advanced technology, and collaborative workspaces.
- A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results.
- A global network of talented colleagues, who inspire, support, and share their expertise to innovate and deliver for our clients.
- Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro and tailored learning opportunities for ongoing skills development.
- Multi-directional career paths that offer professional growth and development through new challenges, internal mobility and expanded roles.
- We actively nurture an environment that builds a sense of inclusion, belonging and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.

At MSCI we are passionate about what we do, and we are inspired by our purpose: to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards and perform beyond expectations for yourself, our clients, and our industry. MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process.

MSCI Inc. is an equal opportunity employer. It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries.

To all recruitment agencies: MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes.

Note on recruitment scams: We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try and elicit personal information from job seekers. Read our full note on careers.msci.com
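For illustration, two common Spark/Delta optimization techniques of the kind mentioned in the responsibilities above are sketched below: writing with partitioning, and compacting a Delta table on Databricks. The table and column names are hypothetical placeholders.

```python
# Illustrative sketch only: partitioned Delta writes and Databricks table compaction.
# All table and column names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

ratings = spark.table("raw.ratings_events")

# Partition the output by a low-cardinality column that queries filter on,
# so downstream reads can prune whole directories.
(ratings.write
 .format("delta")
 .mode("overwrite")
 .partitionBy("rating_date")
 .saveAsTable("curated.ratings_events"))

# On Databricks, periodically compact small files and co-locate related rows.
spark.sql("OPTIMIZE curated.ratings_events ZORDER BY (issuer_id)")
```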

Posted 2 weeks ago

Apply

3.0 - 8.0 years

15 - 30 Lacs

Chennai, Bengaluru, Mumbai (All Areas)

Work from Office

Azure Databricks and PostgreSQL, Spark, Python, SQL, streaming/real-time processing. Mail CV to prachi.sharma@krintek.com

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies