18.0 - 23.0 years
15 - 19 Lacs
Hyderabad
Work from Office
About the Role
We are seeking a highly skilled and experienced Data Architect to join our team. The ideal candidate will have at least 18 years of experience in data engineering and analytics and a proven track record of designing and implementing complex data solutions. As a Senior Principal Data Architect, you will design, create, deploy, and manage Blackbaud's data architecture. This role carries considerable technical influence within the Data Platform and Data Engineering teams and the Data Intelligence Center of Excellence at Blackbaud. This individual acts as an evangelist for proper data strategy with other teams at Blackbaud and helps set the technical direction, specifically around data, of other projects.

What you'll do
- Develop and direct the strategy for all aspects of Blackbaud's data and analytics platforms, products, and services
- Set, communicate, and facilitate technical direction for the AI Center of Excellence and, collaboratively, beyond it
- Design and develop breakthrough products, services, or technological advancements in the data intelligence space that expand our business
- Work alongside product management to craft technical solutions that solve customer business problems
- Own technical data governance practices; ensure data sovereignty, privacy, security, and regulatory compliance
- Continuously challenge the status quo of how things have been done in the past
- Build a data access strategy to securely democratize data and enable research, modeling, machine learning, and artificial intelligence work
- Help define the tools and pipeline patterns our engineers and data engineers use to transform data and support our analytics practice
- Work in a cross-functional team to translate business needs into data architecture solutions
- Ensure data solutions are built for performance, scalability, and reliability
- Mentor junior data architects and team members
- Keep current on technology: distributed computing, big data concepts and architecture
- Promote internally how data within Blackbaud can help change the world

What you'll bring
- 18+ years of experience in data and advanced analytics
- At least 8 years of experience working with data technologies in Azure/AWS
- Expertise in SQL and Python
- Expertise in SQL Server, Azure Data Services, and other Microsoft data technologies
- Expertise in Databricks and Microsoft Fabric
- Strong understanding of data modeling, data warehousing, data lakes, data mesh, and data products
- Experience with machine learning
- Excellent communication and leadership skills

Preferred Qualifications
- Experience working with .Net/Java and microservice architecture

Stay up to date on everything Blackbaud: follow us on LinkedIn, X, Instagram, Facebook and YouTube.

Blackbaud is proud to be an equal opportunity employer and is committed to maintaining an inclusive work environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, physical or mental disability, age, or veteran status or any other basis protected by federal, state, or local law.
Posted 2 weeks ago
5.0 - 10.0 years
8 - 12 Lacs
Hyderabad
Work from Office
About the role
The Data Intelligence Center of Excellence is looking for a high-performing Senior Data Scientist to support Blackbaud customers through the creation and maintenance of intelligent data products. Additionally, the Senior Data Scientist will collaborate with team members on research and thought-leadership initiatives.

What you'll do
- Use statistical techniques to manage and analyze large volumes of complex data and generate compelling insights, including predictive modeling, storytelling, and data visualization
- Integrate data from multiple sources to create dashboards and other end-user reports
- Interact with internal customers to identify and define topics for research and experimentation
- Contribute to white papers, presentations, and conferences as needed
- Communicate insights and findings from analyses to product, service, and business managers
- Work with the data science team to automate and streamline modeling processes
- Manage standard tables and programs within the data science infrastructure, providing updates as needed
- Maintain updated documentation of products and processes
- Participate in team planning and backlog grooming for the data science roadmap

What you'll bring
We are seeking a Data Scientist with 5+ years of hands-on experience demonstrating strong proficiency in the following areas:
- 2+ years of machine learning and/or statistics experience
- 2+ years of experience with data analysis in Python, R, SQL, Spark, or similar
- Comfortable asking questions and performing in-depth research when given vague or incomplete specifications
- Confidence to learn product functionality on own initiative via back-end research, online training resources, product manuals, and developer forums
- Experience with Databricks and Databricks SQL Analytics is a plus
Posted 2 weeks ago
8.0 - 13.0 years
10 - 20 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Roles and Responsibilities:
- Lead Agile project management for data product initiatives, ensuring timely delivery and alignment with strategic business objectives.
- Collaborate with stakeholders across the organization to identify business needs and translate them into clear data product requirements and user stories.
- Facilitate Agile ceremonies (daily stand-ups, sprint planning, retrospectives) to maintain team focus and momentum.
- Manage and prioritize the product backlog in coordination with product owners and data experts to maximize value delivery.
- Ensure data quality, governance, and compliance standards are met throughout the product lifecycle.
- Foster cross-functional collaboration among data engineers, data scientists, analysts, and business teams to resolve impediments and steer delivery.
- Develop and maintain product roadmaps that reflect evolving business priorities and data capabilities.
- Track project progress using Agile metrics and provide transparent communication to stakeholders.
- Support continuous improvement by coaching the team on Agile best practices and adapting processes as needed.
- Define and track KPIs that transparently reflect the status of key initiatives.
- Direct a team of 5 or more people comprising leads, principals, etc., and indirectly coordinate with more people as part of cross-functional teams with varied roles and functions.

Required Skills and Experience:
- Bachelor's degree or above, preferably in Software Engineering.
- Strong understanding of Agile frameworks such as Scrum or Kanban, and experience facilitating Agile teams; should have experience leading all Agile ceremonies.
- Knowledge of data product management principles, including requirements definition, data quality, and governance.
- Excellent communication and stakeholder management skills to bridge technical and business perspectives.
- Strong business communication, presentation, and conflict management skills.
- Experience working with data professionals (data engineers, data scientists, data quality engineers) and understanding data pipelines.
- Proficiency with Agile project management tools such as ADO, Jira, or equivalent.
- Ability to manage competing priorities and adapt plans based on feedback and changing requirements.
- Proficient in delivery and quality metrics, burn-down charts, and progress and status reporting.
- Knowledge of 2 or more effort and cost estimation methodologies/frameworks.
- Proficient in scope (requirements)/backlog management, quality management, defect prevention, and risks and issues management.

Nice to Have Qualities & Skills:
- Flexibility to learn and apply new methodologies.
- Mortgage industry experience/knowledge.
- Strong commercial acumen, i.e. understanding of pricing models, delivery P&L, budgeting, etc.
- Basic-level understanding of contracts.
- Relevant certifications such as Certified Scrum Master (CSM), PMP, Agile Project Management, etc.
- Knowledge of compliance frameworks like RESPA, TILA, CFPB, and data security standards.
- Knowledge of Azure Cloud.
Posted 2 weeks ago
3.0 - 6.0 years
6 - 11 Lacs
Pune
Work from Office
Position-Specific Duties: Supporting data engineering pipelines. Required skills: AWS, Databricks, PySpark, SQL.

Works in the area of Software Engineering, which encompasses the development, maintenance, and optimization of software solutions/applications.
1. Applies scientific methods to analyse and solve software engineering problems.
2. Is responsible for the development and application of software engineering practice and knowledge in research, design, development, and maintenance.
3. Work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers.
4. Builds skills and expertise in the software engineering discipline to meet standard software engineer skill expectations for the applicable role, as defined in Professional Communities.
5. Collaborates and acts as a team player with other software engineers and stakeholders.

Skills (competencies): Verbal Communication
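The pipeline-support duties above center on PySpark-style transformations. The filter-then-aggregate shape such a job typically takes can be sketched in plain Python for inspection; the column names `region` and `amount` are hypothetical, not from the posting:

```python
from collections import defaultdict

def clean_and_aggregate(rows):
    """Drop malformed rows, then total `amount` per `region`.

    Mirrors the filter -> groupBy -> sum structure of a typical
    PySpark job, expressed without Spark so the logic is visible.
    """
    totals = defaultdict(float)
    for row in rows:
        # Reject rows with a missing grouping key.
        if row.get("region") is None:
            continue
        # Reject rows whose amount is absent or non-numeric.
        try:
            amount = float(row.get("amount"))
        except (TypeError, ValueError):
            continue
        totals[row["region"]] += amount
    return dict(totals)

rows = [
    {"region": "south", "amount": "10.5"},
    {"region": "south", "amount": 4.5},
    {"region": None, "amount": 99},        # dropped: no region
    {"region": "north", "amount": "n/a"},  # dropped: bad amount
]
print(clean_and_aggregate(rows))  # {'south': 15.0}
```

In an actual Databricks job the same shape would be a DataFrame `filter` followed by `groupBy(...).sum(...)`.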
Posted 2 weeks ago
10.0 - 15.0 years
15 - 20 Lacs
Pune
Work from Office
Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design to help CxOs envision and build what's next for their businesses.

Your Role
- Use design thinking and a consultative approach to conceive cutting-edge technology solutions for business problems, mining core insights as a service model
- Engage with project activities across the information lifecycle; understand client requirements and develop a data analytics strategy and solution that meets them
- Apply knowledge of, and explain the benefits of, strategies relating to NextGen/new-age data capabilities to adopting organizations
- Be proficient in evaluating new technologies and identifying practical business cases to develop enhanced business value and increase operating efficiency
- Architect large-scale AI/ML products/systems impacting large-scale clients across industries
- Own end-to-end solutioning and delivery of data analytics/transformation programs
- Mentor and inspire a team of data scientists and engineers solving AI/ML problems through R&D while pushing state-of-the-art solutions
- Liaise with colleagues and business leaders across domestic and global regions to deliver impactful analytics projects and drive innovation at scale
- Assist the sales team in reviewing RFPs, tender documents, and customer requirements
- Develop high-quality and impactful demonstrations, proof-of-concept pitches, solution documents, presentations, and other pre-sales assets
- Have in-depth business knowledge across a breadth of functional areas in sectors such as CPRD/FS/MALS/Utilities/TMT

Your Profile
- B.E./B.Tech. + MBA (Systems/Data/Data Science/Analytics/Finance) with a good academic background
- Minimum 10+ years of on-the-job experience in data analytics, with at least 7 years of CPRD, FS, MALS, Utilities, TMT or other relevant domain experience required
- Specialization in data science, data engineering, or advanced analytics is strongly recommended
- Excellent understanding of, and hands-on experience with, data science and machine learning techniques and algorithms for supervised and unsupervised problems, NLP, and computer vision
- Good applied statistics skills, such as distributions, statistical inference and testing, etc.
- Excellent understanding of, and hands-on experience with, building deep learning models for text and image analytics (such as ANNs, CNNs, LSTMs, transfer learning, encoder-decoder architectures, etc.)
- Proficient in coding in common data science languages and tools such as R, Python, Go, SAS, Matlab, etc.
- At least 7 years of experience deploying digital and data science solutions on large-scale projects
- At least 7 years of experience leading/managing a data science team
- Exposure to or knowledge of cloud (AWS/GCP/Azure) and big data technologies such as Hadoop and Hive

What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society.
It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong heritage of over 55 years, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
Posted 2 weeks ago
4.0 - 6.0 years
3 - 7 Lacs
Hyderabad, Gurugram, Chennai
Work from Office
Your Role
- Develop and maintain Python scripts to automate identity data collection and synchronization workflows.
- Design and implement Apache Airflow DAGs to orchestrate reliable and scalable data pipelines.
- Write and optimize (PL)SQL queries for efficient data extraction, transformation, and loading across systems.
- Collaborate with cross-functional teams to ensure data integrity, security compliance, and smooth integration.

Your Profile
- 4 to 6 years of experience in data engineering, with a focus on identity data automation and pipeline orchestration.
- Proficient in Python programming and experienced in developing Airflow DAGs for managing data workflows.
- Skilled in writing and optimizing (PL)SQL queries, with a strong understanding of relational databases.
- Familiar with identity governance, REST APIs, and cloud platforms such as AWS or Azure, with a solid grasp of security best practices.
- Experienced in working with CI/CD pipelines and version control systems like Git, and in collaborating across cross-functional teams.

What You Will Love Working at Capgemini
- Clear career progression paths from engineering roles to architecture and consulting.
- Be part of mission-critical projects that ensure security, compliance, and operational efficiency for Fortune 500 clients.

Location: Gurugram, Hyderabad, Chennai, Noida, Pune, Bengaluru, Mumbai
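The identity-synchronization work described above reduces to a reconciliation step that an Airflow task might wrap. A minimal sketch of that step in plain Python; the field names (`user_id`, `updated_at`, `email`) and the newest-record-wins policy are assumptions for illustration, not part of the posting:

```python
def sync_identities(source_a, source_b):
    """Merge identity records from two source systems, keeping the
    most recently updated record per user_id."""
    merged = {}
    for record in source_a + source_b:
        uid = record["user_id"]
        current = merged.get(uid)
        # ISO-8601 timestamp strings compare correctly lexicographically.
        if current is None or record["updated_at"] > current["updated_at"]:
            merged[uid] = record
    return merged

hr = [{"user_id": "u1", "email": "old@corp.example",
       "updated_at": "2024-01-01T00:00:00"}]
idp = [{"user_id": "u1", "email": "new@corp.example",
        "updated_at": "2024-06-01T00:00:00"},
       {"user_id": "u2", "email": "b@corp.example",
        "updated_at": "2024-03-01T00:00:00"}]
print(sync_identities(hr, idp)["u1"]["email"])  # new@corp.example
```

In an Airflow deployment this function would be the callable of a `PythonOperator` (or a `@task`-decorated function), with extraction and loading handled by upstream and downstream tasks in the DAG.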
Posted 2 weeks ago
5.0 - 10.0 years
12 - 16 Lacs
Bengaluru
Work from Office
Key Responsibilities:
- Mines and extracts data and applies the statistics and algorithms necessary to derive insights for Digital Mine products and/or services.
- Supports the generation of an automated insights-generation framework for business partners to effectively interpret data.
- Provides actionable insights through data science on Personalization, Search & Navigation, SEO & Promotions, Supply Chain, Services, and other related services.
- Develops dashboard reports that measure financial results, customer satisfaction, and engagement metrics.
- Conducts deep statistical analysis, including predictive and prescriptive modeling, to give the organization a competitive advantage.
- Maintains expert-level knowledge of industry trends, emerging technologies, and new methodologies, and applies it to projects.
- Contributes subject-matter expertise on automation and analytical projects, collaborating across functions.
- Translates requirements into an analytical approach; asks the right questions to understand the problem; validates understanding with the stakeholder or manager.
- Contributes to building the analytic approach to solving a business problem; helps identify the sources, methods, parameters, and procedures to be used; clarifies expectations with stakeholders.
- Leverages a deep understanding of statistical techniques and tools to analyze data according to the project plan; communicates with stakeholders to provide updates.
- Prepares final recommendations, ensuring solutions are best-in-class, implementable, and scalable in the business.
- Executes plans for measuring impact based on discussions with stakeholders, partners, and senior team members.
- Executes projects with full adherence to enterprise project management practices.
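The predictive-modeling responsibilities above rest on basics like fitting a trend to data. As a purely illustrative sketch of the statistics involved, here is ordinary least squares for a single predictor, computed from the closed-form equations rather than any particular library:

```python
def fit_linear(xs, ys):
    """Fit y = a + b*x by ordinary least squares.

    b = sum((x - mean_x)(y - mean_y)) / sum((x - mean_x)^2)
    a = mean_y - b * mean_x
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# A perfectly linear toy series: y = 1 + 2x.
a, b = fit_linear([1, 2, 3, 4], [3, 5, 7, 9])
print(a, b)  # 1.0 2.0
```

Production work would of course use a modeling library, but the recovered intercept and slope are what any such tool computes for this case.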
Posted 2 weeks ago
4.0 - 9.0 years
6 - 12 Lacs
Mumbai, Gurugram, Bengaluru
Work from Office
Process Mining - Data Engineering Consulting Practitioner

Find endless opportunities to solve our clients' toughest challenges, as you work with exceptional people, the latest tech, and leading companies across industries.

Practice: Operations & Process Transformation | Function: Supply Chain and Operations | Business Unit: Strategy & Consulting, Global Network | Areas of Work: Process Mining | Level: Associate / Analyst / Specialist | Location: Gurugram, Mumbai, Pune, Bengaluru, Chennai, Hyderabad, Kolkata | Overall Relevant Exp: 4-10+ years

Explore an Exciting Career at Accenture
Are you an outcome-oriented problem solver? Do you enjoy working on transformation strategies for global clients? Does working in an inclusive and collaborative environment spark your interest? Then Accenture is the right place for you to explore limitless possibilities. As part of our practice, you will help organizations reimagine and transform their supply chains for tomorrow, with a positive impact on the business, society, and the planet. Together, let's innovate, build competitive advantage, and improve business and societal outcomes in an ever-changing, ever-challenging world.

Help us make supply chains work better, faster, and more resilient, with the following initiatives:
- Be the process architect who leads process discovery and whiteboarding sessions with business stakeholders.
- Deliver process discovery or improvement projects using process mining tools.
- Work on process mining market leaders such as Celonis, Signavio, Power Automate Process Mining, and so on.
- Develop business requirements for the implementation of technology solutions for the client.
- Demonstrate in-depth knowledge of industry trends, the SAP transformation journey, and new technologies and tools.
- Aid in asset, accelerator, and use case creation and enhancement.
- Contribute to business development initiatives and display the ability to solve complex business problems.

Bring your best skills forward to excel in the role:
- Strong analytical skills to reach clear-cut, methodical solutions
- Ability to solve complex business problems and deliver client delight
- Excellent communication, interpersonal, and presentation skills
- Cross-cultural competence with an ability to thrive in a dynamic environment
- Strong team-management skills

Your experience counts!
- MBA from a Tier 1 B-school
- 4+ years of experience with an understanding of process mining
- Hands-on experience identifying value opportunities using any process mining tool, such as Celonis/Signavio and so on
- Certified expertise as a functional value architect for process discovery and mining tools like Celonis, Signavio, and Power Automate Process Mining
- Conceptual understanding of as-is processes in the supply chain and the ability to design to-be processes
- Good understanding/experience of process mining in SAP transformations, or having supported mining/process design/journey definition initiatives in SAP projects
- Experience with automation solutions will be a plus
- Knowledge of data collection approaches, data cleansing, data modelling, process discovery, process analysis, and insights
- Strong communication skills, especially around breaking down complex structures into digestible and relevant points for a diverse set of clients and colleagues at all levels
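Process-mining tools such as Celonis and Signavio start from variant analysis: grouping an event log by case and counting distinct activity sequences. A toy pure-Python version of that step, with the event-log layout `(case_id, activity, timestamp)` assumed for illustration:

```python
from collections import Counter, defaultdict

def process_variants(event_log):
    """Count process variants in an event log.

    Each event is (case_id, activity, timestamp). Events are ordered
    per case by timestamp, and each case's activity sequence becomes
    one variant; the Counter tallies how often each variant occurs.
    """
    traces = defaultdict(list)
    for case_id, activity, ts in sorted(event_log, key=lambda e: (e[0], e[2])):
        traces[case_id].append(activity)
    return Counter(tuple(trace) for trace in traces.values())

log = [
    ("c1", "create", 1), ("c1", "approve", 2),
    ("c2", "create", 1), ("c2", "approve", 2),
    ("c3", "create", 1), ("c3", "reject", 2),
]
print(process_variants(log))
# Counter({('create', 'approve'): 2, ('create', 'reject'): 1})
```

Real engagements extract the event log from source systems (e.g., SAP change documents) and run this analysis at scale inside the mining tool; the grouping logic is the same.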
Posted 2 weeks ago
5.0 - 7.0 years
7 - 9 Lacs
Pune
Work from Office
Department: Platform Engineering

Summary:
We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.

Responsibilities:

Ontology Development:
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards.
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures.
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes.

Data Modeling:
- Design and implement semantic and syntactic data models that adhere to ontological principles.
- Create data models that are scalable, flexible, and adaptable to changing business needs.
- Integrate data models with existing data infrastructure and applications.

Knowledge Graph Implementation:
- Design and build knowledge graphs based on ontologies and data models.
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance.
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems.

Data Quality and Governance:
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs.
- Define and implement data governance processes and standards for ontology development and maintenance.

Collaboration and Communication:
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions.
- Communicate complex technical concepts clearly and effectively to diverse audiences.

Qualifications:

Education:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.

Experience:
- 5+ years of experience in data engineering or a related role.
- Proven experience in ontology development using BFO and CCO or similar ontological frameworks.
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL.
- Proficiency in Python, SQL, and other programming languages used for data engineering.
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus.

Desired Skills:
- Familiarity with machine learning and natural language processing techniques.
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
- Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
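Knowledge graphs of the kind described are typically stored as RDF triples and queried with SPARQL basic graph patterns. As a toy illustration of that idea, here is a tiny pattern matcher over a triple set in plain Python; the prefixed identifiers are hypothetical examples, and a real system would use a triple store and full SPARQL instead:

```python
def match(triples, pattern):
    """Return variable bindings for an (s, p, o) pattern over a set of
    RDF-style triples. Terms starting with '?' are variables.

    A stand-in for a SPARQL basic graph pattern with one triple;
    repeated variables within a pattern are not handled.
    """
    results = []
    for triple in triples:
        binding, ok = {}, True
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                binding[term] = value
            elif term != value:
                ok = False
                break
        if ok:
            results.append(binding)
    return results

triples = {
    ("ex:Donor", "rdf:type", "cco:Agent"),
    ("ex:Gift", "rdf:type", "cco:Act"),
    ("ex:Donor", "cco:agent_in", "ex:Gift"),
}
print(match(triples, ("?s", "rdf:type", "cco:Agent")))
# [{'?s': 'ex:Donor'}]
```

The equivalent SPARQL would be `SELECT ?s WHERE { ?s rdf:type cco:Agent }`; joining several such patterns is what makes graph queries expressive.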
Posted 2 weeks ago
6.0 - 9.0 years
8 - 11 Lacs
Pune
Work from Office
About the job:

Experience: 6+ years as an Azure Data Engineer, including at least 1 end-to-end implementation in Microsoft Fabric.

Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.

Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable and efficient data pipelines using Data Factory in Fabric (Data Pipeline, Dataflow Gen2, etc.), PySpark notebooks, Spark SQL, and Python. This includes data ingestion, data transformation, and data loading processes.
- Experience ingesting data from SAP systems such as SAP ECC/S4HANA/SAP BW will be a plus.
- Nice to have: ability to develop dashboards or reports using tools like Power BI.

Coding Fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.
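Incremental warehouse loads of the kind described usually hinge on a MERGE (upsert) step: update rows that already exist, insert the rest. A plain-Python analogue of that logic, sketched under the assumption of a single `id` key column; in Fabric or Delta Lake this would be a `MERGE INTO` statement rather than hand-written code:

```python
def merge_upsert(target, updates, key="id"):
    """SQL MERGE-style upsert over lists of row dicts.

    Rows in `updates` overwrite matching `target` rows by key;
    unmatched update rows are inserted. Returns rows sorted by key.
    """
    by_key = {row[key]: dict(row) for row in target}
    for row in updates:
        # setdefault inserts an empty row for new keys, then the
        # update overlays the incoming column values either way.
        by_key.setdefault(row[key], {}).update(row)
    return sorted(by_key.values(), key=lambda r: r[key])

warehouse = [{"id": 1, "qty": 5}]
batch = [{"id": 1, "qty": 7}, {"id": 2, "qty": 3}]
print(merge_upsert(warehouse, batch))
# [{'id': 1, 'qty': 7}, {'id': 2, 'qty': 3}]
```

The corresponding SQL shape is `MERGE INTO target USING updates ON target.id = updates.id WHEN MATCHED THEN UPDATE ... WHEN NOT MATCHED THEN INSERT ...`.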
Posted 2 weeks ago
10.0 - 20.0 years
16 - 20 Lacs
Ahmedabad
Work from Office
Mode of work: Hybrid or Remote (Hyderabad or Pune candidates preferred for the hybrid working mode; candidates from anywhere in India are fine for remote)

Job Description for AI Architect

Client: Genzeon
Position: AI Architect

Mandatory Skills: SQL, Data Science, AI/ML, GenAI, Data Engineering
Optional Skills: Data Modelling

Role & Responsibilities:
- Strong Python programming expertise
- Hands-on experience with AI-related Python tasks (e.g., data analysis, modeling)
- Experience with analyzing discrete datasets
- Proficiency in working with pre-trained AI models
- Problem-solving ability in real-world AI use cases

Python Proficiency:
- Speed and accuracy in writing code
- Understanding of syntax and debugging without external help
- Ability to solve complex problems with minimal reliance on Google

AI & Data Analysis Skills:
- Hands-on expertise in data manipulation and analysis
- Understanding of AI modeling and implementation

Problem-Solving & Debugging:
- Ability to analyze error logs and fix issues quickly
- Logical approach to troubleshooting without relying on trivial searches

Practical Application & Use Cases:
- Ability to apply AI/ML techniques to real-world datasets
- Experience with implementing pre-trained models effectively

Interview Task:
- Given a dataset, candidates must perform multivariate analysis.
- Implement modeling using pre-trained models.
- Demonstrate debugging ability without excessive reliance on Google.
- Live coding assessment with an open-book approach (Google allowed, but candidates should not rely on basic algorithm searches).

Role-Specific Expertise:
- If applying for AI Architect, the assessment will focus on AI-related tasks.
- If applying for Data Engineer, tasks will be aligned with data engineering requirements.
Posted 2 weeks ago
3.0 - 6.0 years
9 - 13 Lacs
Ahmedabad
Work from Office
About the job:
As a mid-level Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What You'll Do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize the performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Use best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll Be Expected To Have:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3 to 6 years of overall experience and 2+ years of experience designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
Posted 2 weeks ago
7.0 - 10.0 years
9 - 12 Lacs
Pune
Work from Office
About the Job : We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic team. In this pivotal role, you will be instrumental in driving our data engineering initiatives, with a strong emphasis on leveraging Dataiku's capabilities to enhance data processing and analytics. You will be responsible for designing, developing, and optimizing robust data pipelines, ensuring seamless integration of diverse data sources, and maintaining high data quality and accessibility to support our business intelligence and advanced analytics projects. This role requires a unique blend of expertise in traditional data engineering principles, advanced data modeling, and a forward-thinking approach to integrating cutting-AI technologies, particularly LLM Mesh for Generative AI applications. If you are passionate about building scalable data solutions and are eager to explore the cutting edge of AI, we encourage you to apply. Key Responsibilities : - Dataiku Leadership : Drive data engineering initiatives with a strong emphasis on leveraging Dataiku capabilities for data preparation, analysis, visualization, and the deployment of data solutions. - Data Pipeline Development : Design, develop, and optimize robust and scalable data pipelines to support various business intelligence and advanced analytics projects. This includes developing and maintaining ETL/ELT processes to automate data extraction, transformation, and loading from diverse sources. - Data Modeling & Architecture : Apply expertise in data modeling techniques to design efficient and scalable database structures, ensuring data integrity and optimal performance. - ETL/ELT Expertise : Implement and manage ETL processes and tools to ensure efficient and reliable data flow, maintaining high data quality and accessibility. - Gen AI Integration : Explore and implement solutions leveraging LLM Mesh for Generative AI applications, contributing to the development of innovative AI-powered features. 
- Programming & Scripting : Utilize programming languages such as Python and SQL for data manipulation, analysis, automation, and the development of custom data solutions. - Cloud Platform Deployment : Deploy and manage scalable data solutions on cloud platforms such as AWS or Azure, leveraging their respective services for optimal performance and cost-efficiency. - Data Quality & Governance : Ensure seamless integration of data sources, maintaining high data quality, consistency, and accessibility across all data assets. Implement data governance best practices. - Collaboration & Mentorship : Collaborate closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver impactful solutions. Potentially mentor junior team members. - Performance Optimization : Continuously monitor and optimize the performance of data pipelines and data systems. Required Skills & Experience : - Proficiency in Dataiku : Demonstrable expertise in Dataiku for data preparation, analysis, visualization, and building end-to-end data pipelines and applications. - Expertise in Data Modeling : Strong understanding and practical experience in various data modeling techniques (e.g., dimensional modeling, Kimball, Inmon) to design efficient and scalable database structures. - ETL/ELT Processes & Tools : Extensive experience with ETL/ELT processes and a proven track record of using various ETL tools (e.g., Dataiku's built-in capabilities, Apache Airflow, Talend, SSIS, etc.). - Familiarity with LLM Mesh : Familiarity with LLM Mesh or similar frameworks for Gen AI applications, understanding its concepts and potential for integration. - Programming Languages : Strong proficiency in Python for data manipulation, scripting, and developing data solutions. Solid command of SQL for complex querying, data analysis, and database interactions. 
- Cloud Platforms : Knowledge and hands-on experience with at least one major cloud platform (AWS or Azure) for deploying and managing scalable data solutions (e.g., S3, EC2, Azure Data Lake, Azure Synapse, etc.). - Gen AI Concepts : Basic understanding of Generative AI concepts and their potential applications in data engineering. - Problem-Solving : Excellent analytical and problem-solving skills with a keen eye for detail. - Communication : Strong communication and interpersonal skills to collaborate effectively with cross-functional teams. Bonus Points (Nice to Have) : - Experience with other big data technologies (e.g., Spark, Hadoop, Snowflake). - Familiarity with data governance and data security best practices. - Experience with MLOps principles and tools. - Contributions to open-source projects related to data engineering or AI. Education : Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related quantitative field.
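The ETL/ELT responsibilities above (extract, transform, load with data-quality enforcement) can be sketched as a minimal pure-Python pipeline. This is an illustrative sketch only; the function and field names (`extract_orders`, `amount`, `is_large`) are hypothetical and not from any specific Dataiku project.

```python
# Illustrative ETL sketch: extract -> transform -> quality gate -> load.
# All names are hypothetical; real pipelines would use Dataiku recipes or Spark.

def extract_orders(raw_rows):
    """Extract stage: drop malformed source records."""
    return [r for r in raw_rows if "id" in r and "amount" in r]

def transform_orders(rows):
    """Transform stage: normalize types and derive fields."""
    return [
        {
            "id": int(r["id"]),
            "amount": round(float(r["amount"]), 2),
            "is_large": float(r["amount"]) >= 1000,
        }
        for r in rows
    ]

def quality_gate(rows):
    """Fail fast when a data-quality rule is violated."""
    assert all(r["amount"] >= 0 for r in rows), "negative amount"
    return rows

def run_pipeline(raw_rows, sink):
    """Load stage: append validated rows to the target sink."""
    sink.extend(quality_gate(transform_orders(extract_orders(raw_rows))))
    return sink

warehouse = []
run_pipeline(
    [{"id": "1", "amount": "1200.5"}, {"id": "2", "amount": "15"}, {"bad": True}],
    warehouse,
)
```

The staged structure mirrors how visual tools like Dataiku chain recipes: each stage is independently testable, and the quality gate sits between transform and load so bad batches never reach the warehouse.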
Posted 2 weeks ago
3.0 - 6.0 years
9 - 13 Lacs
Pune
Work from Office
About the job : - As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. - You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. - This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges. What You'll Do : - Design and develop data processing pipelines and analytics solutions using Databricks. - Architect scalable and efficient data models and storage solutions on the Databricks platform. - Collaborate with architects and other teams to migrate the current solution to Databricks. - Optimize performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements. - Apply best practices for data governance, security, and compliance on the Databricks platform. - Mentor junior engineers and provide technical guidance. - Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement. You'll Be Expected To Have : - Bachelor's or Master's degree in Computer Science, Engineering, or a related field. - 3 to 6 years of overall experience and 2+ years of experience designing and implementing data solutions on the Databricks platform. - Proficiency in programming languages such as Python, Scala, or SQL. - Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark. - Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services. - Proven track record of delivering scalable and reliable data solutions in a fast-paced environment. - Excellent problem-solving skills and attention to detail. 
- Strong communication and collaboration skills with the ability to work effectively in cross-functional teams. - Good to have experience with containerization technologies such as Docker and Kubernetes. - Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
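A core Databricks pattern behind "design and develop data processing pipelines" is the upsert that Delta Lake's MERGE INTO performs. Spark itself is out of scope here, so this is a pure-Python sketch of the matched/not-matched semantics; the dict-based table layout is an assumption for illustration, not Databricks' actual storage format.

```python
# Sketch of Delta Lake MERGE (upsert) semantics, without Spark.
# A "table" here is a hypothetical {key: row} dict.

def merge_upsert(target, updates, key="id"):
    """For each incoming row: update when the key matches, insert otherwise."""
    for row in updates:
        k = row[key]
        if k in target:
            target[k].update(row)   # WHEN MATCHED THEN UPDATE
        else:
            target[k] = dict(row)   # WHEN NOT MATCHED THEN INSERT
    return target

table = {1: {"id": 1, "status": "old"}}
merge_upsert(table, [{"id": 1, "status": "new"}, {"id": 2, "status": "fresh"}])
```

In real Databricks work the same logic is one `MERGE INTO target USING updates ON target.id = updates.id ...` statement, with Delta handling transactional guarantees.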
Posted 2 weeks ago
3.0 - 5.0 years
30 - 32 Lacs
India, Bengaluru
Work from Office
Job Title : Data Engineer (DE) / SDE Data Location : Bangalore Experience range : 3-15 What we offer : Our mission is simple: building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by embracing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many. About our team : DEX is the central data org for Kotak Bank, managing the bank's entire data experience. DEX stands for Kotak's Data Exchange. The org comprises the Data Platform, Data Engineering and Data Governance charters, and sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform, moving from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which offers great opportunities to build things from scratch and deliver a best-in-class data lakehouse solution. The primary skills this team needs are software development (preferably Python) for platform building on AWS; data engineering with Spark (pyspark, sparksql, scala) for ETL development; and advanced SQL and data modelling for analytics. The org size is expected to be around a 100+ member team, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charters. 
As a member of this team, you get the opportunity to learn the fintech space, one of the most sought-after domains today; be an early member in the digital transformation journey of Kotak; learn and leverage technology to build complex data platform solutions, including real-time, micro-batch, batch and analytics solutions, in a programmatic way; and be futuristic in building systems that can be operated by machines using AI technologies. The data platform org is divided into 3 key verticals : Data Platform : This vertical is responsible for building the data platform, which includes optimized storage for the entire bank and a centralized data lake, managed compute and orchestration frameworks including serverless data solutions, a central data warehouse for extremely high concurrency use cases, connectors for different sources, a customer feature repository, cost optimization solutions like EMR optimizers, automation, and observability capabilities for Kotak's data platform. The team will also be the center for Data Engineering excellence, driving trainings and knowledge-sharing sessions with the large data consumer base within Kotak. Data Engineering : This team will own data pipelines for thousands of datasets, source data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will build data models in a config-based and programmatic way, and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank, which cuts across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, Branch Managers and all analytics use cases. Data Governance : This team will be the central data governance team for Kotak Bank, managing metadata platforms, Data Privacy, Data Security, Data Stewardship and the Data Quality platform. 
If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple source systems, then this is the team for you. Your day-to-day role will include : Drive business decisions with technical input and lead the team. Design, implement, and support a data infrastructure from scratch. Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA. Extract, transform, and load data from various sources using SQL and AWS big data technologies. Explore and learn the latest AWS technologies to enhance capabilities and efficiency. Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis. Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers. Build data platforms, data pipelines, or data management and governance tools. BASIC QUALIFICATIONS for Data Engineer / SDE in Data : Bachelor's degree in Computer Science, Engineering, or a related field 3-5 years of experience in data engineering Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR Experience with data pipeline tools such as Airflow and Spark Experience with data modeling and data quality best practices Excellent problem-solving and analytical skills Strong communication and teamwork skills Experience in at least one modern scripting or programming language, such as Python, Java, or Scala Strong advanced SQL skills PREFERRED QUALIFICATIONS AWS cloud technologies : Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow Prior experience in the Indian Banking segment and/or Fintech is desired. 
Experience with non-relational databases and data stores Building and operating highly available, distributed data processing systems for large datasets Professional software engineering and best practices for the full software development life cycle Designing, developing, and implementing different types of data warehousing layers Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions Building scalable data infrastructure and understanding distributed systems concepts SQL, ETL, and data modelling Ensuring the accuracy and availability of data to customers Proficiency in at least one scripting or programming language for handling large-volume data processing Strong presentation and communication skills. For Managers : Customer centricity and obsession for the customer Ability to manage stakeholders (product owners, business stakeholders, cross-functional teams) and coach agile ways of working Ability to structure and organize teams, and streamline communication Prior work experience executing large-scale Data Engineering projects
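The posting's emphasis on building data models "in a config-based and programmatic way" can be sketched as a pipeline assembled from a declarative config. The step names and registry here are illustrative assumptions, not Kotak's actual framework.

```python
# Config-driven pipeline sketch: steps are declared as data, then composed.
# Step names ("dedupe", "mask_pii") and the registry are hypothetical.

STEP_REGISTRY = {
    # keep the last row seen per id
    "dedupe": lambda rows: list({r["id"]: r for r in rows}.values()),
    # redact a sensitive field
    "mask_pii": lambda rows: [{**r, "phone": "***"} for r in rows],
}

def build_pipeline(config):
    """Turn a declarative config into a runnable pipeline function."""
    steps = [STEP_REGISTRY[name] for name in config["steps"]]
    def run(rows):
        for step in steps:
            rows = step(rows)
        return rows
    return run

pipeline = build_pipeline({"steps": ["dedupe", "mask_pii"]})
result = pipeline([
    {"id": 1, "phone": "999"},
    {"id": 1, "phone": "999"},
    {"id": 2, "phone": "888"},
])
```

The payoff of this pattern at the scale the posting describes (thousands of datasets) is that onboarding a new dataset means writing config, not code.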
Posted 2 weeks ago
6.0 - 9.0 years
9 - 13 Lacs
Ahmedabad
Work from Office
About the job : Role : Microsoft Fabric Data Engineer Experience : 6+ years as an Azure Data Engineer, including at least 1 E2E implementation in Microsoft Fabric. Responsibilities : - Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses. - Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions. - Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment. - Collaborate with stakeholders to translate business needs into actionable data solutions. - Troubleshoot and optimize existing Fabric implementations for enhanced performance. Skills : - Solid foundational knowledge in data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized). - Design and implement scalable and efficient data pipelines using Data Factory (Data Pipeline, Data Flow Gen 2, etc.) in Fabric, Pyspark notebooks, Spark SQL, and Python. This includes data ingestion, data transformation, and data loading processes. - Experience ingesting data from SAP systems like SAP ECC/S4HANA/SAP BW etc. will be a plus. - Nice to have : Ability to develop dashboards or reports using tools like Power BI. Coding Fluency : - Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.
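The dimensional modeling skill named above (one fact table joined to dimension tables) can be shown concretely with SQLite. This is a minimal star-schema sketch; the table and column names are illustrative, not from any Fabric warehouse.

```python
# Star-schema sketch: one dimension, one fact table, and the typical
# aggregate-by-dimension-attribute query. Schema names are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE fact_sales (
    sale_id INTEGER PRIMARY KEY,
    product_key INTEGER REFERENCES dim_product(product_key),
    qty INTEGER,
    revenue REAL
);
INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware'), (2, 'Gadget', 'Hardware');
INSERT INTO fact_sales VALUES (10, 1, 2, 50.0), (11, 2, 1, 30.0), (12, 1, 1, 25.0);
""")

# Facts are aggregated, dimensions supply the grouping attributes.
rows = con.execute("""
    SELECT d.category, SUM(f.revenue)
    FROM fact_sales f JOIN dim_product d USING (product_key)
    GROUP BY d.category
""").fetchall()
```

The same shape carries over to a Fabric Lakehouse or Warehouse: narrow descriptive dimensions, a tall numeric fact table, and surrogate keys joining them.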
Posted 2 weeks ago
5.0 - 7.0 years
10 - 14 Lacs
Ahmedabad
Work from Office
Summary : We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company. Responsibilities : Ontology Development : - Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards. - Collaborate with domain experts to capture and formalize domain knowledge into ontological structures. - Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes. Data Modeling : - Design and implement semantic and syntactic data models that adhere to ontological principles. - Create data models that are scalable, flexible, and adaptable to changing business needs. - Integrate data models with existing data infrastructure and applications. Knowledge Graph Implementation : - Design and build knowledge graphs based on ontologies and data models. - Develop algorithms and tools for knowledge graph population, enrichment, and maintenance. - Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems. Data Quality And Governance : - Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs. - Define and implement data governance processes and standards for ontology development and maintenance. 
Collaboration And Communication : - Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions. - Communicate complex technical concepts clearly and effectively to diverse audiences. Qualifications : Education : - Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Experience : - 5+ years of experience in data engineering or a related role. - Proven experience in ontology development using BFO and CCO or similar ontological frameworks. - Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL. - Proficiency in Python, SQL, and other programming languages used for data engineering. - Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus. Desired Skills : - Familiarity with machine learning and natural language processing techniques. - Experience with cloud-based data platforms (e.g., AWS, Azure, GCP). - Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon. - Strong problem-solving and analytical skills. - Excellent communication and interpersonal skills.
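The knowledge-graph responsibilities above rest on RDF-style (subject, predicate, object) triples queried by pattern. This sketch uses a plain Python set with a tiny matcher standing in for SPARQL; real implementations would use an RDF/OWL triple store, and the entity names here are hypothetical.

```python
# Knowledge-graph sketch: triples plus a pattern matcher (a stand-in for
# SPARQL basic graph patterns). Entities and predicates are illustrative.

triples = {
    ("acme", "rdf:type", "Organization"),
    ("alice", "rdf:type", "Person"),
    ("alice", "worksFor", "acme"),
    ("bob", "worksFor", "acme"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None plays the role of a SPARQL variable."""
    return sorted(
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    )

# Analogous to: SELECT ?who WHERE { ?who :worksFor :acme }
employees = [t[0] for t in match(p="worksFor", o="acme")]
```

Ontologies like BFO/CCO constrain which predicates may connect which classes; the data model in the posting is essentially a governed vocabulary over triples like these.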
Posted 2 weeks ago
10.0 - 13.0 years
12 - 15 Lacs
Hyderabad, Gurugram, Ahmedabad
Work from Office
About the Role: Grade Level (for internal use): 11 The Role: Lead Data Engineer Join Our Team: Step into a dynamic team at the forefront of data innovation! You'll collaborate daily with talented professionals from around the world, designing and developing next-generation data products for our clients. Our team thrives on a diverse toolkit that evolves with emerging technologies, offering you the chance to work in a vibrant, global environment that fosters creativity and teamwork. The Impact: As a Lead Data Engineer at S&P Global, you'll be a driving force in shaping the future of our data products. Your expertise will streamline software development and deployment, aligning innovative solutions with business needs. By ensuring seamless integration and continuous delivery, you'll enhance product capabilities, delivering high-quality systems that meet the highest standards of availability, security, and performance. Your work will empower our clients with impactful, data-driven solutions, making a real difference in the financial world. What's in it for You: Career Development: Build a rewarding career with a global leader in financial information and analytics, supported by continuous learning and a clear path to advancement. Dynamic Work Environment: Thrive in a fast-paced, forward-thinking setting where your ideas fuel innovation and your contributions shape groundbreaking solutions. Skill Enhancement: Elevate your expertise on an enterprise-level platform, mastering the latest tools and techniques in software development. Versatile Experience: Dive into full-stack development with hands-on exposure to cloud computing and large-scale data technologies. Leadership Opportunities: Guide and inspire a skilled team, steering the direction of our products and leaving your mark on the future of technology at S&P Global. Responsibilities: Architect and develop scalable cloud applications, utilizing a range of services to create robust, high-performing solutions. 
Design and implement advanced automation pipelines, streamlining software delivery for fast, reliable deployments. Tackle complex challenges head-on, troubleshooting and resolving issues to ensure our products run flawlessly for clients. Lead by example, providing technical guidance and mentoring to your team, driving innovation and embracing new processes. Deliver high-quality code and detailed system design documents, setting the standard with technical walkthroughs that inspire excellence. Bridge the gap between technical and non-technical stakeholders, turning complex requirements into elegant, actionable solutions. Mentor junior developers, nurturing their growth and helping them build skills and careers under your leadership. What We're Looking For: We're seeking a passionate and experienced professional who brings: 10-13 years of expertise in designing and building data-intensive solutions using distributed computing, with a proven track record of scalable architecture design. 5+ years of hands-on experience with Python, distributed data processing / big data processing frameworks, and data/workflow orchestration tools, demonstrating technical versatility. Proficiency in SQL and NoSQL databases, with deep experience operationalizing data pipelines for large-scale processing. Extensive experience deploying data engineering solutions in public cloud environments, leveraging cloud capabilities to their fullest potential. A strong history of collaborating with business stakeholders and users to shape research directions and deliver robust, maintainable products. A talent for rapid prototyping and iteration, delivering high-quality solutions under tight deadlines. Exceptional communication and documentation skills, with the ability to explain complex ideas to both technical and non-technical audiences. Good to Have Skills: Strong knowledge of Generative AI and advanced tools and technologies that enhance developer productivity. 
Advanced programming skills used in big data processing ecosystems, supported by a portfolio of impactful projects. Expertise in containerization, scripting, and automation practices, ready to excel in a modern development ecosystem. About S&P Global Market Intelligence: At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence . What's In It For You Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Health & Wellness: Health care coverage designed for the mind and body. 
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference. For more information on benefits by country visit https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. 
---- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ---- 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.2 - Middle Professional Tier II (EEO Job Group), SWP Priority Ratings - (Strategic Workforce Planning)
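The "data/workflow orchestration tools" this role calls for all reduce to one core mechanism: executing tasks in dependency order over a DAG. A minimal stdlib sketch, with hypothetical task names standing in for what Airflow or similar schedulers manage:

```python
# Orchestration sketch: dependency-ordered task execution via topological sort.
# Task names and the dependency graph are illustrative assumptions.
from graphlib import TopologicalSorter

# Each key lists the tasks it depends on (its predecessors).
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
}

# static_order() yields tasks only after all their dependencies.
order = list(TopologicalSorter(dag).static_order())
```

Real orchestrators add retries, scheduling, and parallel execution of independent branches on top of exactly this ordering guarantee.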
Posted 2 weeks ago
6.0 - 11.0 years
8 - 13 Lacs
Gurugram, Bengaluru
Work from Office
About the Role: Grade Level (for internal use): 10 S&P Global - Mobility The Role: Senior Business Analyst - Data Engineering The Team: We are seeking a Senior Business Analyst for the Data Engineering Team. You will be responsible for bridging the gap between business needs and technical solutions. You will collaborate with stakeholders to gather requirements, analyze data workflows, and ensure the successful delivery of data-driven projects. The Impact: In this role, you will have the opportunity to work in an Agile team, ensuring we meet our customer requirements and deliver impactful, quality data. Using your technical skills, you will contribute to data analysis, design and implement complex solutions, and support the business strategy. Responsibilities: Collaborate with business stakeholders to identify and document requirements for data engineering projects. Analyze existing data processes and workflows to identify opportunities for improvement and optimization. Work closely with data engineers and data scientists to translate business requirements into technical specifications. Conduct data analysis and data validation to ensure accuracy and consistency of data outputs. Develop and maintain documentation related to data processes, requirements, and project deliverables. Facilitate communication between technical teams and business stakeholders to ensure alignment on project goals and timelines. Participate in project planning and prioritization discussions, providing insights based on business needs. Support user acceptance testing (UAT) and ensure that solutions meet business requirements before deployment. Utilize Jira for project tracking, issue management, and to facilitate Agile project management practices. Stay updated on industry trends and best practices in data engineering and analytics. What We're Looking For: Minimum of 6 years of experience as a Business Analyst in a data engineering environment. 
Strong understanding of data engineering concepts, data modeling, and ETL processes. Proficiency in data visualization tools (e.g., Tableau, Power BI) and SQL for data analysis. Experience with Jira for project management and tracking. Excellent analytical and problem-solving skills, with a keen attention to detail. Strong communication and interpersonal skills, with the ability to work collaboratively in a team environment. Experience with Agile methodologies and project management tools is a must. Return to Work: Have you taken time out for caring responsibilities and are now looking to return to work? As part of our Return-to-Work initiative, we are encouraging enthusiastic and talented returners to apply, and will actively support your return to the workplace. Statement: S&P Global delivers essential intelligence that powers decision making. We provide the world's leading organizations with the right data, connected technologies and expertise they need to move ahead. As part of our team, you'll help solve complex challenges that equip businesses, governments and individuals with the knowledge to adapt to a changing economic landscape. S&P Global Mobility uses invaluable insights captured from automotive data to help our clients understand today's market, reach more customers, and shape the future of automotive mobility. About S&P Global Mobility: At S&P Global Mobility, we provide invaluable insights derived from unmatched automotive data, enabling our customers to anticipate change and make decisions with conviction. Our expertise helps them to optimize their businesses, reach the right consumers, and shape the future of mobility. We open the door to automotive innovation, revealing the buying patterns of today and helping customers plan for the emerging technologies of tomorrow. For more information, visit www.spglobal.com/mobility .
---- 20 - Professional (EEO-2 Job Categories-United States of America), PDMGDV202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority Ratings - (Strategic Workforce Planning)
Posted 2 weeks ago
5.0 - 7.0 years
20 - 35 Lacs
Bengaluru
Work from Office
Key Responsibilities: Design, develop, and optimize scalable data pipelines using Databricks (PySpark, Scala, SQL). Implement ETL/ELT workflows for large-scale data integration across cloud and on-premises environments. Leverage Microsoft Fabric (Data Factory, OneLake, Lakehouse, DirectLake, etc.) to build unified data solutions. Collaborate with data architects, analysts, and stakeholders to deliver business-critical data models and pipelines. Monitor and troubleshoot performance issues in data pipelines. Ensure data governance, quality, and security across all data assets. Work with Delta Lake, Unity Catalog, and other modern data lakehouse components. Automate and orchestrate workflows using Azure Data Factory, Databricks Workflows, or Microsoft Fabric pipelines. Participate in code reviews, CI/CD practices, and agile ceremonies. Required Skills: 5–7 years of experience in data engineering, with strong exposure to Databricks. Proficient in PySpark, SQL, and performance tuning of Spark jobs. Hands-on experience with Microsoft Fabric components. Experience with Azure Synapse, Data Factory, and Azure Data Lake. Understanding of Lakehouse architecture and modern data mesh principles. Familiarity with Power BI integration and semantic modeling (preferred). Knowledge of DevOps, CI/CD for data pipelines (e.g., using GitHub Actions, Azure DevOps). Excellent problem-solving, communication, and collaboration skills.
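The responsibilities above center on ETL/ELT transformation steps. As a purely illustrative sketch of what one such cleaning step does (plain Python rather than PySpark so it runs anywhere; the field names and rules are hypothetical, not from any actual pipeline):

```python
from datetime import datetime

def clean_records(raw_rows):
    """Illustrative ETL step: cast types, drop malformed rows, dedupe by id.

    raw_rows: list of dicts like {"id": "1", "amount": "10.5", "ts": "2024-01-01"}.
    Returns cleaned dicts, keeping the last record seen per id.
    """
    cleaned = {}
    for row in raw_rows:
        try:
            rec = {
                "id": int(row["id"]),
                "amount": float(row["amount"]),
                "ts": datetime.strptime(row["ts"], "%Y-%m-%d"),
            }
        except (KeyError, ValueError):
            continue  # a real pipeline would quarantine malformed rows instead
        cleaned[rec["id"]] = rec  # last write wins, so duplicates collapse
    return list(cleaned.values())
```

On Databricks the same logic would typically be expressed as PySpark DataFrame transformations writing to a Delta table; the sketch only shows the shape of the cleaning rules, not the production API.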
Posted 2 weeks ago
4.0 - 7.0 years
6 - 10 Lacs
Gurugram
Work from Office
About the Role: Grade Level (for internal use): 10 The Team: Join the TeraHelix team within S&P Global's Enterprise Data Organisation (EDO). We are a dynamic group of highly skilled engineers dedicated to building innovative data solutions that empower businesses. Our team works collaboratively on foundational data products, leveraging cutting-edge technologies to solve real-world client challenges. The Impact: As part of the TeraHelix team, you will contribute to the development of our marquee AI-enabled data products, including TeraHelix's GearBox, ETL Mapper and Data Studio solutions. Your work will directly impact our clients by enhancing their data capabilities and driving significant business value. What's in it for you: Opportunity to work on a distributed, cloud-native, fully Java tech stack (Java 21+) with UI components built in the Vaadin framework. Engage in skill-building and innovation opportunities in a supportive environment. Collaborate with a diverse group of professionals across data, product, and technology disciplines. Contribute to projects that have a tangible impact on the organisation and the industry. Key Responsibilities: Design, develop and maintain robust data pipelines to support data ingestion, transformation and storage. Write efficient SQL queries for data extraction, manipulation and analysis. Utilise Apache Spark and Python for data processing, automation and integration with various data sources. Collaborate with data scientists and stakeholders to understand data requirements and deliver actionable insights. Implement data quality checks and validation processes to ensure data accuracy and reliability. Analyse large datasets to identify trends, patterns and anomalies that inform business decisions. Create and maintain documentation for data processes, workflows and architecture. Stay updated on industry best practices and emerging technologies in data engineering and analysis.
Provide support using data visualisation tools to help stakeholders interpret data effectively. What we're looking for: Bachelor's degree or higher in Computer Science or a related field. Strong experience in SQL for data manipulation and analysis. Proficiency in Spark (Java, SQL or PySpark) and Python for data processing and automation tasks. Solid understanding of data engineering principles and best practices. Experience with data analytics and the ability to derive insights from complex datasets. Familiarity with big data technologies (e.g. Hadoop, Spark) and cloud data platforms (e.g. AWS, Azure, GCP). Familiarity with data visualisation tools (e.g. Power BI, Tableau, Qlik) and Data Science Notebooks (e.g. Jupyter, Apache Zeppelin) to present findings effectively. Knowledge of financial or capital markets to understand business domain requirements. Excellent problem-solving skills and attention to detail. Strong communication skills for collaboration with cross-functional teams. Nice to have: Experience with Java for data processing or integration tasks. Knowledge of ETL (Extract, Transform, Load) processes and tools. Understanding of data warehousing concepts and architecture. Experience with version control systems (e.g. Git, GitHub, Bitbucket, Azure DevOps). Interest in machine learning and data science concepts.
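The SQL extraction and data-quality responsibilities in this listing can be illustrated with a minimal, self-contained sketch. It uses Python's built-in sqlite3 module purely as a stand-in for a production warehouse; the table, column names, and validation rule are hypothetical:

```python
import sqlite3

def top_spenders(rows, min_total):
    """Load rows into an in-memory table, validate them, then aggregate with SQL.

    rows: iterable of (customer, amount) tuples.
    Returns [(customer, total), ...] for customers whose total >= min_total,
    sorted by total descending.
    """
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE txn (customer TEXT NOT NULL, amount REAL NOT NULL)")
    con.executemany("INSERT INTO txn VALUES (?, ?)", rows)
    # Data-quality check before analysis: reject batches with negative amounts.
    (bad,) = con.execute("SELECT COUNT(*) FROM txn WHERE amount < 0").fetchone()
    if bad:
        raise ValueError(f"{bad} rows failed validation (negative amount)")
    cur = con.execute(
        "SELECT customer, SUM(amount) AS total FROM txn "
        "GROUP BY customer HAVING total >= ? ORDER BY total DESC",
        (min_total,),
    )
    return cur.fetchall()
```

The same extract-validate-aggregate pattern carries over to Spark SQL or a warehouse query; only the connection layer changes.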
Posted 2 weeks ago
2.0 - 6.0 years
4 - 8 Lacs
Hyderabad, Ahmedabad, Gurugram
Work from Office
About the Role: Grade Level (for internal use): 09 The Team: As a member of the EDO, Collection Platforms & AI Cognitive Engineering team you will build and maintain enterprise-scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative. What's in it for you: Be part of a global company and deliver solutions at enterprise scale. Collaborate with a hands-on, technically strong team (including leadership). Solve high-complexity, high-impact problems end-to-end. Build, test, deploy, and maintain production-ready pipelines from ideation through deployment. Responsibilities: Develop, deploy, and operate data extraction and automation pipelines in production. Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring). Lead critical stages of the data engineering lifecycle, including: End-to-end delivery of complex extraction, transformation, and ML deployment projects. Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS). Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration. Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback). Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage). Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts. Define and evolve platform standards and best practices for code, testing, and deployment. Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs. Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines. Technical
Requirements: Expert proficiency in Python, including building extraction libraries and RESTful APIs. Hands-on experience with task queues and orchestration: Celery, Redis, Airflow. Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch. Containerization and orchestration. Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints). Proficient in writing tests (unit, integration, load) and enforcing high coverage. Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines. Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB). Strong debugging, performance tuning, and automation skills. Openness to evaluate and adopt emerging tools and languages as needed. Good to have: Master's or Bachelor's degree in Computer Science, Engineering, or related field. 2-6 years of relevant experience in data engineering, automation, or ML deployment. Prior contributions on GitHub, technical blogs, or open-source projects. Basic familiarity with GenAI model integration (calling LLM or embedding APIs).
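This listing calls for strengthening data quality and observability through logging, metrics, and automated alerts. A minimal sketch of such a quality gate, in plain stdlib Python (the threshold, field names, and logger name are hypothetical; a production version would emit to CloudWatch or a metrics backend rather than a logger):

```python
import logging

logger = logging.getLogger("pipeline.quality")

def check_batch(records, required_fields, max_null_rate=0.05):
    """Illustrative data-quality gate for an extraction pipeline.

    Counts records missing any required field, logs the batch metrics, and
    raises (i.e. triggers an alert upstream) if the null rate exceeds the
    configured threshold. Returns the observed null rate otherwise.
    """
    if not records:
        raise ValueError("empty batch")
    bad = sum(
        1 for r in records
        if any(r.get(f) in (None, "") for f in required_fields)
    )
    null_rate = bad / len(records)
    logger.info("batch_size=%d bad=%d null_rate=%.3f", len(records), bad, null_rate)
    if null_rate > max_null_rate:
        raise ValueError(f"null rate {null_rate:.1%} exceeds {max_null_rate:.1%}")
    return null_rate
```

In an Airflow or Celery deployment this check would typically run as its own task, so a raised exception fails the task and fires the configured alerting path.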
Posted 2 weeks ago
7.0 - 12.0 years
6 - 10 Lacs
Hyderabad, Ahmedabad, Gurugram
Work from Office
About the Role: Grade Level (for internal use): 10 The Role: Senior Software Developer The Team: Do you love to collaborate and provide solutions? This team comes together across eight different locations every single day to craft enterprise-grade applications that serve a large customer base with growing demand and usage. You will use a wide range of technologies and cultivate a collaborative environment with other internal teams. The Impact: We focus primarily on developing, enhancing and delivering required pieces of information and functionality to internal and external clients in all client-facing applications. You will have a highly visible role where even small changes have very wide impact. What's in it for you: Opportunities for innovation and learning new state-of-the-art technologies. To work in pure agile and scrum methodology. Responsibilities: Design and implement software-related projects. Perform analyses and articulate solutions. Design underlying engineering for use in multiple product offerings supporting a large volume of end-users. Develop project plans with task breakdowns and estimates. Manage and improve existing solutions. Solve a variety of complex problems and figure out possible solutions, weighing the costs and benefits. What we're looking for: Basic Qualifications: Bachelor's degree in computer science or equivalent. 7+ years related experience. Passionate, smart, and articulate developer. Strong C#, .NET and SQL skills. Experience implementing Web Services (with WCF, RESTful JSON, SOAP, TCP), Windows Services, and Unit Tests. Dependency Injection. Able to demonstrate strong OOP skills. Able to work well individually and with a team. Strong problem-solving skills. Good work ethic, self-starter, and results-oriented. Agile/Scrum experience a plus. Exposure to Data Engineering and Big Data technologies like Hadoop, big data processing engines/Scala, NiFi and ETL is a plus.
Experience in container platforms is a plus. Experience working in cloud computing environments such as AWS, Azure, or GCP is a plus.
Posted 2 weeks ago
5.0 - 10.0 years
24 - 36 Lacs
Noida
Work from Office
Roles: Design and optimise Snowflake data architectures for AI and analytics. Develop scalable OLAP models for real-time and customer insights. Collaborate with AI, engineering, and product teams. Drive data governance, security, and performance. Annual bonus
Posted 2 weeks ago