
6020 Databricks Jobs - Page 48


9.0 - 13.0 years

0 Lacs

Karnataka

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

As a Lead Data Engineer at EY, you will play a crucial role in leading large-scale solution architecture design and optimization to provide streamlined insights to partners throughout the business. You will lead a team of mid-level and senior data engineers and collaborate with the visualization team on data quality and troubleshooting needs.

Your key responsibilities will include implementing data processes for the data warehouse and internal systems, leading a team of junior and senior data engineers in executing data processes, managing data architecture, designing ETL processes, and cleaning, aggregating, and organizing data from various sources before transferring it to data warehouses. You will lead the development, testing, and maintenance of data pipelines and platforms to enable data quality utilization within business dashboards and tools. Additionally, you will support team members and direct reports in refining and validating data sets; create, maintain, and support the data platform and infrastructure; and collaborate with various teams to understand data requirements and design solutions that enable advanced analytics, machine learning, and predictive modeling.

To qualify for this role, you must have a Bachelor's degree in Engineering, Computer Science, Data Science, or a related field, along with 9+ years of experience in software development, data engineering, ETL, and analytics reporting development. You should possess expertise in building and maintaining data and system integrations using dimensional data modeling and optimized ETL pipelines, as well as experience with modern data architecture and frameworks such as data mesh, data fabric, and data product design. Other essential skills include proficiency in data engineering programming languages such as Python, distributed data technologies like PySpark, cloud platforms and tools like Kubernetes and AWS services, relational SQL databases, DevOps, and continuous integration. You should have a deep understanding of database architecture and administration, excellent written and verbal communication skills, strong organizational skills, problem-solving abilities, and the capacity to work in a fast-paced environment while adapting to changing business priorities.

Desired qualifications include a Master's degree in Engineering, Computer Science, Data Science, or a related field, as well as experience in a global working environment. Travel requirements may include access to transportation to attend meetings and the ability to travel regionally and globally. Join EY in building a better working world, where diverse teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate across various sectors.
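Purely for illustration (not EY's actual codebase): a minimal PySpark sketch of the clean/aggregate/load step this role describes. All paths, table names, and columns are hypothetical.

```python
# Hedged sketch only. A minimal PySpark clean/aggregate/load step of the
# kind the role describes; paths, table, and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-sales-etl").getOrCreate()

# Extract: read raw source data (location is an assumption)
raw = spark.read.parquet("/mnt/raw/sales/")

# Transform: deduplicate, drop bad records, standardize to a daily grain
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount").isNotNull() & (F.col("amount") > 0))
       .withColumn("order_date", F.to_date("order_ts"))
)
daily = clean.groupBy("order_date", "region").agg(
    F.sum("amount").alias("total_amount"),
    F.countDistinct("order_id").alias("order_count"),
)

# Load: write the aggregate into a warehouse table, partitioned by date
daily.write.mode("overwrite").partitionBy("order_date").saveAsTable("dw.daily_sales")
```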

Posted 1 week ago

15.0 - 21.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Data Architect with over 15 years of experience, your primary responsibility will be to lead the design and implementation of scalable, secure, and high-performing data architectures. You will collaborate with business, engineering, and product teams to develop robust data solutions that support business intelligence, analytics, and AI initiatives.

Your key responsibilities will include designing and implementing enterprise-grade data architectures using cloud platforms such as AWS, Azure, or GCP. You will lead the definition of data architecture standards, guidelines, and best practices while architecting scalable data solutions like data lakes, data warehouses, and real-time streaming platforms. Collaborating with data engineers, analysts, and data scientists, you will ensure optimal solutions are delivered based on data requirements. In addition, you will oversee data modeling activities encompassing conceptual, logical, and physical data models. It will be your duty to ensure data security, privacy, and compliance with relevant regulations like GDPR and HIPAA. Defining and implementing data governance strategies alongside stakeholders and evaluating data-related tools and technologies are also integral parts of your role.

To excel in this position, you should possess at least 15 years of experience in data architecture, data engineering, or database development. Strong experience in architecting data solutions on major cloud platforms like AWS, Azure, or GCP is essential. Proficiency in data management principles, data modeling, ETL/ELT pipelines, and modern data platforms/tools such as Snowflake, Databricks, and Apache Spark is required. Familiarity with programming languages like Python, SQL, or Java, as well as real-time data processing frameworks like Kafka, Kinesis, or Azure Event Hub, will be beneficial. Moreover, experience in implementing data governance, data cataloging, and data quality frameworks is important. Knowledge of DevOps practices, CI/CD pipelines for data, and Infrastructure as Code (IaC) is a plus. Excellent problem-solving, communication, and stakeholder management skills are necessary for this role. A Bachelor's or Master's degree in Computer Science, Information Technology, or a related field is preferred, along with certifications like Cloud Architect or Data Architect (AWS/Azure/GCP).

Join us at Infogain, a human-centered digital platform and software engineering company, where you will have the opportunity to work on cutting-edge data and AI projects in a collaborative and inclusive work environment. Experience competitive compensation and benefits while contributing to experience-led transformation for our clients in various industries.
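As an illustration of the real-time streaming platforms this posting mentions, here is a hedged Spark Structured Streaming sketch that reads from Kafka and lands events in Delta. The broker address, topic, schema, and paths are all assumptions, not details from the posting.

```python
# Hedged sketch: Spark Structured Streaming from Kafka into Delta.
# Broker, topic, schema, and paths are placeholders; the Kafka source
# requires the spark-sql-kafka package on the cluster.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # assumption
         .option("subscribe", "events")                      # assumption
         .load()
         # Kafka delivers bytes; decode and parse the JSON payload
         .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
         .select("e.*")
)

# Append parsed events to a Delta table, with checkpointing for recovery
(events.writeStream.format("delta")
       .option("checkpointLocation", "/mnt/chk/events")
       .outputMode("append")
       .start("/mnt/delta/events"))
```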

Posted 1 week ago

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

Are you passionate about bringing systems to life? Do you excel in problem-solving and finding innovative solutions? Are you eager to take the lead in enhancing digital client onboarding processes through technology to drive revenue, profitability, and NPS for the firm? We are seeking a Senior Java Engineer to join our team and contribute to the development and delivery of an enterprise digital client due diligence platform, which includes Initial Due Diligence (KYC/AML) and Periodic Due Diligence (PKR) platforms.

Your primary responsibilities will include:
- Designing, developing, implementing, and managing technology platforms within the client onboarding/lifecycle management domains
- Utilizing a wide range of full-stack development, security, reliability, and integration technologies on the Azure platform to ensure the delivery of a robust and scalable platform
- Integrating with various systems such as advisory workstations, client due-diligence systems, case management systems, data/document management, and workflow management to provide a seamless experience for clients and advisors
- Managing the technical roadmap of the platform to continuously evaluate and onboard modern capabilities that deliver business value
- Establishing and nurturing partnerships with cross-functional teams, including banking and wealth management businesses, risk/regulatory/compliance, records management, cloud infrastructure, security, and the architecture office to align the platform with the firm's requirements

As part of your role, you will be responsible for platforms and projects related to Periodic KYC Reviews, working in Pune, India, and collaborating with team members across the US, Poland, and India.

Key requirements:
- 8+ years of experience in Java/J2EE, React JS, Kafka, REST APIs, microservices, and event-driven architecture
- Strong hands-on expertise in designing, developing, and delivering large, scalable, and distributed systems
- Familiarity with application frameworks like Spring Boot and Micronaut
- Good understanding of cloud technologies, particularly Docker, Kubernetes, and other cloud-native services, preferably in Microsoft Azure
- Knowledge of Azure Data Factory (ADF), Azure Data Lake (ADLS), and Databricks is a plus
- Proficiency in SQL and data analysis, with experience in NoSQL databases like CosmosDB or MongoDB being advantageous
- Solid UNIX/Linux experience and scripting skills, including scheduling/automation tools like AutoSys or TWS
- Excellent communication skills, a team-player mindset, and experience working in Agile development processes
- Strong problem-solving and debugging skills, with the ability to lead a team across geographies and manage a business-critical platform
- Familiarity with the finance industry and service provider culture

Join us at UBS, the world's largest global wealth manager, and be part of a diverse, inclusive, and dynamic team that values collaboration and innovation. If you are ready to make an impact and be part of #teamUBS, apply now and explore opportunities for professional growth and development.

Posted 1 week ago

3.0 years

0 Lacs

Kochi, Kerala, India

On-site

We are seeking an experienced and hands-on Senior Azure Data Engineer with Power BI expertise to take on a dual role that combines technical leadership and active development. You will lead BI and data engineering efforts for enterprise-grade analytics solutions using Power BI, Azure Data Services, and Databricks, contributing to both design and delivery. This position offers the opportunity to work in a collaborative environment and play a key role in shaping our cloud-first data architecture.

Key responsibilities:
- Lead and participate in the design, development, and deployment of scalable BI and data engineering solutions.
- Build robust data pipelines and integrations using Azure Data Factory (ADF) and Databricks (Python/Scala).
- Develop interactive and insightful Power BI dashboards and reports, with strong expertise in DAX and Power Query.
- Collaborate with stakeholders to gather business requirements and translate them into technical solutions.
- Optimize data models and ensure best practices in data governance, security, and performance tuning.
- Manage and enhance data warehousing solutions, ensuring data consistency, availability, and reliability.
- Work in an Agile/Scrum environment with product owners, data analysts, and engineers.
- Mentor junior developers and ensure code quality and development standards are maintained.

Technical must-haves:
- Power BI: 3+ years of hands-on experience in developing complex dashboards and DAX queries.
- Databricks (Python/Scala): 3 to 4+ years of experience in building scalable data engineering solutions.
- Azure Data Factory (ADF): strong experience in orchestrating and automating data workflows.
- SQL: advanced skills in writing and optimizing queries for data extraction and transformation.
- Solid understanding of data warehousing concepts, star/snowflake schemas, and ETL/ELT practices.

Nice to have:
- Knowledge of Azure Synapse Analytics, Azure Functions, or Logic Apps.
- Experience with CI/CD pipelines for data deployments.
- Familiarity with Azure DevOps, Git, or similar version control systems.
- Exposure to data lake architecture, Delta Lake, or medallion architecture.

Posted 1 week ago

2.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Machine Learning Engineer at our company, you will be responsible for developing and implementing various ML libraries and building data pipelines to address a range of challenges. Your role will require a deep understanding of Machine Learning principles, along with proficiency in utilizing available tools and libraries. Moreover, you should possess expert knowledge of Big Data technologies, with a particular focus on Python and/or Databricks. Additionally, your experience in natural language processing (NLP) will be highly valuable in this role.

Ideal candidates will have a background as a software engineer or data scientist, demonstrating strong coding skills in Python and/or R. Familiarity with Linux and AWS is essential, while experience with Kubernetes would be considered a significant advantage. Furthermore, you should be able to showcase your ability to construct neural network models for natural language applications. Proficiency in utilizing Jupyter notebooks for research and data analytics is also a key requirement for this position.

Experience: 2 to 2+ years
Positions: 2
Location: Indore/Chennai, IN

To apply for this position, please send your application to jobs@bitcot.com / hr@bitcot.com or contact us at (+91) 8103678419. Join us on this exciting journey of leveraging Machine Learning to drive innovation and solve complex problems.
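For readers unfamiliar with where NLP work like this typically starts, here is a tiny, self-contained scikit-learn sketch: TF-IDF features feeding a linear classifier. The data is invented for demonstration, and a real project would progress to the neural models the posting mentions.

```python
# Illustrative baseline only -- the data below is invented. TF-IDF features
# feeding a linear classifier is a common first NLP baseline in Python
# before moving to neural models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great product", "terrible support", "loved it", "awful experience"]
labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["support was great"]))  # predicted label for new text
```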

Posted 1 week ago

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We are seeking an experienced Python + Databricks developer to join our data engineering team. The ideal candidate will have a strong background in Python programming and data processing, plus hands-on experience with Databricks for building and optimizing data pipelines.

Key responsibilities:
- Design, develop, and maintain scalable data pipelines using Databricks and Apache Spark.
- Write efficient Python code for data transformation, cleansing, and analytics.
- Collaborate with data scientists, analysts, and engineers to understand data needs and deliver high-performance solutions.
- Optimize and tune data pipelines for performance and cost efficiency.
- Implement data validation, quality checks, and monitoring.
- Work with cloud platforms (preferably Azure or AWS) to manage data workflows.
- Ensure best practices in code quality, version control, and documentation.

Required skills and experience:
- 5+ years of professional experience in Python development.
- 3+ years of hands-on experience with Databricks (including notebooks, clusters, Delta Lake, and job orchestration).
- Strong experience with Spark (PySpark preferred).
- Proficiency in large-scale data processing and ETL/ELT pipelines.
- Solid understanding of data warehousing concepts and SQL.
- Experience with Azure Data Factory, AWS Glue, or other data orchestration tools is a plus.
- Familiarity with version control tools like Git.
- Excellent problem-solving and communication skills.
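A minimal sketch (not the employer's actual pipeline) of the pattern this posting names: a PySpark transformation with a simple data-quality gate before appending to Delta Lake. Paths and column names are assumptions.

```python
# Hedged sketch of a PySpark transformation with a validation gate before
# writing to Delta Lake. Paths and columns are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

orders = spark.read.json("/mnt/landing/orders/")
cleaned = orders.filter(F.col("order_id").isNotNull()).dropDuplicates(["order_id"])

# Validation: fail fast if cleaning dropped more than 5% of the rows
total, kept = orders.count(), cleaned.count()
if total and kept / total < 0.95:
    raise ValueError(f"Quality gate failed: kept only {kept} of {total} rows")

cleaned.write.format("delta").mode("append").save("/mnt/delta/orders")
```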

Posted 1 week ago

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

The role of a member of the RSD EU Data team at UST involves being part of a team of data engineers and report developers. As a team member, your responsibilities will include delivering data pipelines on EDP, developing Power BI reports, gathering and analyzing business requirements, transitioning the legacy warehouse setup to EDP, and providing production support for CCG data initiatives in the EU.

To excel in this role, you should possess strong SQL skills and a data mindset, along with prior experience in major DW implementations. The project will require you to write complex SQL queries and learn a SQL-heavy framework. Effective communication skills are essential in this role to clearly explain technical details.

In this dynamic project environment, you will be expected to wear multiple hats, taking on roles such as Data Engineer, Business Analyst, and Data Analyst as the project requires. It is crucial to be open to learning new technologies and to have the ability to grasp the bigger picture while focusing on your assigned tasks. Extensive experience in data and semantic modeling, as well as strong analytical and problem-solving skills, are key requirements for this position.

Job locations for this role include Bangalore, Chennai, Hyderabad, Pune, Kolkata, Kochi, Trivandrum, and Noida. The essential skills for this role are SQL, Power BI, and Databricks.

UST is a global digital transformation solutions provider that has been partnering with leading companies worldwide for over 20 years to drive real impact through transformation. With a team of over 30,000 employees across 30 countries, UST is committed to embedding innovation and agility into their clients' organizations, touching billions of lives in the process.

Posted 1 week ago

12.0 - 16.0 years

0 Lacs

Karnataka

On-site

You will be based in either the Bengaluru or Gurugram office as part of the Growth, Marketing & Sales solutions team, primarily aligned with Periscope's technology team. Periscope by McKinsey enables better commercial decisions through actionable insights provided by its platform, which combines intellectual property, prescriptive analytics, and cloud-based tools. With a presence in 26 locations across 16 countries, Periscope has a team of 1,000+ business and IT professionals supported by a network of 300+ experts.

Your responsibilities as a Technology Architect will include shaping and implementing strategic products, leading software development teams, providing thought leadership for product portfolio direction, and managing and evolving architectures and product designs. You will be actively involved in leading complex software development teams, prototyping code, facilitating user story breakdowns, and managing the code delivery process. Your expertise will expand into cloud technologies, DevOps, and continuous delivery domains.

As an active learner, you will identify new ways to deliver impact with people and technology, developing a growth mindset and embracing opportunities to work with various technologies. You will possess a strong understanding of agile engineering practices to guide teams on improvement opportunities and lead the adoption of technical standards and best practices. Additionally, you will provide coaching and mentoring to technical leads and developers to nurture high-performing teams.

Qualifications:
- Bachelor's degree in computer science or an equivalent area; a master's degree is a plus
- 12+ years of experience in software development
- 5+ years of experience in architecting SaaS/web-based customer-facing products and leading engineering teams
- Hands-on experience in designing and building data-intrinsic products
- Proficiency in multiple programming languages and frameworks, with in-depth experience in Scala, Go, or Java
- Experience with Big Data processing technologies like Spark or Databricks
- Knowledge of document stores like Elasticsearch and relational databases like PostgreSQL
- Familiarity with container technologies like Docker and Kubernetes
- Expertise in engineering practices such as code refactoring, microservices, design patterns, test-driven development, continuous integration, and application security
- Strong cloud infrastructure experience with Azure
- Experience in building event-driven systems and working with message queues/topics
- Knowledge of the Agile software development process

Posted 1 week ago

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a skilled professional, you must possess hands-on experience in Databricks, with a strong emphasis on DWH development. Your proficiency in PySpark and architecture, coupled with robust SQL and PL/SQL knowledge, will be advantageous.

Your responsibilities in this role include developing the Databricks DWH, demonstrating expertise across all modules of the Databricks suite, and contributing to framework and pipeline design. Your ability to create complex queries and packages using SQL and PL/SQL will be crucial.

To excel in this position, you should be familiar with Agile and DevOps methodologies, possess excellent attention to detail, and thrive in a collaborative team environment. Meeting deliverables within short sprints and demonstrating strong communication and documentation skills are essential for success.

Key skills: PL/SQL, framework development, PySpark, architecture, DWH, Agile methodologies, SQL, design, Databricks, DevOps, pipeline design, communication skills, documentation skills.

Posted 1 week ago

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

You will be responsible for designing, developing, and implementing data-centric software solutions using various technologies. This includes conducting code reviews, recommending best coding practices, and providing effort estimates for the proposed solutions. Additionally, you will design audit business-centric software solutions and maintain comprehensive documentation for all proposed solutions.

As a key member of the team, you will lead architecture and design efforts for product development and application development for relevant use cases. You will provide guidance and support to team members and clients, implementing best practices of data engineering and architectural solution design, development, testing, and documentation. Your role will require you to participate in team meetings, brainstorming sessions, and project planning activities. It is essential to stay up to date with the latest advancements in the data engineering area to drive innovation and maintain a competitive edge, and to stay hands-on with the design, development, and validation of systems and models deployed. Collaboration with audit professionals to understand business, regulatory, and risk requirements, as well as key alignment considerations for audit, is a crucial aspect of the role, as is driving efforts in the data engineering and architecture practice area.

Mandatory technical and functional skills include a deep understanding of RDBMS (MS SQL Server, Oracle, etc.), strong programming skills in T-SQL, and proven experience in ETL and reporting (MSBI stack, Cognos, Informatica, etc.). Additionally, experience with cloud-centric databases (Azure SQL/AWS RDS), Azure Data Factory (ADF), data warehousing using Synapse or Redshift, understanding and implementation experience of data lakes, and experience in large-scale data processing/ingestion using Databricks APIs, Lakehouse, etc., are required, as is knowledge of MPP databases like Snowflake or Postgres-XL.

Preferred technical and functional skills include an understanding of financial accounting, experience with NoSQL using MongoDB or Cosmos DB, Python coding experience, and an aptitude for emerging data platform technologies like Microsoft Azure Fabric.

Key behavioral attributes for this role include strong analytical, problem-solving, and critical-thinking skills; excellent collaboration skills and the ability to work effectively in a team-oriented environment; excellent written and verbal communication skills; and the willingness to learn new technologies and work on them.

Posted 1 week ago

2.0 - 6.0 years

0 Lacs

Kolkata, West Bengal

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. We are counting on your unique voice and perspective to help EY become even better. Join us and build an exceptional experience for yourself, and a better working world for all.

We are looking for an experienced Data Engineer with expertise in Databricks and Python scripting to enhance our ETL (Extract, Transform, Load) processes. The ideal candidate will have a proven track record of developing and optimizing data pipelines, implementing data solutions, and contributing to the overall data architecture.

Key responsibilities:
- Design, build, and maintain scalable and efficient data pipelines using Databricks and Python.
- Develop ETL processes that ingest and transform data from various sources into a structured and usable format.
- Collaborate with cross-functional teams to gather requirements and deliver data engineering solutions that support business objectives.
- Write and optimize Python scripts for data extraction, transformation, and loading tasks.
- Ensure data quality and integrity by implementing best practices and standards for data engineering.
- Monitor and troubleshoot ETL processes, performing root cause analysis and implementing fixes to improve performance and reliability.
- Document data engineering processes, creating clear and concise technical documentation for data pipelines and architectures.
- Stay current with industry trends and advancements in data engineering technologies and methodologies.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 2 years of experience in data engineering, with a focus on Databricks and Python scripting for ETL implementation.
- Strong understanding of data warehousing concepts and experience with SQL and NoSQL databases.
- Proficiency in Python and familiarity with data engineering libraries and frameworks.
- Experience with cloud platforms (e.g., AWS, Azure) and big data technologies is a plus.
- Excellent problem-solving skills and attention to detail.
- Ability to work independently and as part of a collaborative team.

Working conditions: you will work in an innovative and dynamic environment with a strong emphasis on delivering high-quality data solutions, with the opportunity to work with a diverse team of data professionals and contribute to impactful projects.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
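To make the data quality and monitoring responsibility concrete, here is a hedged PySpark sketch that profiles per-column null rates against a threshold. The staging table name and the 10% threshold are illustrative assumptions, not details from the posting.

```python
# Hedged data-quality monitoring sketch: profile per-column null rates and
# alert when any exceeds a threshold. Table name and threshold are assumed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-monitor").getOrCreate()
df = spark.table("staging.customers")  # hypothetical staging table

# One row of aggregates: the null fraction for every column
null_rates = df.select(
    *[(F.sum(F.col(c).isNull().cast("int")) / F.count(F.lit(1))).alias(c)
      for c in df.columns]
).first().asDict()

breaches = {c: rate for c, rate in null_rates.items() if rate and rate > 0.10}
if breaches:
    # In a real pipeline this would raise, page on-call, or fail the job run
    print(f"Null-rate threshold breached: {breaches}")
```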

Posted 1 week ago

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

Our client is a global information technology, consulting, and business process services company headquartered in India, offering a wide range of services such as IT consulting, application development, business process outsourcing, and digital solutions to clients across diverse industries in over 167 countries. Its focus is on providing technology-driven solutions to enhance efficiency and innovation, contributing significantly to the digital transformation of businesses worldwide.

We are looking for a Software Engineer with 5-8 years of experience who is proficient in Scala programming and has hands-on experience working in a globally distributed team. It is essential for the candidate to have experience with big-data technologies like Spark/Databricks and Hadoop/ADLS. Additionally, experience with cloud platforms such as Azure (preferred), AWS, or Google Cloud is required, along with expertise in building data lakes and data pipelines using Azure, Databricks, or similar tools.

The ideal candidate should be familiar with the development life cycle, including CI/CD pipelines, and have experience in Business Intelligence project development, preferably with the Microsoft SQL Server BI stack (SSAS/SSIS). Data warehousing knowledge, experience designing star schemas and snowflaking, working in source-controlled environments like GitHub or Subversion, and agile methodology are also necessary skills. You should have a proven track record of producing secure and clean code, strong analytical and problem-solving skills, and the ability to work effectively within the complexity and boundaries of a leading global organization. Being a flexible and resilient team player with strong interpersonal skills who takes the initiative to drive projects forward is highly valued. While experience in financial services is a plus, it is not required. Fluency in English is a must for this role.

Interested candidates can respond by submitting their updated resumes. For more job opportunities, please visit Jobs In India - VARITE. If you are not available or interested in this position, you are encouraged to refer potential candidates who might be a good fit. VARITE offers a candidate referral program through which you can earn a one-time referral bonus, based on the experience level of the referred candidate, upon their completion of a three-month assignment with VARITE.

VARITE is a global staffing and IT consulting company that provides technical consulting and team augmentation services to Fortune 500 companies in the USA, UK, Canada, and India. It is a primary and direct vendor to leading corporations in verticals including networking, cloud infrastructure, hardware and software, digital marketing and media solutions, clinical diagnostics, utilities, gaming and entertainment, and financial services. VARITE is an Equal Opportunity Employer committed to creating a diverse and inclusive workplace environment.

Posted 1 week ago

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Supply Chain Data Integration Consultant - Senior

The opportunity
We're looking for senior-level consultants with expertise in data modelling, data integration, data manipulation, and analysis to join the Supply Chain Technology group of our GDS consulting team. This is a fantastic opportunity to be part of a leading firm while being instrumental in the growth of a new service offering. This role demands a highly technical, extremely hands-on data warehouse modelling consultant who will work closely with our EY Partners and external clients to develop new business as well as drive other initiatives on different business needs. The ideal candidate must have a good understanding of the value of data warehouses and ETL, with Supply Chain industry knowledge and proven experience in delivering solutions to different lines of business and technical leadership.

Your key responsibilities
- A minimum of 5+ years of experience in BI/data integration/ETL/DWH solutions on cloud and on-premises platforms such as Informatica/PC/IICS/Alteryx/Talend/Azure Data Factory (ADF)/SSIS/SSAS/SSRS, and experience with a reporting tool like Power BI, Tableau, or OBIEE.
- Performing data analysis and data manipulation as per client requirements.
- Applying expertise in data modelling to simplify business concepts, and creating extensive ER diagrams to help the business in decision-making.
- Working with large, heterogeneous datasets to build and optimize data pipelines, pipeline architectures, and integrated datasets using data integration technologies.
- Developing sophisticated workflows and macros (batch, iterative, etc.) in Alteryx with enterprise data, and designing and developing ETL workflows and datasets in Alteryx for use by the BI reporting tool.
- Performing end-to-end data validation to maintain the accuracy of data sets.
- Supporting client needs by developing SSIS packages in Visual Studio (version 2012 or higher) or Azure Data Factory (extensive hands-on experience implementing data migration and data processing using Azure Data Factory), and by delivering various integrations with third-party applications.
- Pulling data from a variety of data source types using appropriate connection managers as per client needs.
- Developing, customizing, deploying, and maintaining SSIS packages per client business requirements, with thorough knowledge of creating dynamic packages in Visual Studio covering concepts such as reading multiple files, error handling, archiving, configuration creation, and package deployment.
- Working with clients throughout various parts of the implementation lifecycle, with a proactive, solution-oriented mindset and a readiness to learn new technologies for client requirements.
- Analyzing and translating business needs into long-term solution data models, evaluating existing data warehouses or systems, and applying strong knowledge of database structure systems and data mining.

Skills and attributes for success
Deliver large/medium DWH programs, demonstrate expert core consulting skills and an advanced level of Informatica, SQL, PL/SQL, Alteryx, ADF, SSIS, Snowflake, and Databricks knowledge, and bring industry expertise to support delivery to clients. Demonstrate management skills and the ability to lead projects or teams individually, with experience in team management, communication, and presentation.

To qualify for the role, you must have
- 5+ years of ETL experience as a Lead/Architect.
- Expertise in ETL mappings and data warehouse concepts, with the ability to design a data warehouse and present solutions as per client needs.
- Thorough knowledge of Structured Query Language (SQL) and experience working on SQL Server, including SQL tuning and optimization using explain plans and SQL trace files.
- Experience in developing SSIS batch job deployment and scheduling jobs.
- Experience building Alteryx workflows for data integration, modeling, optimization, and data quality.
- Knowledge of Azure components like ADF, Azure Data Lake, and Azure SQL DB.
- Knowledge of data modeling and ETL design, including designing and developing complex mappings, process flows, and ETL scripts.
- In-depth experience in database design and data modeling.

Ideally, you'll also have
- Strong knowledge of ELT/ETL concepts, design, and coding.
- Expertise in data handling to resolve any data issues as per client needs.
- Experience designing and developing DB objects such as tables, views, indexes, materialized views, and analytical functions, and creating complex SQL queries for retrieving, manipulating, checking, and migrating complex datasets.
- Good knowledge of ETL technologies/tools such as Alteryx, SSAS, SSRS, Azure Analysis Services, and Azure Power Apps.
- Good verbal and written communication in English, with strong interpersonal, analytical, and problem-solving abilities.
- Experience interacting with customers to understand business requirement documents and translate them into ETL specifications and high- and low-level design documents.
- Additional knowledge of BI tools such as Power BI or Tableau, and experience with cloud databases and multiple ETL tools, will be preferred.

What we look for
The incumbent should be able to drive ETL infrastructure-related developments. Additional knowledge of complex source system data structures, preferably in the Financial Services industry, and reporting-related developments will be an advantage. This is an opportunity to be part of a market-leading, multi-disciplinary team of 10,000+ professionals in the only integrated global transaction business worldwide, with opportunities to work with EY GDS consulting practices globally across leading businesses in a range of industries.

What working at EY offers
At EY, we're dedicated to helping our clients, from startups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
- Support, coaching, and feedback from some of the most engaging colleagues around.
- Opportunities to develop new skills and progress your career.
- The freedom and flexibility to handle your role in a way that's right for you.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 week ago

6.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

About Gartner IT:
Join a world-class team of skilled engineers who build creative digital solutions to support our colleagues and clients. We make a broad organizational impact by delivering cutting-edge technology solutions that power Gartner. Gartner IT values its culture of nonstop innovation, an outcome-driven approach to success, and the notion that great ideas can come from anyone on the team.

About the role:
The Lead Analytics Engineer will provide technical expertise in designing and building a modern data warehouse in the Azure cloud to meet the data needs of various business units in Gartner. You will be part of the Ingestion Team, bringing data from multiple sources into the data warehouse, and will collaborate with the dashboard, analytics, and business teams to build end-to-end scalable data pipelines.

What you will do:
- Review and analyze business requirements and design technical mapping documents.
- Build new ETL pipelines using Azure Data Factory and Synapse.
- Design, build, and automate data pipelines and applications to support data scientists and business users with their reporting and analytics needs.
- Collaborate on data warehouse architecture and technical design discussions.
- Perform and participate in code reviews, peer inspections, and technical design and specifications, as well as document and review detailed designs.
- Provide status reports to higher management.
- Help define best practices and processes, and maintain service levels and department goals for problem resolution.
- Design and build tabular data models in Azure Analysis Services for seamless integration with Power BI.
- Write efficient SQL queries and DAX (Data Analysis Expressions) to support robust data models, reports, and dashboards.
- Tune and optimize data models and queries for maximum performance and efficient data retrieval.

What you will need:
- 6-8 years of experience in data warehouse design and development.
- Experience in ETL using Azure Data Factory (ADF).
- Experience writing complex T-SQL procedures in Synapse / SQL Data Warehouse.
- Experience analyzing complex code and performance-tuning pipelines.
- Good knowledge of Azure cloud technology and exposure to Azure cloud components.
- Good understanding of business processes and analyzing underlying data.
- Understanding of dimensional and relational modeling.

Nice to have:
- Experience with version control systems (e.g., Git, Subversion).
- Power BI and AAS experience for tabular model design.
- Experience with data intelligence platforms like Databricks.

Who you are:
- Effective time management skills and the ability to meet deadlines.
- Excellent communication skills when interacting with technical and business audiences.
- Excellent organization, multitasking, and prioritization skills.
- A willingness and aptitude to embrace new technologies and ideas and master concepts rapidly.
- Intellectual curiosity, passion for technology, and keeping up with new trends.
- Delivering project work on time, within budget, and with high quality.

Don't meet every single requirement? We encourage you to apply anyway. You might just be the right candidate for this, or other roles.

Who are we?
At Gartner, Inc. (NYSE:IT), we guide the leaders who shape the world. Our mission relies on expert analysis and bold ideas to deliver actionable, objective insight, helping enterprise leaders and their teams succeed with their mission-critical priorities. Since our founding in 1979, we've grown to more than 21,000 associates globally who support ~14,000 client enterprises in ~90 countries and territories. We do important, interesting, and substantive work that matters. That's why we hire associates with the intellectual curiosity, energy, and drive to want to make a difference. The bar is unapologetically high. So is the impact you can have here.

What makes Gartner a great place to work?
Our sustained success creates limitless opportunities for you to grow professionally and flourish personally. We have a vast, virtually untapped market potential ahead of us, providing you with an exciting trajectory long into the future. How far you go is driven by your passion and performance. We hire remarkable people who collaborate and win as a team. Together, our singular, unifying goal is to deliver results for our clients. Our teams are inclusive and composed of individuals from different geographies, cultures, religions, ethnicities, races, genders, sexual orientations, abilities, and generations. We invest in great leaders who bring out the best in you and the company, enabling us to multiply our impact and results. This is why, year after year, we are recognized worldwide as a great place to work.

What do we offer?
Gartner offers world-class benefits, highly competitive compensation, and disproportionate rewards for top performers. In our hybrid work environment, we provide the flexibility and support for you to thrive — working virtually when it's productive to do so and getting together with colleagues in a vibrant community that is purposeful, engaging, and inspiring. Ready to grow your career with Gartner? Join us.

The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status, and to seek to advance the principles of equal employment opportunity. Gartner is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company's career webpage as a result of your disability. You may request reasonable accommodations by calling Human Resources at +1 (203) 964-0096 or by sending an email to ApplicantAccommodations@gartner.com.

Job Requisition ID: 101545

By submitting your information and application, you confirm that you have read and agree to the country or regional recruitment notice linked below applicable to your place of residence. Gartner Applicant Privacy Link: https://jobs.gartner.com/applicant-privacy-policy

For efficient navigation through the application, please only use the back button within the application, not the back arrow within your browser.

Posted 1 week ago

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About Gartner IT:
Join a world-class team of skilled engineers who build creative digital solutions to support our colleagues and clients. We make a broad organizational impact by delivering cutting-edge technology solutions that power Gartner. Gartner IT values its culture of nonstop innovation, an outcome-driven approach to success, and the notion that great ideas can come from anyone on the team.

About the role:
The Lead Analytics Engineer will provide technical expertise in designing and building a modern data warehouse in the Azure cloud to meet the data needs of various business units in Gartner. You will be part of the Ingestion Team, bringing data from multiple sources into the data warehouse, and will collaborate with the dashboard, analytics, and business teams to build end-to-end scalable data pipelines.

What you will do:
- Review and analyze business requirements and design technical mapping documents.
- Build new ETL pipelines using Azure Data Factory and Synapse.
- Design, build, and automate data pipelines and applications to support data scientists and business users with their reporting and analytics needs.
- Collaborate on data warehouse architecture and technical design discussions.
- Perform and participate in code reviews, peer inspections, and technical design and specifications, as well as document and review detailed designs.
- Provide status reports to higher management.
- Help define best practices and processes, and maintain service levels and department goals for problem resolution.
- Design and build tabular data models in Azure Analysis Services for seamless integration with Power BI.
- Write efficient SQL queries and DAX (Data Analysis Expressions) to support robust data models, reports, and dashboards.
- Tune and optimize data models and queries for maximum performance and efficient data retrieval.

What you will need:
- 6-8 years of experience in data warehouse design and development.
- Experience in ETL using Azure Data Factory (ADF).
- Experience writing complex T-SQL procedures in Synapse / SQL Data Warehouse.
- Experience analyzing complex code and performance-tuning pipelines.
- Good knowledge of Azure cloud technology and exposure to Azure cloud components.
- Good understanding of business processes and analyzing underlying data.
- Understanding of dimensional and relational modeling.

Nice to have:
- Experience with version control systems (e.g., Git, Subversion).
- Power BI and AAS experience for tabular model design.
- Experience with data intelligence platforms like Databricks.

Who you are:
- Effective time management skills and the ability to meet deadlines.
- Excellent communication skills when interacting with technical and business audiences.
- Excellent organization, multitasking, and prioritization skills.
- A willingness and aptitude to embrace new technologies and ideas and master concepts rapidly.
- Intellectual curiosity, passion for technology, and keeping up with new trends.
- Delivering project work on time, within budget, and with high quality.

Don't meet every single requirement? We encourage you to apply anyway. You might just be the right candidate for this, or other roles.

Who are we?
At Gartner, Inc. (NYSE:IT), we guide the leaders who shape the world. Our mission relies on expert analysis and bold ideas to deliver actionable, objective insight, helping enterprise leaders and their teams succeed with their mission-critical priorities. Since our founding in 1979, we've grown to more than 21,000 associates globally who support ~14,000 client enterprises in ~90 countries and territories. We do important, interesting, and substantive work that matters. That's why we hire associates with the intellectual curiosity, energy, and drive to want to make a difference. The bar is unapologetically high. So is the impact you can have here.

What makes Gartner a great place to work?
Our sustained success creates limitless opportunities for you to grow professionally and flourish personally. We have a vast, virtually untapped market potential ahead of us, providing you with an exciting trajectory long into the future. How far you go is driven by your passion and performance. We hire remarkable people who collaborate and win as a team. Together, our singular, unifying goal is to deliver results for our clients. Our teams are inclusive and composed of individuals from different geographies, cultures, religions, ethnicities, races, genders, sexual orientations, abilities, and generations. We invest in great leaders who bring out the best in you and the company, enabling us to multiply our impact and results. This is why, year after year, we are recognized worldwide as a great place to work.

What do we offer?
Gartner offers world-class benefits, highly competitive compensation, and disproportionate rewards for top performers. In our hybrid work environment, we provide the flexibility and support for you to thrive — working virtually when it's productive to do so and getting together with colleagues in a vibrant community that is purposeful, engaging, and inspiring. Ready to grow your career with Gartner? Join us.

The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status, and to seek to advance the principles of equal employment opportunity. Gartner is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company's career webpage as a result of your disability. You may request reasonable accommodations by calling Human Resources at +1 (203) 964-0096 or by sending an email to ApplicantAccommodations@gartner.com.

Job Requisition ID: 101545

By submitting your information and application, you confirm that you have read and agree to the country or regional recruitment notice linked below applicable to your place of residence. Gartner Applicant Privacy Link: https://jobs.gartner.com/applicant-privacy-policy

For efficient navigation through the application, please only use the back button within the application, not the back arrow within your browser.

Posted 1 week ago

8.0 - 12.0 years

0 Lacs

Haryana

On-site

The Associate Manager, Compliance Analytics supports the development and implementation of compliance analytics across the global compliance organization. You will be responsible for designing and developing analytical solutions, generating insights, and fostering a culture of ethics and integrity. Your role will involve transforming data from multiple systems, designing the analytics approach, and executing the analytics.

Your responsibilities will include designing, developing, and maintaining analytics that provide business insights, enable risk-intelligent decisions, inform compliance risk assessment processes, and support remediation activities. You will be involved in scoping, gathering requirements, developing analytics models, validating and testing models, and communicating results to stakeholders for analytics projects. Additionally, you will support the execution of analytics initiatives to enhance the Global Compliance program and provide actionable insights through data and analysis to achieve business objectives.

Furthermore, you will contribute to the development of standardized analytics processes and frameworks, including documentation and validation of work. You will also be responsible for setting up sustainable analytics solutions, including data pipelines and automated refresh schedules as necessary. Collaboration with compliance officers, IT, and stakeholders to understand business objectives and provide reliable and accurate reports, insights, and analysis to inform decision-making is an essential aspect of this role.

As part of your duties, you will lead a culture of continuous improvement by enhancing existing databases, data collection methods, statistical methods, technology, procedures, and training. You will partner with data custodians and process experts to ensure the quality of data and definitions to support the building of reliable data models and analysis. Coaching and developing junior team members will also be a key component of this role.

To qualify for this position, you should have 8+ years of relevant work experience in Python, advanced SQL, R, Azure, Databricks, PySpark, and Power BI, with strong knowledge of and experience in advanced analytics tools and languages for analyzing large data sets from multiple sources. A BTech in Computer Science or IT, an MSc in Mathematics or Statistics, or an equivalent qualification from an accredited university is necessary. You should possess a strong understanding of algorithms, mathematical models, statistical techniques, and data mining, with experience implementing statistical and machine learning models. Experience analyzing accounting and other financial data, along with a demonstrated ability to exercise discretion and maintain confidentiality, is essential. Knowledge of and experience with data transformation and cleansing, data modeling, and database concepts are also preferred.
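Purely as an illustration of compliance analytics of this kind (not the team's actual models): a hedged PySpark sketch that flags statistical outliers in payment amounts for review. The table, keys, and the 3-sigma cutoff are assumptions.

```python
# Illustrative sketch only: a simple z-score screen that flags unusually
# large payments per vendor for compliance review. Table, key, and the
# 3-sigma cutoff are hypothetical assumptions.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("compliance-screen").getOrCreate()
payments = spark.table("finance.payments")  # assumed source table

# Standardize each payment against its vendor's mean and spread
w = Window.partitionBy("vendor_id")
scored = payments.withColumn(
    "z", (F.col("amount") - F.mean("amount").over(w)) / F.stddev("amount").over(w)
)

flagged = scored.filter(F.abs(F.col("z")) > 3)  # 3-sigma outliers
flagged.write.mode("overwrite").saveAsTable("compliance.flagged_payments")
```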

Posted 1 week ago

3.0 - 7.0 years

0 Lacs

Maharashtra

On-site

As a Data Engineer, you will be responsible for designing and implementing data ingestion pipelines using Databricks and PySpark, focusing on Autoloader to ensure efficient data processing. You will also be tasked with developing and maintaining processes for handling complex nested JSON files, guaranteeing data integrity and accessibility. Integration and management of data from various APIs to ensure seamless data flow and consistency will be a key part of your role.

Creating and optimizing data models to support analytics and reporting needs, along with optimizing data processing and storage solutions for performance and cost-efficiency, will be essential tasks. Collaboration with data scientists, analysts, and stakeholders to understand data requirements and provide effective solutions is a crucial aspect of this position. Your responsibilities will also include ensuring the accuracy, integrity, and security of data throughout its lifecycle.

To excel in this role, you should have proficiency in Databricks, PySpark, and SQL, with strong experience in Autoloader and handling nested JSON files. Demonstrated experience in API integration, strong problem-solving skills, and excellent communication abilities for effective collaboration with cross-functional teams are also required.

Ideally, you should have 3-5 years of experience in data engineering, data integration, and data modeling. A degree in Computer Science, Engineering, or a related field is preferred. Additionally, experience with cloud platforms like AWS, Azure, or Google Cloud, familiarity with data warehousing concepts and tools, and knowledge of data governance and security best practices would be advantageous for this role.
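A minimal sketch, under stated assumptions, of the two techniques this posting emphasizes: Databricks Auto Loader ingestion (the `cloudFiles` source) and flattening nested JSON. It assumes a Databricks notebook where `spark` is predefined; the paths, schema location, and field names (a `payload` struct containing an `items` array) are hypothetical.

```python
# Hedged sketch: Auto Loader ingestion plus nested-JSON flattening.
# Assumes a Databricks notebook where `spark` is predefined; paths and
# field names are hypothetical.
from pyspark.sql import functions as F

raw = (
    spark.readStream.format("cloudFiles")                 # Auto Loader source
         .option("cloudFiles.format", "json")
         .option("cloudFiles.schemaLocation", "/mnt/chk/schema")
         .load("/mnt/landing/events/")
)

# Flatten the nested payload: promote struct fields to columns, then
# explode the array so each element becomes its own row
flat = (
    raw.select("event_id", "payload.*")                   # struct -> columns
       .withColumn("item", F.explode("items"))            # array -> rows
       .withColumn("sku", F.col("item.sku"))
       .drop("item", "items")
)

(flat.writeStream.format("delta")
     .option("checkpointLocation", "/mnt/chk/events")
     .start("/mnt/delta/events"))
```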

Posted 1 week ago

2.0 - 6.0 years

0 Lacs

Pune, Maharashtra

On-site

About Mindstix Software Labs: Mindstix accelerates digital transformation for the world's leading brands. We are a team of passionate innovators specializing in Cloud Engineering, DevOps, Data Science, and Digital Experiences. Our UX studio and modern-stack engineers deliver world-class products for our global customers, who include Fortune 500 enterprises and Silicon Valley startups. Our work impacts a diverse set of industries: eCommerce, luxury retail, ISV and SaaS, consumer tech, and hospitality. A fast-moving open culture powered by curiosity and craftsmanship, and a team committed to bold thinking and innovation at the intersection of business, technology, and design. That's our DNA.

Roles and Responsibilities: Mindstix is looking for a proficient Data Engineer. You are a collaborative person who takes pleasure in finding solutions to issues that affect the bottom line, enjoys hands-on technical work, and feels a sense of ownership. The role calls for a keen eye for detail, work experience as a data analyst or engineer, and in-depth knowledge of widely used databases and data-analysis technologies. Your responsibilities include:
- Building outstanding domain-focused data solutions with internal teams, business analysts, and stakeholders.
- Applying data engineering practices and standards to develop robust and maintainable solutions.
- Thriving in a fast-paced, service-oriented environment and interacting directly with clients on new features for future product releases.
- Bringing natural problem-solving ability and intellectual curiosity across a breadth of industries and topics.
- Working across the different aspects of data management: data strategy, architecture, governance, data quality, integrity, and data integration.
- Designing incremental and full data load techniques (a minimal sketch follows this posting).

Qualifications and Skills:
- Bachelor's or Master's degree in Computer Science, Information Technology, or allied streams.
- 2+ years of hands-on experience in the data engineering domain, including DWH development.
- Must have experience with end-to-end data warehouse implementation on Azure or GCP.
- Must have SQL and PL/SQL skills, implementing complex queries and stored procedures.
- Solid understanding of DWH concepts such as OLAP, ETL/ELT, RBAC, data modeling, data-driven pipelines, virtual warehousing, and MPP.
- Expertise in Databricks: Structured Streaming, Lakehouse architecture, DLT, data modeling, VACUUM, Time Travel, security, monitoring, dashboards, DBSQL, and unit testing.
- Expertise in Snowflake: monitoring, RBAC, virtual warehousing, query performance tuning, and Time Travel.
- Understanding of Apache Spark, Airflow, Hudi, Iceberg, Nessie, NiFi, Luigi, and Arrow (good to have).
- Strong foundations in computer science, data structures, algorithms, and programming logic.
- Excellent logical reasoning and data interpretation capability.
- Ability to interpret business requirements accurately.
- Exposure to working with multicultural, international customers.
- Experience in the retail, supply chain, CPG, eCommerce, or healthcare industries is a plus.

Who Fits Best:
- You are a data enthusiast and problem solver.
- You are a self-motivated fast learner with a strong sense of ownership and drive.
- You enjoy working in a fast-paced creative environment.
- You appreciate great design, have a strong sense of aesthetics, and a keen eye for detail.
- You thrive in a customer-centric environment, with the ability to actively listen, empathize, and collaborate with globally distributed teams.
- You are a team player who wants to mentor and inspire others to do their best.
- You love expressing ideas and articulate them well, with strong written and verbal English communication and presentation skills.
- You are detail-oriented with an appreciation for craftsmanship.

Benefits:
- Flexible working environment.
- Competitive compensation and perks.
- Health insurance coverage.
- Accelerated career paths.
- Rewards and recognition.
- Sponsored certifications.
- Global customers.
- Mentorship by industry leaders.

Location: This position is primarily based at our Pune (India) headquarters, and all potential hires will work from this location. A modern workplace is deeply collaborative by nature while also demanding a touch of flexibility. We embrace deep collaboration at our offices, with reasonable flexi-timing and hybrid options for seasoned team members.

Equal Opportunity Employer.
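For illustration of the incremental-load technique mentioned above, here is a minimal, hedged PySpark sketch contrasting a full reload with a merge-based incremental load. It assumes a Databricks-style environment with Delta Lake available; the paths, table, `order_id` key, and `updated_at` watermark column are hypothetical, not details from the posting.

```python
# Minimal sketch: two common load patterns with PySpark + Delta Lake.
# Assumes a Databricks-like runtime with Delta Lake; all names are illustrative.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("load-patterns").getOrCreate()

source = spark.read.parquet("/landing/orders/")   # hypothetical landing zone
target_path = "/warehouse/orders_delta"           # hypothetical Delta target


def full_load():
    # Pattern A, full load: replace the entire target with the latest snapshot.
    source.write.format("delta").mode("overwrite").save(target_path)


def incremental_load():
    # Pattern B, incremental load: upsert only rows newer than the
    # target's high-water mark (here, a max over an updated_at column).
    last_loaded = (
        spark.read.format("delta").load(target_path)
        .agg({"updated_at": "max"})
        .collect()[0][0]
    )
    increment = source.filter(source.updated_at > last_loaded)

    (DeltaTable.forPath(spark, target_path).alias("t")
        .merge(increment.alias("s"), "t.order_id = s.order_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())
```

A full load is simpler and self-healing but rereads everything; the merge pattern moves only new rows at the cost of maintaining a reliable watermark column.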

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

coimbatore, tamil nadu

On-site

As a Data Engineering Lead/Architect with 10+ years of experience, you will play a crucial role in architecting and designing data solutions that meet business requirements effectively. You will collaborate with cross-functional teams to design scalable, efficient data architectures, models, and integration strategies, and provide the technical leadership needed to implement data pipelines, ETL processes, and data warehousing solutions.

Your expertise in Snowflake will be key to building and optimizing data warehouses. You will develop and maintain Snowflake data models and schemas, following best practices around cost analysis, resource allocation, and security configuration (a brief query sketch follows this posting). You will also leverage Azure cloud services and the Databricks platform to manage and process large datasets efficiently, building and maintaining data pipelines on Azure.

Implementing data warehousing best practices and ensuring data quality, consistency, and reliability will be part of your responsibilities. You will create and manage data integration processes, including real-time and batch data movement between systems. Your mastery of complex SQL and PL/SQL will enable you to extract, transform, and load data effectively, and to optimize SQL queries and database performance for high-volume data processing. Continuous monitoring and enhancement of data pipelines and data storage systems will be crucial for performance tuning and optimization. You will troubleshoot and resolve data-related issues to minimize downtime while documenting data engineering processes, data flows, and architectural decisions. Collaboration with data scientists, analysts, and stakeholders is essential to ensure data availability and usability. Your role will also involve implementing data security measures and adhering to compliance standards to protect sensitive data.

Beyond the technical skills, you will be expected to drive data engineering strategy, engage in sales and proposal activities, develop customer relationships, lead a technical team, and mentor other team members. You should be able to clarify and translate customer requirements into epics and stories, removing ambiguity and aligning others with your ideas and solutions.

To qualify for this role, you should have a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field, along with over 10 years of experience in data engineering with a strong focus on architecture. Proven expertise in Snowflake, Azure, and Databricks; comprehensive knowledge of data warehousing concepts, ETL processes, and data integration techniques; and exceptional SQL and PL/SQL skills are essential. Certifications in relevant technologies such as Snowflake and Azure are a plus. Strong problem-solving skills, the ability to work in a fast-paced, collaborative environment, and excellent communication skills for conveying technical concepts to non-technical stakeholders are also required.
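As a hedged illustration of the Snowflake query work described above, the sketch below runs a windowed analytic query through the open-source snowflake-connector-python library. The connection parameters, table, and column names are placeholders invented for the example, not details from the posting.

```python
# Minimal sketch: querying Snowflake from Python with a window function.
# Assumes the snowflake-connector-python package; all identifiers are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # placeholder account locator
    user="etl_user",             # placeholder credentials
    password="***",
    warehouse="ANALYTICS_WH",
    database="SALES_DB",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # Rank each customer's orders by value, a typical analytic pattern
    # that benefits from warehouse sizing and query-profile tuning.
    cur.execute("""
        SELECT customer_id,
               order_id,
               order_total,
               RANK() OVER (PARTITION BY customer_id ORDER BY order_total DESC) AS rnk
        FROM orders
        QUALIFY rnk <= 3
    """)
    for row in cur.fetchall():
        print(row)
finally:
    conn.close()
```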

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

karnataka

On-site

The Data Platform team at Abnormal Security is seeking a Software Engineer. In this role, you will partner with the existing team to define and build the next-generation data platform. Your responsibilities will include building, scaling, and maintaining batch data platform offerings, optimizing the performance of offline data processing systems (a brief batch-job sketch follows this posting), and enhancing the reliability and security of offline data storage systems.

To be successful in this role, you must have at least 4 years of experience working on data-intensive applications and distributed systems. Strong programming skills in Python, Golang, or similar languages are required, along with the ability to write clean, efficient, and testable code. You should also have depth in at least one key area of the data platform tech stack, such as batch processing, streaming systems, data orchestration, or data infrastructure, and experience with AWS or a similar public cloud such as Azure or GCP.

Ideal candidates will have prior experience up-leveling the use of Spark or similar frameworks in a tech startup setting to support growth and scale. Experience with Databricks, and experience working on platform teams serving global stakeholders, is a plus.

Joining the Data Platform team at Abnormal Security offers numerous opportunities for career advancement and growth. You will have the chance to solve complex problems, grow into senior technical leadership roles, and position yourself as one of the founding engineers for a new team. Take ownership of your career and contribute to our mission of empowering the engineering team to stop cybercrime as we expand our offerings across clouds and regions.
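As a hedged sketch of the batch-processing work this role describes, here is a small PySpark job that reads a partitioned dataset, applies a partition-pruning filter, aggregates, and writes the result back partitioned by date. The bucket, paths, and column names are illustrative only.

```python
# Minimal sketch: a batch PySpark job with common performance levers
# (partition pruning, a tuned shuffle setting, partitioned output).
# All paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("batch-events")
    .config("spark.sql.shuffle.partitions", "200")  # tune for cluster size
    .getOrCreate()
)

events = spark.read.parquet("s3://data-lake/raw/events/")  # placeholder bucket

daily = (
    events
    .filter(F.col("event_date") == "2024-01-01")   # prunes partitions on the filter column
    .withColumn("hour", F.hour("event_ts"))
    .groupBy("event_date", "hour", "event_type")
    .agg(F.count("*").alias("event_count"))
)

(daily
    .repartition("event_date")                      # align output files with partitioning
    .write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://data-lake/curated/events_hourly/"))
```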

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Our Purpose: Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payment choices, making transactions secure, simple, smart, and accessible. Our technology and innovation, partnerships, and networks combine to deliver a unique set of products and services that help people, businesses, and governments realize their greatest potential.

Title and Summary: MLOps Engineering Director

Overview: The Horizontal Data Science Enablement Team within SSO Data Science is looking for an MLOps Engineering Director who can solve MLOps problems, manage the Databricks platform for the entire organization, build CI/CD and automation pipelines, and lead best practices.

Role and Responsibilities:
- Oversee the administration, configuration, and maintenance of Databricks clusters and workspaces.
- Continuously monitor Databricks clusters for high workloads or excessive usage costs, and promptly alert relevant stakeholders to issues impacting overall cluster health.
- Implement and manage security protocols, including access controls and data encryption, to safeguard sensitive information in adherence with Mastercard standards.
- Facilitate the integration of various data sources into Databricks, ensuring seamless data flow and consistency.
- Identify and resolve issues related to Databricks infrastructure, providing timely support to users and stakeholders.
- Work closely with data engineers, data scientists, and other stakeholders to support their data processing and analytics needs.
- Maintain comprehensive documentation of Databricks configurations, processes, and best practices, and lead participation in security and architecture reviews of the infrastructure.
- Bring MLOps expertise to the table, within the scope of (but not limited to) model monitoring, feature catalogs/stores, model lineage maintenance, and CI/CD pipelines that gatekeep the model lifecycle from development to production (a brief MLflow sketch follows this posting).
- Own and maintain MLOps solutions, either by leveraging open-source solutions or with a third-party vendor.
- Build LLMOps pipelines using open-source solutions; recommend alternatives and onboard products to the solution.
- Maintain services once they are live by measuring and monitoring availability, latency, and overall system health.
- Manage a small team of MLOps engineers.

All About You:
- Master's degree in Computer Science, Software Engineering, or a similar field.
- Strong experience with Databricks and its management of roles and resources.
- Experience in cloud technologies and operations.
- Experience supporting APIs and cloud technologies.
- Experience with MLOps solutions like MLflow.
- Experience performing data analysis, data observability, data ingestion, and data integration.
- 7+ years of DevOps, SRE, or general systems engineering experience.
- 5+ years of hands-on experience with industry-standard CI/CD tools like Git/Bitbucket, Jenkins, Maven, Artifactory, and Chef.
- Experience architecting and implementing data governance processes and tooling (such as data catalogs, lineage tools, role-based access control, and PII handling).
- Strong coding ability in Python or other languages such as Java and C++, plus a solid grasp of SQL fundamentals.
- A systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.

What Could Set You Apart:
- SQL tuning experience.
- Strong automation experience.
- Strong data observability experience.
- Operations experience supporting highly scalable systems.
- Ability to operate in a 24x7 environment spanning global time zones.
- Self-motivation, creative problem-solving, and the ability to keep the lights on for modeling systems.

Corporate Security Responsibility: All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization. It is therefore expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: abide by Mastercard's security policies and practices; ensure the confidentiality and integrity of the information being accessed; report any suspected information security violation or breach; and complete all periodic mandatory security trainings in accordance with Mastercard's guidelines. R-252407
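Since the posting calls out MLflow-based MLOps, here is a hedged, minimal sketch of experiment tracking and model registration with the open-source MLflow API. The experiment name, parameters, and model are placeholders invented for the example, not Mastercard specifics.

```python
# Minimal sketch: tracking a training run and registering the model with MLflow.
# Assumes the mlflow and scikit-learn packages; all names are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("fraud-model-demo")  # hypothetical experiment name

with mlflow.start_run():
    params = {"n_estimators": 100, "max_depth": 6}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))

    # Registering the model makes the version visible to downstream CI/CD
    # gates that promote models from staging to production.
    mlflow.sklearn.log_model(model, "model", registered_model_name="fraud-model-demo")
```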

Posted 1 week ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Data Engineer – C11/Officer (India)

The Role: The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities:
- Developing and supporting scalable, extensible, and highly available data solutions
- Delivering on critical business priorities while ensuring alignment with the wider architectural vision
- Identifying and helping address potential risks in the data supply chain
- Following and contributing to technical standards
- Designing and developing analytical data models

Required Qualifications & Work Experience:
- First-class degree in Engineering/Technology (4-year graduate course)
- 5 to 8 years' experience implementing data-intensive solutions using agile methodologies
- Experience with relational databases and using SQL for data querying, transformation, and manipulation
- Experience modelling data for analytical consumers
- Ability to automate and streamline the build, test, and deployment of data pipelines
- Experience with cloud-native technologies and patterns
- A passion for learning new technologies, and a desire for personal growth through self-study, formal classes, or on-the-job training
- Excellent communication and problem-solving skills

Technical Skills (Must Have):
- ETL: Hands-on experience building data pipelines; proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend, and Informatica
- Big Data: Experience with big data platforms such as Hadoop, Hive, or Snowflake for data storage and processing
- Data Warehousing & Database Management: Understanding of data warehousing concepts and relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
- Data Modeling & Design: Good exposure to data modeling techniques; design, optimization, and maintenance of data models and data structures
- Languages: Proficiency in one or more programming languages commonly used in data engineering, such as Python, Java, or Scala
- DevOps: Exposure to CI/CD platforms, version control, and automated quality control management

Technical Skills (Valuable):
- Ab Initio: Experience developing Co>Op graphs and the ability to tune them for performance; demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, and Continuous>Flows
- Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, and BigQuery, with a demonstrable understanding of the underlying architectures and trade-offs
- Data Quality & Controls: Exposure to data validation, cleansing, enrichment, and data controls (a brief validation sketch follows this posting)
- Containerization: Fair understanding of containerization platforms like Docker and Kubernetes
- File Formats: Exposure to event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, and Delta
- Others: Basics of job schedulers like Autosys; basics of entitlement management

Certification on any of the above topics would be an advantage.
------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Digital Software Engineering ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
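As a hedged illustration of the data quality and controls exposure mentioned above, below is a minimal PySpark validation step that checks row counts, nulls in a business key, and duplicates before a load is allowed to proceed. The paths, `trade_id` key, and failure policy are invented for the example.

```python
# Minimal sketch: pre-load data quality gates in PySpark.
# Paths, column names, and thresholds are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-gates").getOrCreate()
df = spark.read.parquet("/staging/trades/")  # placeholder staging path

errors = []

# Gate 1: the batch must not be empty.
row_count = df.count()
if row_count == 0:
    errors.append("empty batch")

# Gate 2: business keys must be populated.
null_keys = df.filter(F.col("trade_id").isNull()).count()
if null_keys > 0:
    errors.append(f"{null_keys} rows with null trade_id")

# Gate 3: business keys must be unique.
dupes = row_count - df.select("trade_id").distinct().count()
if dupes > 0:
    errors.append(f"{dupes} duplicate trade_id values")

if errors:
    # Failing fast keeps bad data out of the warehouse and leaves
    # the staging area intact for investigation.
    raise ValueError("Data quality gates failed: " + "; ".join(errors))

df.write.mode("append").parquet("/warehouse/trades/")
```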

Posted 1 week ago

Apply

10.0 - 12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Overview: PepsiCo Data BI & Integration Platforms is seeking an experienced Cloud Platform technology leader, responsible for overseeing the design, deployment, and maintenance of the Enterprise Data Foundation cloud infrastructure initiative on Azure/AWS. The ideal candidate will have hands-on experience with AWS/GCP services: Infrastructure as Code (IaC), platform provisioning and administration, cloud network design, cloud security principles, and automation.

Responsibilities:
- Provide guidance and support for application migration, modernization, and transformation projects, leveraging cloud-native technologies and methodologies.
- Implement cloud infrastructure policies, standards, and best practices, ensuring the cloud environment's adherence to security and regulatory requirements.
- Design, deploy, and optimize cloud-based infrastructure using AWS/GCP services that meets the performance, availability, scalability, and reliability needs of our applications and services.
- Drive troubleshooting of cloud infrastructure issues, ensuring timely resolution and root cause analysis, by partnering with the global cloud center of excellence, enterprise application teams, and PepsiCo's premium cloud partners (AWS, GCP).
- Establish and maintain effective communication and collaboration with internal and external stakeholders, including business leaders, developers, customers, and vendors.
- Develop Infrastructure as Code (IaC) to automate provisioning and management of cloud resources.
- Write and maintain scripts for automation and deployment using PowerShell, Python, or the GCP/AWS CLI.
- Work with stakeholders to document architectures, configurations, and best practices.
- Apply knowledge of cloud security principles around data protection, identity and access management (IAM), compliance and regulatory requirements, threat detection and prevention, disaster recovery, and business continuity.
- Performance tuning: monitor performance, identify bottlenecks, and implement optimizations.
- Capacity planning: plan and manage cloud resources to ensure scalability and availability.
- Database design and development: design, develop, and implement databases in Azure/AWS.
- Manage cloud platform operations with a focus on FinOps support, optimizing resource utilization, cost visibility, and governance across multi-cloud environments (a brief cost-reporting sketch follows this posting).

Qualifications:
- Bachelor's degree in Computer Science.
- At least 10 to 12 years of experience in IT cloud infrastructure, architecture, and operations, including security, with at least 8 years in a technical leadership role.
- Strong knowledge of cloud architecture, design, and deployment principles and practices, including microservices, serverless, containers, and DevOps.
- Deep expertise in AWS/GCP big data and analytics technologies, including Databricks, real-time data ingestion, data warehouses, serverless ETL, NoSQL databases, DevOps, Kubernetes, virtual machines, web/function apps, and monitoring and security tools.
- Strong understanding of cloud cost management, with hands-on experience in usage analytics, budgeting, and cost optimization strategies across multi-cloud platforms.
- Proficiency and hands-on experience with Google Cloud integration tools, the GCP platform, Workspace administration, Apigee integration management, security SaaS tools, BigQuery, and other GA-related tools.
- Deep expertise in AWS/GCP networking and security fundamentals, including network endpoints and network security groups, firewalls, external/internal DNS, load balancers, virtual networks, and subnets.
- Proficiency in scripting and automation tools such as PowerShell, Python, Terraform, and Ansible.
- Excellent problem-solving, analytical, and communication skills, with the ability to explain complex technical concepts to non-technical audiences.
- Certifications in AWS/GCP platform administration, networking, and security are preferred.
- Strong self-organization, time management, and prioritization skills.
- A high level of attention to detail, excellent follow-through, and reliability.
- Strong collaboration, teamwork, and relationship-building skills across multiple levels and functions in the organization.
- Ability to listen, establish rapport, and earn credibility as a strategic partner vertically within the business unit or function, as well as with leadership and functional teams.
- Strategic thinker focused on business-value results that utilize technical solutions.
- Strong communication skills in writing, speaking, and presenting.
- Capable of working effectively in a multi-tasking environment.
- Fluent in English.
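Given the posting's emphasis on FinOps and cost visibility, here is a hedged Python sketch that pulls spend by service for a month from AWS Cost Explorer via boto3. It assumes configured AWS credentials with Cost Explorer access; the date range and output format are illustrative only.

```python
# Minimal sketch: AWS spend by service for one month via Cost Explorer.
# Assumes boto3 and configured AWS credentials; dates are illustrative.
import boto3

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer endpoint

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-01-31"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print non-zero spend per service, the raw input for chargeback
# reports and budget alerts in a FinOps workflow.
for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if amount > 0:
        print(f"{service}: ${amount:,.2f}")
```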

Posted 1 week ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Position Overview: ShyftLabs is seeking an experienced Databricks Architect to lead the design, development, and optimization of big data solutions using the Databricks Unified Analytics Platform. This role requires deep expertise in Apache Spark, SQL, Python, and cloud platforms (AWS/Azure/GCP). The ideal candidate will collaborate with cross-functional teams to architect scalable, high-performance data platforms and drive data-driven innovation.

ShyftLabs is a growing data product company founded in early 2020 that works primarily with Fortune 500 companies. We deliver digital solutions built to accelerate business growth across various industries by focusing on creating value through innovation.

Job Responsibilities:
- Architect, design, and optimize big data and AI/ML solutions on the Databricks platform
- Develop and implement highly scalable ETL pipelines for processing large datasets
- Lead the adoption of Apache Spark for distributed data processing and real-time analytics
- Define and enforce data governance, security policies, and compliance standards
- Optimize data lakehouse architectures for performance, scalability, and cost-efficiency (a brief maintenance sketch follows this posting)
- Collaborate with data scientists, analysts, and engineers to enable AI/ML-driven insights
- Oversee and troubleshoot Databricks clusters, jobs, and performance bottlenecks
- Automate data workflows using CI/CD pipelines and infrastructure-as-code practices
- Ensure data integrity, quality, and reliability across all data processes

Basic Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field
- 8+ years of hands-on experience in data engineering, with at least 5 years architecting solutions on Databricks and Apache Spark
- Proficiency in SQL and Python or Scala for data processing and analytics
- Extensive experience with cloud platforms (AWS, Azure, or GCP) for data engineering
- Strong knowledge of ETL frameworks, data lakes, and the Delta Lake architecture
- Hands-on experience with CI/CD tools and DevOps best practices
- Familiarity with data security, compliance, and governance best practices
- Strong problem-solving and analytical skills in a fast-paced environment

Preferred Qualifications:
- Databricks certifications (e.g., Databricks Certified Data Engineer, Spark Developer)
- Hands-on experience with MLflow, Feature Store, or Databricks SQL
- Exposure to Kubernetes, Docker, and Terraform
- Experience with streaming data architectures (Kafka, Kinesis, etc.)
- Strong understanding of business intelligence and reporting tools (Power BI, Tableau, Looker)
- Prior experience working with retail, e-commerce, or ad-tech data platforms

We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees, offering extensive learning and development resources.
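As a hedged sketch of the lakehouse optimization work this role covers, the example below compacts and Z-orders a Delta table, then vacuums files no longer referenced by its transaction log. It assumes a Databricks runtime (OPTIMIZE/ZORDER are Databricks SQL features); the table name, clustering column, and retention window are placeholders.

```python
# Minimal sketch: routine Delta Lake table maintenance on Databricks.
# OPTIMIZE/ZORDER are Databricks features; names are hypothetical.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("delta-maintenance").getOrCreate()

# Compact small files and cluster data by a common filter column,
# which speeds up selective reads on large lakehouse tables.
spark.sql("OPTIMIZE sales.orders ZORDER BY (customer_id)")

# Remove files no longer referenced by the table's transaction log.
# 168 hours (7 days) respects the default retention safety check.
DeltaTable.forName(spark, "sales.orders").vacuum(168)
```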

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Position Overview: ShyftLabs is seeking a skilled Databricks Engineer to support the design, development, and optimization of big data solutions using the Databricks Unified Analytics Platform. This role requires strong expertise in Apache Spark, SQL, Python, and cloud platforms (AWS/Azure/GCP). The ideal candidate will collaborate with cross-functional teams to drive data-driven insights and ensure scalable, high-performance data architectures.

ShyftLabs is a growing data product company founded in early 2020 that works primarily with Fortune 500 companies. We deliver digital solutions built to help accelerate the growth of businesses in various industries by focusing on creating value through innovation.

Job Responsibilities:
- Design, implement, and optimize big data pipelines in Databricks (a brief streaming sketch follows this posting)
- Develop scalable ETL workflows to process large datasets
- Leverage Apache Spark for distributed data processing and real-time analytics
- Implement data governance, security policies, and compliance standards
- Optimize data lakehouse architectures for performance and cost-efficiency
- Collaborate with data scientists, analysts, and engineers to enable advanced AI/ML workflows
- Monitor and troubleshoot Databricks clusters, jobs, and performance bottlenecks
- Automate workflows using CI/CD pipelines and infrastructure-as-code practices
- Ensure data integrity, quality, and reliability in all pipelines

Basic Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field
- 5+ years of hands-on experience with Databricks and Apache Spark
- Proficiency in SQL and Python or Scala for data processing and analysis
- Experience with cloud platforms (AWS, Azure, or GCP) for data engineering
- Strong knowledge of ETL frameworks, data lakes, and the Delta Lake architecture
- Experience with CI/CD tools and DevOps best practices
- Familiarity with data security, compliance, and governance best practices
- Strong problem-solving and analytical skills, with the ability to work in a fast-paced environment

Preferred Qualifications:
- Databricks certifications (e.g., Databricks Certified Data Engineer, Spark Developer)
- Hands-on experience with MLflow, Feature Store, or Databricks SQL
- Exposure to Kubernetes, Docker, and Terraform
- Experience with streaming data architectures (Kafka, Kinesis, etc.)
- Strong understanding of business intelligence and reporting tools (Power BI, Tableau, Looker)
- Prior experience working with retail, e-commerce, or ad-tech data platforms

We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees, offering extensive learning and development resources.
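As a hedged sketch of the streaming pipeline work described above, here is a minimal Spark Structured Streaming job that reads from a Kafka topic and appends to a Delta table with checkpointing. The broker address, topic, schema, and paths are placeholders, not details from the posting.

```python
# Minimal sketch: Kafka -> Delta with Spark Structured Streaming.
# Broker, topic, schema, and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "orders")                     # placeholder topic
    .load()
)

# Kafka delivers bytes; decode the value column and parse the JSON payload.
orders = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("o"))
       .select("o.*")
)

query = (
    orders.writeStream
    .format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/chk/orders")       # enables recovery after restarts
    .start("/warehouse/orders_stream")
)

query.awaitTermination()
```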

Posted 1 week ago

Apply