
5545 Databricks Jobs - Page 21

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

Greater Kolkata Area

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Manager

Job Description & Summary: At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
· Requirement gathering and analysis
· Design of data architecture and data model to ingest data
· Experience with different databases like Synapse, SQL DB, Snowflake, etc.
· Design and implement data pipelines using Azure Data Factory, Databricks, and Synapse
· Create and manage Azure SQL Data Warehouses and Azure Cosmos DB databases
· Extract, transform, and load (ETL) data from various sources into Azure Data Lake Storage
· Implement data security and governance measures
· Monitor and optimize data pipelines for performance and efficiency
· Troubleshoot and resolve data engineering issues
· Hands-on experience with Azure Functions and other components such as real-time streaming
· Oversee Azure billing processes, conducting analyses to ensure cost-effectiveness and efficiency in data operations
· Provide optimized solutions for any problem related to data engineering
· Ability to work with a variety of sources such as relational databases, APIs, file systems, real-time streams, CDC, etc.
· Strong knowledge of Databricks and Delta tables

Mandatory skill sets: SQL, ADF, ADLS, Synapse, PySpark, Databricks, data modelling
Preferred skill sets: PySpark, Databricks
Years of experience required: 7-10 years
Education qualification: B.Tech/MCA and MBA
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Master of Business Administration, Bachelor of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Structured Query Language (SQL)
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Coaching and Feedback, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion {+ 21 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements:
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date:
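
Several of the responsibilities above (ADF/Databricks pipelines landing data from Azure Data Lake Storage into Delta tables) follow a common pattern. Below is a minimal, hedged PySpark sketch of that ADLS-to-Delta ingestion step; the storage account, container, paths, and table name are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch of an ADLS-to-Delta ingestion step on Databricks.
# All paths, the storage account, and the table name are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("adls-to-delta-ingest").getOrCreate()

# Read raw CSV files landed in ADLS Gen2 (abfss:// is the ADLS Gen2 URI scheme)
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("abfss://raw@examplelake.dfs.core.windows.net/sales/2024/")
)

# Light cleansing: drop exact duplicates and stamp the ingestion time
cleaned = raw.dropDuplicates().withColumn("ingested_at", F.current_timestamp())

# Write to a Delta table (assumes a `bronze` schema already exists);
# Delta provides ACID guarantees for the downstream pipeline
cleaned.write.format("delta").mode("append").saveAsTable("bronze.sales_raw")
```

In an ADF-orchestrated setup, a pipeline activity would typically trigger a notebook or job like this on a schedule.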

Posted 6 days ago

Apply

5.0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

On-site

Job Description: NIQ is looking for a Senior Data Engineer to join our Financial Services Engineering team. At NIQ, the Financial Services team uses alternative datasets to help global public equity investors (hedge funds, mutual funds, pension funds) make better investment decisions. We work with some of the largest hedge funds in the world. As a Senior Data Engineer, you will be at the cutting edge of the alternative data space, where you will help maintain and improve our data infrastructure, which enables us to develop market research products and deliver data to our customers. In this role, you would also get the opportunity to work with world-class big data and cloud services such as AWS, Azure, Snowflake, Databricks, dbt, Airflow, and Looker. Apply now to start taking your career to the next level.

Who we are looking for:
- You have a strong entrepreneurial spirit and a thirst to solve difficult challenges through innovation and creativity, with a strong focus on results
- You have a passion for data and the insights it can deliver
- You are intellectually curious with a broad range of interests and hobbies
- You take ownership of your deliverables
- You have excellent analytical and interpersonal skills
- You have excellent communication skills with both technical and non-technical audiences
- You can work with distributed teams situated globally in different geographies
- You want to work in a small team with a start-up mentality
- You can work well under pressure, prioritize work, and stay well organized
- You relish tackling new challenges, paying attention to details, and, ultimately, growing professionally

Responsibilities:
- Develop robust data flows to connect and maintain large-scale data processing systems, data for analytics, and BI systems
- 5+ years of hands-on programming experience with Python and SQL, including familiarity with stored procedures, Snowflake, and dbt
- 5+ years of experience with PySpark to design, optimize, and scale distributed data processing pipelines
- Experience working on data modeling
- Familiarity with a cloud provider (AWS, GCP, Azure) and their data infrastructure services (e.g., S3, EC2)
- Utilize programming languages like Python or JavaScript to build robust data pipelines and implement ETL processes
- Ensure data quality and accessibility for end users
- Recommend ways to improve data reliability, efficiency, and quality
- Collaborate with data scientists, business stakeholders, and IT team members on project goals

Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
- 5+ years’ experience as a Data Engineer, Database Developer, or similar role
- Strong knowledge of and experience with SQL Server, stored procedures, and OLAP BI tools
- Expertise in MDX or similar query languages for complex data analysis
- Experience in building and optimizing ‘big data’ pipelines, architectures, and data sets
- Strong organizational skills with an ability to manage multiple projects and priorities
- Knowledge of Databricks, Spark, Snowflake, or Airflow is a plus

Additional Information:
- Enjoy a flexible and rewarding work environment with peer-to-peer recognition platforms
- Recharge and revitalize with the help of wellness plans made for you and your family
- Plan your future with financial wellness tools
- Stay relevant and upskill yourself with career development opportunities

Our Benefits:
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee Assistance Program (EAP)

About NIQ: NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com. Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion: NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status, or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
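
Since the role pairs Python and SQL with Snowflake, here is a small, hedged sketch of querying Snowflake from Python with the official connector; the account, credentials, warehouse, and the `orders` table are placeholders, not details from the posting.

```python
# Hypothetical sketch: running a windowed SQL query against Snowflake from Python.
# Account, credentials, warehouse, and table names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example-account",
    user="example_user",
    password="example_password",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    # Latest running total per customer, the kind of aggregate a BI flow might feed
    cur.execute(
        """
        SELECT customer_id,
               order_date,
               SUM(amount) OVER (PARTITION BY customer_id ORDER BY order_date) AS running_total
        FROM orders
        QUALIFY ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date DESC) = 1
        """
    )
    for customer_id, order_date, running_total in cur.fetchmany(10):
        print(customer_id, order_date, running_total)
finally:
    conn.close()
```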

Posted 6 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Critical Skills To Possess:
- Advanced working knowledge and experience with relational and non-relational databases
- Advanced working knowledge and experience with API data providers
- Experience building and optimizing Big Data pipelines, architectures, and datasets
- Strong analytic skills related to working with structured and unstructured datasets
- Hands-on experience in Azure Databricks utilizing Spark to develop ETL pipelines
- Strong proficiency in data analysis, manipulation, and statistical modeling using tools like Spark, Python, Scala, SQL, or similar languages
- Strong experience in Azure Data Lake Storage Gen2, Azure Data Factory, Databricks, Event Hub, Azure Synapse
- Familiarity with several of the following technologies: Event Hub, Docker, Azure Kubernetes Service, Azure DWH, API Azure, Azure Function, Power BI, Azure Cognitive Services
- Azure DevOps experience to deploy data pipelines through CI/CD

Preferred Qualifications:
- BS degree in Computer Science or Engineering, or equivalent experience
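
For the Event Hub plus Databricks combination listed above, a streaming ingestion step might look like the following hedged sketch. It assumes the azure-event-hubs-spark connector library is attached to the cluster; the connection string and storage paths are placeholders.

```python
# Hedged sketch of Structured Streaming from Azure Event Hubs into Delta on Databricks.
# Assumes the azure-event-hubs-spark connector is installed on the cluster;
# the connection string and abfss:// paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("eventhub-to-delta").getOrCreate()

conn_str = "Endpoint=sb://example.servicebus.windows.net/;..."  # placeholder
eh_conf = {
    # The connector expects the connection string to be encrypted
    "eventhubs.connectionString":
        spark._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt(conn_str),
}

stream = spark.readStream.format("eventhubs").options(**eh_conf).load()

# Event Hubs delivers the payload as binary in the `body` column
decoded = stream.withColumn("body", F.col("body").cast("string"))

(
    decoded.writeStream
    .format("delta")
    .option("checkpointLocation", "abfss://chk@examplelake.dfs.core.windows.net/events/")
    .start("abfss://bronze@examplelake.dfs.core.windows.net/events/")
)
```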

Posted 6 days ago

Apply

10.0 years

0 Lacs

India

Remote

Job Title: Lead Data Engineer (Databricks Expert – MS BI Stack Preferred)
Experience Level: 10+ Years (with 5+ Years in Databricks) – Immediate Joiners Preferred
Location: Bengaluru (Remote)
Employment Type: Full-Time – Remote

Job Description: We are seeking a Lead Data Engineer with deep hands-on expertise in Databricks to lead the development of scalable data processing pipelines, real-time analytics workflows, and enterprise-level data lakehouses. The ideal candidate will have end-to-end experience in building and optimizing complex data systems on the Databricks platform, and can independently lead technical initiatives, architect solutions, and mentor data teams. While Databricks expertise is mandatory, experience with the Microsoft BI Stack (SSIS, SSRS, SSAS, SSMS) will be considered a strong advantage.

Key Responsibilities:
- Lead the design and implementation of scalable, high-performance data pipelines using Databricks (Delta Lake, PySpark, SQL, MLflow)
- Define and drive data architecture, modeling, and governance strategies
- Build and optimize ETL/ELT workflows and automate data transformation processes
- Collaborate with analysts and data scientists to support advanced analytics and ML model integration
- Ensure cost-effective, reliable, and high-performing data systems in a cloud-native environment
- Translate business requirements into technical solutions with reusable, modular designs
- Set and enforce best practices in code quality, CI/CD, testing, and observability for Databricks pipelines
- Work with the MS BI Stack (SSIS, SSRS, SSAS) to support enterprise reporting systems

Required Qualifications:
- 10+ years of overall experience in data engineering, with 5+ years of strong hands-on Databricks experience
- Proven expertise in PySpark, Databricks SQL, and Delta Lake
- Deep understanding of data lakehouses, distributed systems, and data warehousing
- Strong experience with cloud platforms, preferably Azure
- Proficient in Python and processing large structured/unstructured datasets
- Track record of leading end-to-end Databricks projects, from ingestion to analytics
- Strong experience with CI/CD, Git workflows, job orchestration, and monitoring
- Exceptional problem-solving and performance optimization capabilities
- Experience with the Microsoft BI Stack (SSIS, SSRS, SSAS, SSMS)
- Familiarity with Power BI or similar data visualization tools
- Awareness of data security, compliance, and governance frameworks
- Exposure to Agile/Scrum practices and cross-functional team collaboration

Why Join Us?
- Opportunity to lead high-impact data initiatives using cutting-edge platforms like Databricks
- Innovative and fast-paced culture with a focus on learning and growth
- Access to certifications, learning resources, and mentoring opportunities
- Remote work flexibility with supportive and transparent leadership

How to Apply: If you're a Databricks expert ready to take the lead in driving data engineering excellence, send your resume to 📩 careers@saradysol.com with the subject line "Lead Data Engineer – Databricks (Remote)". You can also apply via LinkedIn's Easy Apply feature. Let's build the future of data together! 🚀
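
As a concrete illustration of the MLflow tracking this role calls for, here is a minimal, hedged sketch of logging a scikit-learn training run; the experiment path and hyperparameter values are illustrative placeholders, not details from the posting.

```python
# Minimal MLflow tracking sketch: train a model, log params, a metric, and the model.
# The experiment path and hyperparameter values are illustrative placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("/Shared/churn-baseline")  # hypothetical workspace path

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=200, max_depth=6, random_state=42)
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    mlflow.log_params({"n_estimators": 200, "max_depth": 6})
    mlflow.log_metric("auc", auc)
    mlflow.sklearn.log_model(model, "model")
```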

Posted 6 days ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Key Skills: PostgreSQL, Cron Jobs, Databricks, Azure, SSIS, Prefect, Data Pipelines, Cloud Data Migration, MSSQL

Roles and Responsibilities:
- Design and implement data models in PostgreSQL databases on cloud environments
- Build and manage transformation pipelines using Databricks for data migration from MSSQL to PostgreSQL
- Schedule and manage automation using cron jobs
- Mentor and guide junior team members
- Work in Azure or any cloud-based environment
- Ensure successful and optimized data migration from MSSQL to PostgreSQL

Experience Requirement:
- 5-10 years of experience in database engineering and data migration
- Hands-on experience in PostgreSQL, cron jobs, Databricks, and Azure
- Experience with data pipelines using SSIS or Prefect is preferred

Education: B.E., B.Tech.
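
The MSSQL-to-PostgreSQL movement described above is commonly done with Spark JDBC reads and writes on Databricks. A hedged sketch follows; hostnames, credentials, and table names are placeholders, and the matching JDBC drivers are assumed to be available on the cluster.

```python
# Hedged sketch: move a table from MSSQL to PostgreSQL with Spark JDBC on Databricks.
# Hostnames, credentials, and table names are placeholders; JDBC drivers assumed present.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mssql-to-postgres").getOrCreate()

# Read the source table from SQL Server
src = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://example-mssql:1433;databaseName=sales")
    .option("dbtable", "dbo.orders")
    .option("user", "reader")
    .option("password", "***")
    .load()
)

# Write it to PostgreSQL
(
    src.write.format("jdbc")
    .option("url", "jdbc:postgresql://example-pg:5432/sales")
    .option("dbtable", "public.orders")
    .option("user", "writer")
    .option("password", "***")
    .mode("append")
    .save()
)
```

For the cron-based scheduling the posting mentions, a wrapper script could run nightly via an entry such as `0 2 * * * /opt/jobs/run_migration.sh` (illustrative).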

Posted 6 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Data Quality Engineer

Responsibilities:
- Collaborate with product, engineering, and customer teams to gather requirements and develop a comprehensive data quality strategy
- Lead data governance processes, including data preparation, obfuscation, integration, slicing, and quality control
- Test data pipelines, ETL processes, APIs, and system performance to ensure reliability and accuracy
- Prepare test data sets, conduct data profiling, and perform benchmarking to identify inconsistencies or inefficiencies
- Create and implement strategies to verify the quality of data products and ensure alignment with business standards
- Set up data quality environments and applications in compliance with defined standards, contributing to CI/CD process improvements
- Participate in the design and maintenance of data platforms and build automation frameworks for data quality testing, including resolving potential issues
- Provide support in troubleshooting data-related issues, ensuring timely resolution
- Ensure all data quality processes and tools align with organizational goals and industry best practices
- Collaborate with stakeholders to enhance data platforms and optimize data quality workflows

Requirements:
- Bachelor’s degree in Computer Science or a related technical field involving coding, such as physics or mathematics
- At least three years of hands-on experience in Data Management, Data Quality verification, Data Governance, or Data Integration
- Strong understanding of data pipelines, data lakes, and ETL testing methodologies
- Proficiency in CI/CD principles and their application in data processing
- Comprehensive knowledge of SQL, including aggregation and window functions
- Experience in scripting with Python or similar programming languages
- Databricks and Snowflake experience is a must, with good exposure to notebooks, SQL editors, etc.
- Experience in developing test automation frameworks for data quality assurance
- Familiarity with Big Data principles and their application in modern data systems
- Experience in data analysis and requirements validation, including gathering and interpreting business needs
- Experience in maintaining QA environments to ensure smooth testing and deployment processes
- Hands-on experience in test planning, test case design, and test result reporting in data projects
- Strong analytical skills, with the ability to approach problems methodically and communicate solutions effectively
- English proficiency at B2 level or higher, with excellent verbal and written communication skills

Nice to have:
- Familiarity with advanced data visualization tools to enhance reporting and insights
- Experience in working with distributed data systems and frameworks like Hadoop
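
A data-quality gate of the kind this role automates can be as simple as a set of PySpark assertions run inside a pipeline; the table and column names below are hypothetical.

```python
# Minimal PySpark data-quality gate: a few declarative checks that fail the job loudly.
# The table and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.table("silver.customers")  # hypothetical table

checks = {
    "no_null_ids": df.filter(F.col("customer_id").isNull()).count() == 0,
    "unique_ids": df.count() == df.select("customer_id").distinct().count(),
    "valid_age_range": df.filter((F.col("age") < 0) | (F.col("age") > 120)).count() == 0,
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    raise AssertionError(f"Data-quality checks failed: {failed}")
print("All data-quality checks passed")
```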

Posted 6 days ago

Apply

15.0 years

0 Lacs

India

On-site

Job Summary: As part of the leadership team for the Data business, this role will be responsible for building and growing the Databricks capability within the organization. The role entails driving technical strategy, innovation, and solution delivery on the Databricks Unified Data Analytics platform. This leader will work closely with clients, delivery teams, technology partners, and internal stakeholders to define and deliver scalable, high-performance solutions using Databricks.

🔧 Key Responsibilities:
- Serve as the Subject Matter Expert (SME) in Databricks for internal teams and external clients
- Lead the growth and maturity of the Databricks practice — define standards, processes, tools, and roadmaps
- Collaborate with sales and solutioning teams to drive pre-sales activities — including client presentations, solution architecture, PoCs, and RFP responses
- Act as a trusted advisor to clients, guiding them on architecture, best practices, and value realization
- Build and mentor a team of engineers and consultants within the practice
- Represent the company in thought leadership initiatives — webinars, blogs, conferences, etc.
- Work cross-functionally with delivery, engineering, and marketing teams to expand practice capabilities
- Stay updated on product releases, industry trends, and customer use cases

✅ Required Qualifications:
- 12–15 years of experience in Data Engineering, Analytics, or AI/ML, with 3–5 years of focused experience on Databricks
- Proficiency in architectural best practices in the cloud around user management, data privacy, data security, performance, and other non-functional requirements
- Proven experience delivering large-scale data and AI/ML workloads on Databricks
- Deep knowledge of Spark, Delta Lake, Python/Scala, SQL, and data pipeline orchestration
- Experience with MLOps, Feature Store, Unity Catalog, and model lifecycle management
- Certification preferred (e.g., Lakehouse Fundamentals, Data Engineer Associate/Professional)
- Experience integrating Databricks with cloud platforms (Azure, AWS, or GCP) and BI tools
- Strong background in solution architecture, presales, and client advisory
- Excellent communication, stakeholder engagement, and leadership skills
- Exposure to data governance, security, compliance, and cost optimization in cloud analytics

Posted 6 days ago

Apply

12.0 years

0 Lacs

India

Remote

About us: Intuitive is an innovation-led engineering company delivering business outcomes for hundreds of enterprises globally. With the reputation of being a Tiger Team and a trusted partner of enterprise technology leaders, we help solve the most complex digital transformation challenges across the following Intuitive Superpowers:

Modernization & Migration
- Application & Database Modernization
- Platform Engineering (IaC/EaC, DevSecOps & SRE)
- Cloud Native Engineering, Migration to Cloud, VMware Exit
- FinOps

Data & AI/ML
- Data (Cloud Native / Databricks / Snowflake)
- Machine Learning, AI/GenAI

Cybersecurity
- Infrastructure Security
- Application Security
- Data Security
- AI/Model Security

SDx & Digital Workspace (M365, G-Suite)
- SDDC, SD-WAN, SDN, NetSec, Wireless/Mobility
- Email, Collaboration, Directory Services, Shared Files Services

Intuitive Services:
- Professional and Advisory Services
- Elastic Engineering Services
- Managed Services
- Talent Acquisition & Platform Resell Services

About the job:
Title: Director - Demand Generation and ABM
Start Date: Immediate
Position Type: Full-time Employment
Location: Remote across India

Key Responsibilities

Account-Based Marketing (ABM) Leadership:
- Design and execute 1:1, 1:few, and 1:many ABM campaigns for strategic enterprise accounts in collaboration with Sales, Solution Engineering, and Practice teams
- Design and launch personalized campaigns across digital, email, direct mail, and field channels
- Leverage firmographic, technographic, and intent data for targeting and messaging
- Coordinate with content, creative, and solutions teams to develop persona- and industry-specific assets
- Use ABM platforms (e.g., 6sense, Demandbase) to orchestrate and optimize campaigns
- Integrate ABM with events, webinars, and executive programs to deepen engagement
- Track and report on ABM performance, pipeline influence, and ROI using tools like HubSpot
- Drive account expansion and retention strategies in collaboration with Customer Success
- Collaborate with Sales and Growth teams to select target accounts, build plans, and align on KPIs
- Continuously test, analyze, and improve campaign effectiveness and account journeys
- Enable the Sales and SDR teams with ABM toolkits, including account playbooks, messaging templates, and insights from tools like Apollo, 6sense, and SalesLoft
- Manage account engagement scoring and pipeline attribution to demonstrate ABM effectiveness and continuously refine targeting strategies

Strategic Demand Generation:
- Design, implement, and measure full-funnel demand generation strategies that build awareness, generate qualified leads, and accelerate pipeline across priority verticals (e.g., healthcare and life sciences, BFSI)
- Develop multi-channel integrated campaigns across paid media, SEM, content syndication, email, social, webinars, and virtual/in-person events to achieve quarterly pipeline and revenue goals
- Collaborate with Sales, SDRs, and Partner Marketing to drive alignment on GTM goals, campaign messaging, ICP targeting, and funnel acceleration tactics
- Continuously evaluate the performance and ROI of campaigns through A/B testing, attribution models, lead velocity, and other performance metrics to optimize programs in real time
- Expand and nurture the marketing database to increase campaign reach, improve segmentation, and drive lead engagement across the buyer's journey

Measurement, Analytics & Optimization:
- Define and own KPIs for demand generation and ABM, including: MQLs and SQLs by channel and segment; marketing-sourced and influenced pipeline; account engagement metrics (reach, depth, influence); lead-to-opportunity conversion rate; and campaign-level ROI and CAC
- Partner with Revenue Operations to ensure accurate tracking, lead scoring, and attribution within HubSpot and campaign analytics dashboards
- Apply insights to scale high-performing programs and pivot away from underperforming tactics based on data-driven decisions

Team Leadership & Cross-Functional Collaboration:
- Establish operating rhythms with Sales, Practice leaders, Marketing, and Partner Alliances to align GTM motions and pipeline goals
- Partner with Content, Creative, and Brand teams to ensure all demand and ABM programs are on-brand and deliver compelling narratives that resonate with target personas
- Evaluate, onboard, and manage agency and vendor partners to scale execution capacity across regions and verticals

Qualifications:
- 12+ years of progressive B2B SaaS marketing experience, with a strong focus on demand generation, ABM, and revenue marketing
- Proven success in building and scaling multi-channel demand generation and ABM programs that drive enterprise growth and measurable pipeline
- Hands-on expertise with marketing and sales tech stacks, including HubSpot, Salesforce, 6sense, LinkedIn Ads, Google Ads, Drift/Chat, and MAP/CRM/ABM tools
- Deep understanding of full-funnel marketing, campaign orchestration, lead lifecycle, and pipeline attribution
- Strong leadership, strategic thinking, and project management skills with a collaborative, cross-functional mindset
- Exceptional analytical, communication, and stakeholder alignment abilities
- Bachelor's degree in Marketing, Business, or a related field; MBA or equivalent advanced degree is strongly preferred

Posted 6 days ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Sr. Data Engineer

About Us: Capco, a Wipro company, is a global technology and management consulting firm. Capco was awarded Consultancy of the Year in the British Bank Awards and has been ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With our presence across 32 cities across the globe, we support 100+ clients in the banking, financial, and energy sectors. We are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO? You will work on engaging projects with some of the largest international and local banks, insurance companies, payment service providers, and other key players in the industry. These projects will transform the financial services industry.

MAKE AN IMPACT: Innovative thinking, delivery excellence, and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK: Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT: With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION: We believe that diversity of people and perspective gives us a competitive advantage.

Job Summary
Position: Sr Consultant
Location: Capco Locations (Bengaluru/Chennai/Hyderabad/Pune/Mumbai/Gurugram)
Band: M3/M4 (8 to 14 years)
Role: Senior Consultant - Data Engineer

Responsibilities:
- Design, build, and optimise data pipelines and ETL processes in Azure Databricks, ensuring high performance, reliability, and scalability
- Implement best practices for data ingestion, transformation, and cleansing to ensure data quality and integrity
- Work within the client's best-practice guidelines as set out by the Data Engineering Lead
- Work with data modellers and testers to ensure pipelines are implemented correctly
- Collaborate as part of a cross-functional team to understand business requirements and translate them into technical solutions

Role Requirements:
- Strong data engineer with experience in financial services
- Knowledge of and experience building data pipelines in Azure Databricks
- A continual desire to implement "strategic" or "optimal" solutions and, where possible, avoid workarounds or short-term tactical solutions
- Ability to work within an Agile team

Experience/Skillset:
- 8+ years' experience in data engineering
- Good skills in SQL, Python, and PySpark
- Good knowledge of Azure Databricks (understanding of Delta tables, Apache Spark, Unity Catalog)
- Experience writing, optimizing, and analyzing SQL and PySpark code, with a robust capability to interpret complex data requirements and architect solutions
- Good knowledge of the SDLC
- Familiarity with Agile/Scrum ways of working
- Strong verbal and written communication skills
- Ability to manage multiple priorities and deliver to tight deadlines

We offer:
- A work culture focused on innovation and creating lasting value for our clients and employees
- Ongoing learning opportunities to help you acquire new skills or deepen existing expertise
- A flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients
- A diverse, inclusive, meritocratic culture
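
A staple of the Azure Databricks pipeline work described above is an idempotent Delta Lake MERGE (upsert). Here is a hedged sketch; the staging and target table names and the join key are hypothetical.

```python
# Hedged sketch of a Delta Lake MERGE (upsert) on Databricks.
# The staging and target table names, and the join key, are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-merge").getOrCreate()

updates = spark.table("staging.trades_updates")      # hypothetical staging table
target = DeltaTable.forName(spark, "silver.trades")  # hypothetical Delta target

(
    target.alias("t")
    .merge(updates.alias("u"), "t.trade_id = u.trade_id")
    .whenMatchedUpdateAll()     # update rows that already exist
    .whenNotMatchedInsertAll()  # insert brand-new rows
    .execute()
)
```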

Posted 6 days ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Company Description
About Grab and Our Workplace: Grab is Southeast Asia's leading superapp. From getting your favourite meals delivered to helping you manage your finances and getting around town hassle-free, we've got your back with everything. At Grab, purpose gives us joy and habits build excellence, while we harness the power of technology and AI to deliver our mission of driving Southeast Asia forward by economically empowering everyone, with heart, hunger, honour, and humility.

Job Description
Get to Know the Team: The GrabFin Analytics team supports the Fintech product, business, and risk organizations. Product Analysts are part of one or more tech families - Fin Core, Fin Experience, Fin Identity, Payments, and Financial Services (Invest, Insure & Lending). This role sits in the Payments tech family. We care about understanding how users experience the product, and we partner with Business, Product, Design, and Tech to focus on the right outcomes and feature set. We remain essential to product development, from understanding user journeys with UX designers to hypothesis development, right through to post-rollout optimization.

Get to Know the Role:
- Use data to understand user needs - be the de facto Voice of the Customer (users, driver-partners, merchants, agents, etc.) for all GFG teams
- Leverage data for further insights to improve decision-making at Grab by developing dashboards, maintaining pipelines, holding metrics reviews, and producing insight decks and experiments
- Champion data-driven decision-making and culture in Grab Financial Group
- Partner with Product Managers, Business Owners, UX Designers, Risk, and Engineering to design and deliver analytical projects that support the GFG product roadmap
- Provide thought leadership and generate data-driven hypotheses to solve key product and business problems

You will report to the Senior Analytics Manager. This is a hybrid role based in Bangalore (3 days work from office every week).

The Critical Tasks You Will Perform:
- Support business-critical dashboard and pipeline maintenance for day-to-day data-driven decisions
- Design and analyze A/B tests and multivariate experiments for UI/UX, layout, contextualization, algorithms, and APIs
- Mine clickstream and transactional data to derive insights on user behavior and drive GFG product metrics
- Own instrumentation for feature releases within assigned tech families in GFG
- Generate segmented customer and merchant insights to refine product iterations and improvements
- Deliver reliable, on-time outputs and build scalable, automated self-serve solutions for stakeholders

Qualifications
What Essential Skills You Will Need:
- Bachelor's/Master's in Statistics, Analytics, Economics, Mathematics, Engineering, or related fields
- 8 years of experience in Analytics, BI, or Data Science, preferably in Internet/E-Commerce with large, high-velocity data
- Strong SQL experience querying large relational databases
- Ability to translate data insights into relevant recommendations for non-technical and senior team members
- Proficiency in Python, Databricks, and Tableau/Power BI, and expertise in A/B testing, hypothesis testing, and DoE principles

Additional Information
Life at Grab: We care about your well-being at Grab; here are some of the global benefits we offer:
- We have your back with Term Life Insurance and comprehensive Medical Insurance
- With GrabFlex, create a benefits package that suits your needs and aspirations
- Celebrate moments that matter in life with loved ones through Parental and Birthday leave, and give back to your communities through Love-all-Serve-all (LASA) volunteering leave
- We have a confidential Grabber Assistance Programme to guide and uplift you and your loved ones through life's challenges
- Balancing personal commitments and life's demands is made easier with our FlexWork arrangements, such as differentiated hours

What We Stand For at Grab: We are committed to building an inclusive and equitable workplace that enables diverse Grabbers to grow and perform at their best. As an equal opportunity employer, we consider all candidates fairly and equally regardless of nationality, ethnicity, religion, age, gender identity, sexual orientation, family commitments, physical and mental impairments or disabilities, and other attributes that make them unique.
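
For the A/B-test analysis responsibilities above, a typical readout boils down to a two-proportion z-test on conversion counts; here is a minimal Python sketch with illustrative numbers only.

```python
# Minimal A/B-test readout: two-proportion z-test on conversion counts.
# The counts below are illustrative only.
from statsmodels.stats.proportion import proportions_ztest

conversions = [420, 470]    # control, variant
exposures = [10000, 10000]  # users exposed to each arm

z_stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Variant lift is statistically significant at the 5% level")
```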

Posted 6 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

The Opportunity: As a Data Engineer, you will be part of the Operation Center, India (INOPC-PG), aiming to develop a global value chain where key business activities, resources, and expertise are shared across geographic boundaries to optimize value for Hitachi Energy customers across markets. As part of the Transformers BU, we provide high-quality engineering and technology to Hitachi Energy worldwide. This is an important step in Hitachi Energy's global footprint strategy.

How You'll Make An Impact:
- Display technical expertise in data analytics, focusing on a team of diversified technical competencies
- Build and maintain accurate and scalable data pipelines and infrastructure, such as SQL warehouses and data lakes, using cloud platforms (e.g., MS Azure, Databricks)
- Proactively work with business stakeholders to understand data lineage, definitions, and methods of data extraction
- Write production-grade SQL and PySpark code to create data architecture
- Consolidate SQL databases from multiple sources, and perform data cleaning and manipulation in preparation for analytics and machine learning
- Use data visualization tools such as Power BI to create professional-quality dashboards and reports
- Write good-quality documentation for data processing for different projects to ensure reproducibility
- Ensure compliance with applicable external and internal regulations, procedures, and guidelines
- Live Hitachi Energy's core values of safety and integrity, which means taking responsibility for your own actions while caring for your colleagues and the business

Your Background:
- BE/B.Tech in Computer Science, Data Science, or a related discipline, and at least 5 years of related working experience
- 5 years of data engineering experience, with an understanding of lakehouse architecture, data integration frameworks, ETL/ELT pipelines, orchestration/monitoring, and star-schema data modeling
- 5 years of experience with Python/PySpark and SQL (proficient in PySpark, Python, and Spark SQL)
- 2-3 years of hands-on data engineering experience using Databricks as the main tool (meaning more than 60% of the time spent in Databricks, not just occasional use)
- 2-3 years of hands-on experience with different Databricks components (DLT, Workflows, Unity Catalog, SQL Warehouse, CI/CD) in addition to using notebooks
- Experience in Microsoft Power BI
- Proficiency in both spoken and written English

Hitachi Energy is a global technology leader that is advancing a sustainable energy future for all. We serve customers in the utility, industry, and infrastructure sectors with innovative solutions and services across the value chain. Together with customers and partners, we pioneer technologies and enable the digital transformation required to accelerate the energy transition towards a carbon-neutral future. We employ around 45,000 people in 90 countries who each day work with purpose and use their different backgrounds to challenge the status quo. We welcome you to apply today and be part of a global team that appreciates a simple truth: Diversity + Collaboration = Great Innovation.
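
The multi-source consolidation step above is often a pair of JDBC reads unioned and cleaned in PySpark before landing in a Delta table. A hedged sketch follows; all connection strings, credentials, and table names are placeholders.

```python
# Hedged sketch of consolidating two relational sources into one cleaned Delta table.
# Connection strings, credentials, and table names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("consolidate-sources").getOrCreate()

def read_jdbc(url: str, table: str):
    """Read one relational source via Spark's generic JDBC reader."""
    return (
        spark.read.format("jdbc")
        .option("url", url)
        .option("dbtable", table)
        .option("user", "reader")
        .option("password", "***")
        .load()
    )

emea = read_jdbc("jdbc:sqlserver://emea-db:1433;databaseName=ops", "dbo.work_orders")
apac = read_jdbc("jdbc:postgresql://apac-db:5432/ops", "public.work_orders")

combined = (
    emea.unionByName(apac, allowMissingColumns=True)  # align columns by name
    .dropDuplicates(["order_id"])
    .withColumn("region", F.coalesce(F.col("region"), F.lit("UNKNOWN")))
)
combined.write.format("delta").mode("overwrite").saveAsTable("analytics.work_orders")
```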

Posted 6 days ago

Apply

3.0 years

0 Lacs

Greater Kolkata Area

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Microsoft
Management Level: Associate

Job Description & Summary: At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. Those in software engineering at PwC will focus on developing innovative software solutions to drive digital transformation and enhance business performance. In this field, you will use your knowledge to design, code, and test cutting-edge applications that revolutionise industries and deliver exceptional user experiences.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
1. Must have a minimum of 3 years of experience in data modelling and data visualization with Microsoft Power BI
2. Can develop and design Power BI dashboards and publish them to the Power BI service; good knowledge of data gateways
3. Must have a strong background in writing DAX, SQL queries, and Python
4. Can work independently to perform data analysis and build visualizations
5. Good to have exposure to Azure Data Factory, Power Automate, and Databricks
6. Good communication skills and a team player
7. Should have cleared the PL-300 / DA-100 certification

Mandatory skill sets: Power BI Developer
Preferred skill sets: Azure Data Factory, Power Automate, Databricks
Years of experience required: 2-4 years
Education qualification: B.E./B.Tech/MCA
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Technology, Bachelor of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Azure Data Factory, Microsoft Power Automate, Power BI
Optional Skills: Acceptance Test Driven Development (ATDD), Accepting Feedback, Active Listening, Android, API Management, Appian (Platform), Application Development, Application Frameworks, Application Lifecycle Management, Application Software, Business Process Improvement, Business Process Management (BPM), Business Requirements Analysis, C#.NET, C++ Programming Language, Client Management, Code Review, Coding Standards, Communication, Computer Engineering, Computer Science, Continuous Integration/Continuous Delivery (CI/CD), Debugging, Emotional Regulation {+ 41 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements:
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date:

Posted 6 days ago

Apply

3.0 years

0 Lacs

Greater Kolkata Area

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Microsoft
Management Level: Associate

Job Description & Summary: At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. Those in software engineering at PwC will focus on developing innovative software solutions to drive digital transformation and enhance business performance. In this field, you will use your knowledge to design, code, and test cutting-edge applications that revolutionise industries and deliver exceptional user experiences.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
1. Must have a minimum of 3 years of experience in data modelling and data visualization with Microsoft Power BI
2. Can develop and design Power BI dashboards and publish them to the Power BI service; good knowledge of data gateways
3. Must have a strong background in writing DAX, SQL queries, and Python
4. Can work independently to perform data analysis and build visualizations
5. Good to have exposure to Azure Data Factory, Power Automate, and Databricks
6. Good communication skills and a team player
7. Should have cleared the PL-300 / DA-100 certification

Mandatory skill sets: Power BI Developer
Preferred skill sets: Azure Data Factory, Power Automate, Databricks
Years of experience required: 2-4 years
Education qualification: B.E./B.Tech/MCA
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Engineering, Bachelor of Technology
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Azure Data Factory, Microsoft Power Automate, Power BI
Optional Skills: Acceptance Test Driven Development (ATDD), Accepting Feedback, Active Listening, Android, API Management, Appian (Platform), Application Development, Application Frameworks, Application Lifecycle Management, Application Software, Business Process Improvement, Business Process Management (BPM), Business Requirements Analysis, C#.NET, C++ Programming Language, Client Management, Code Review, Coding Standards, Communication, Computer Engineering, Computer Science, Continuous Integration/Continuous Delivery (CI/CD), Debugging, Emotional Regulation {+ 41 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements:
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date:

Posted 6 days ago

Apply

3.0 - 5.0 years

2 - 7 Lacs

Hyderābād

On-site

Sr. Data Scientist (Predictive Analytics Focus & Databricks) – Senior Programmer Analyst

Responsibilities:
- Design and deploy predictive models (e.g., forecasting, churn analysis, fraud detection) using Python/SQL, Spark MLlib, and Databricks ML
- Build end-to-end ML pipelines (data ingestion → feature engineering → model training → deployment) on the Databricks Lakehouse
- Optimize model performance via hyperparameter tuning, AutoML, and MLflow tracking
- Collaborate with engineering teams to operationalize models (batch/real-time) using Databricks Jobs or REST APIs
- Implement Delta Lake for scalable, ACID-compliant data workflows
- Enable CI/CD for ML pipelines using Databricks Repos and GitHub Actions
- Troubleshoot issues in Spark jobs and the Databricks environment

Requirements:
- 3 to 5 years of experience in predictive analytics, with expertise in regression, classification, and time-series modeling
- Hands-on experience with Databricks Runtime for ML, Spark SQL, and PySpark
- Familiarity with MLflow, Feature Store, and Unity Catalog for governance
- Industry experience in Life Insurance or P&C
- Databricks Certified ML Practitioner certification is good to have

Technical Skills:
- Python, PySpark, MLflow, Databricks AutoML
- Predictive modelling (classification, clustering, regression, time series, and NLP)
- Cloud platform (Azure/AWS), Delta Lake, Unity Catalog
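
The hyperparameter-tuning plus MLflow-tracking combination in the responsibilities above is commonly implemented with Hyperopt, which ships with the Databricks ML runtime. Here is a hedged sketch; the model, search space, and evaluation budget are illustrative placeholders.

```python
# Hedged sketch of hyperparameter tuning with Hyperopt, tracked in MLflow.
# The model, search space, and evaluation budget are illustrative placeholders.
import mlflow
from hyperopt import STATUS_OK, fmin, hp, tpe
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, random_state=0)

def objective(params):
    clf = GradientBoostingClassifier(
        max_depth=int(params["max_depth"]),
        learning_rate=params["learning_rate"],
        random_state=0,
    )
    auc = cross_val_score(clf, X, y, cv=3, scoring="roc_auc").mean()
    return {"loss": -auc, "status": STATUS_OK}  # Hyperopt minimizes the loss

space = {
    "max_depth": hp.quniform("max_depth", 2, 8, 1),
    "learning_rate": hp.loguniform("learning_rate", -5, 0),
}

with mlflow.start_run(run_name="gbm-tuning"):
    best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=20)
    mlflow.log_params(best)  # record the best hyperparameters found
    print("Best params:", best)
```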

Posted 6 days ago

Apply

12.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Position: Capability Lead – Databricks (Director/Enterprise Architect)
Location: Chennai, Hyderabad, Bangalore, or Noida (Hybrid) - no remote option available
Duration: Full Time
Reporting: Practice Head
Budget: 30-65 LPA (depending on level of expertise)
Notice Period: Immediate joiner / currently serving / notice of less than 60 days
Level of Experience: 12+ years
Shift Timings: 2 pm - 11 pm IST, overlapping with the EST time zone

Job Summary: As part of the leadership team for the Data business, the role will be responsible for building and growing the Databricks capability within the organization. The role entails driving technical strategy, innovation, and solution delivery on the Databricks Unified Data Analytics platform. This leader will work closely with clients, delivery teams, technology partners, and internal stakeholders to define and deliver scalable, high-performance solutions using Databricks.

Areas of Responsibility

1. Offering and Capability Development
- Design and enhance Databricks-based solutions, accelerators, and reusable frameworks
- Define architectural patterns and best practices for Lakehouse implementations
- Collaborate with Databricks alliance teams to grow partnership and co-sell opportunities

2. Technical Leadership
- Provide architectural oversight for Databricks engagements, including Delta Lake, Unity Catalog, MLflow, and Structured Streaming
- Lead solution architecture and design in proposals, RFPs, and strategic pursuits
- Establish technical governance and conduct reviews to ensure standards compliance
- Act as a technical escalation point for complex use cases in big data and machine learning

3. Delivery Oversight
- Guide delivery teams through implementation best practices, optimization, and troubleshooting
- Ensure high-quality execution across Databricks programs with a focus on performance, scalability, and governance
- Drive consistent use of CI/CD, automation, and monitoring for Databricks workloads

4. Talent Development
- Build a specialized Databricks talent pool through recruitment, training, and mentoring
- Define certification and career paths aligned with Databricks and related ecosystem tools
- Lead internal community-of-practice sessions and promote knowledge sharing

5. Business Development Support
- Partner with sales and pre-sales to position Databricks-based analytics and AI/ML solutions
- Identify new use cases and business opportunities using the Databricks platform
- Participate in client discussions, workshops, and architecture review boards

6. Thought Leadership and Innovation
- Develop whitepapers, blogs, and PoVs showcasing advanced analytics and AI/ML use cases on Databricks
- Stay abreast of the Databricks roadmap, product features, and industry developments
- Drive innovative solutioning using Databricks with modern data stack components

Job Requirements:
- 12–15 years of experience in Data Engineering, Analytics, or AI/ML, with 3–5 years of focused experience on Databricks
- Proficiency in architectural best practices in the cloud around user management, data privacy, data security, performance, and other non-functional requirements
- Proven experience delivering large-scale data and AI/ML workloads on Databricks
- Deep knowledge of Spark, Delta Lake, Python/Scala, SQL, and data pipeline orchestration
- Experience with MLOps, Feature Store, Unity Catalog, and model lifecycle management
- Certification preferred (e.g., Lakehouse Fundamentals, Data Engineer Associate/Professional)
- Experience integrating Databricks with cloud platforms (Azure, AWS, or GCP) and BI tools
- Strong background in solution architecture, presales, and client advisory
- Excellent communication, stakeholder engagement, and leadership skills
- Exposure to data governance, security, compliance, and cost optimization in cloud analytics

About Mastech InfoTrellis: Mastech InfoTrellis is the Data and Analytics unit of Mastech Digital. At Mastech InfoTrellis, we have built intelligent data modernization practices and solutions to help companies harness the true potential of their data. Our expertise lies in providing timely insights from your data so you can make better decisions, faster. With our proven strategies and cutting-edge technologies, we foster intelligent decision-making, increase operational efficiency, and drive substantial business growth. With an unwavering commitment to building a better future, we are driven by the purpose of transforming businesses through data-powered innovation. (Who We Are | Mastech InfoTrellis)

Mastech Digital is an Equal Opportunity Employer - all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status.

Posted 6 days ago

Apply

5.0 years

8 - 10 Lacs

Hyderābād

On-site

Job title : Senior Analyst Hiring Manager : Team Lead Commercial Analytics Location : Hyderabad % of travel expected : Travel required as per business need, if any Job type : Permanent and Full time About the job Our Team: Sanofi Global Hub (SGH) is an internal Sanofi resource organization based in India and is setup to centralize processes and activities to support Specialty Care, Vaccines, General Medicines, CHC, CMO, and R&D, Data & Digital functions. SGH strives to be a strategic and functional partner for tactical deliveries to Medical, HEVA, and Commercial organizations in Sanofi, Globally. Main responsibilities: The overall purpose and main responsibilities are listed below: At our Sanofi we are leveraging analytics and technology, on behalf of patients around the world. We are seeking those who have a passion for using data, analytics, and insights to drive decision making that will allow us to tackle some of the world’s greatest health threats. Within our commercial Insights, Analytics, and Data organization we are transforming to better power decision-making across our end-to-end commercialization process, from business development to late lifecycle management. Deliverables support planning and decision making across multiple functional areas such as finance, manufacturing, product development and commercial. In addition to ensuring high-quality deliverables, our team drives synergies across the franchise, fosters innovation and best practices, and creates solutions to bring speed, scale and shareability to our planning processes. As we endeavour, we are seeking a dynamic talent for the role of “ Senior Analyst ” We are looking for a team member to support our analytics team based out of US. Robust analytics is a priority for our businesses, as the product potential has major implications to a wide range of disciplines. It is essential to have someone who understands and aspires to implement innovative analytics techniques to drive our insights generation. People: Maintain effective relationship with the end stakeholders within the allocated GBU and tasks – with an end objective to develop report and analysis as per requirement Collaborate with global stakeholders for project planning and setting up the timelines and maintaining budget Performance indicators: Feedback from (end stakeholders) on overall satisfaction Performance: Ability to translate business question to analytical requirement and work on it to develop reports/decks with minimum supervision. Experience working on patient analytics report and dataset such as LAAD and data from Speciality distributor,Speciality Pharma, and patient hub Will assist in managing business rules, definition and KPIs for reporting and insight He/she will ensure on time and accurate delivery of all analytics and dashboard requirement by collaborating with relevant stakeholders He/she will ensure dashboards and metrics are maintained as per requirements Responsible for access management of all trackers (Smartsheet, Excel, other Software) and Dashboard Ensuring data consistency across all dashboards and analytics requirements Pro-actively identifying analytical requirements Building advance tools, automatization and/or improvement processes for analytical and other needs Collaborates with Digital to enhance data access across various sources, develop tools and process to constantly improve quality and productivity. 
Performance indicators: Adherence to timeline, quality target Process: Support delivery of projects in terms of resourcing, coordination, quality, timeliness, efficiency, and high technical standards for deliveries made by the medical writing group, including scientific documents and clinical/medical reports Contribute to overall quality enhancement by ensuring high scientific standards for the output produced by the medical writing group; and Secure adherence to compliance procedures and internal/operational risk controls in accordance with all applicable standards Use latest tools/technologies/methodologies and partner with internal teams to continuously improve data quality and availability by building business processes that support global standardization Ability to work cross-functionally, gather requirements, analyse data, and generate insights and reports that can be used by the GBU Performance indicators: Feedback from stakeholders on satisfaction with deliverables Stakeholder: Work closely with global teams and/ external vendors to ensure the end-to-end effective project delivery of the designated publication/medical education deliverables Work collaboratively with the stakeholder teams to prioritize work and deliver on time-sensitive requests Performance indicators: Feedback from stakeholders on satisfaction with deliverables About you Experience: 5+ years relevant work experience with solid understanding of principles, standards, and best practices of Dashboard development ,Reporting, Insight Generation and story telling . In-depth knowledge of Rare disease and common databases like IQVIA, APLD, LAAD, Speciality Pharma and Distributor, Claims data etc. Other highly relevant experiences include: HCP and account valuation, segmentation, field promotional activities KPIs Soft skills : Strong learning agility; Ability to manage ambiguous environments, and to adapt to changing needs of the business; Good interpersonal and communication skills; strong presentation skills a must; Team player who is curious, dynamic, result oriented and can work collaboratively, and proactively; Ability to think strategically in an ambiguous environment; Ability to operate effectively in an international matrix environment, with ability to work across time zones; Demonstrated leadership and management in driving innovation and automation leveraging advanced statistical and analytical techniques Technical skills : Expert in Relational database technologies and concepts Strong project management abilities; capable of prioritizing and handling multiple projects simultaneously Working experience of using analytical tools like PowerBI, SQL, Snowflake, Smartsheet, advanced excel (including VBA),PPT etc Experience of developing and managing dashboards and reports Excellent planning, design, project management and documentation skills Excellent management of customer expectations, listening, and multi-tasking skills. Ability to take initiative, follow through, and meet deadlines as necessary while maintaining the quality Proficiency of programming languages SQL, SAS mandatory and Python, R, VB good to have Strong exp erience using analytical platforms (e.g., Databricks, IICS, Snowflake) Exp erience with pharmaceutical data sources and CRM data systems (e.g. IQVIA, Symphony, Claims data, LAAD, Speciality Pharmacy and Distributor data) Exp erience of using analytical tools like Power BI / Qliksense, Tableau, Alteryx etc; Expert knowledge of Excel ,PowerPoint . P a plus. 
Experience developing and managing dashboards and reports. Project management abilities; capable of prioritizing and handling multiple projects simultaneously. An aptitude for problem solving and strategic thinking. Ability to synthesize complex information into clear and actionable insights. Proven ability to work effectively with stakeholders at all levels and across diverse functions. Solid understanding of pharmaceutical development, manufacturing, supply chain, and marketing functions. Demonstrated leadership and management in driving innovation and automation, leveraging advanced statistical and analytical techniques.
Education: Bachelor's or Master's degree in areas such as Information Science, Operations, Management, Statistics, Decision Sciences, Engineering, Life Sciences, or Business Analytics, or a related field (e.g., PhD/MBA/Master's).
Languages: Excellent knowledge of English and strong communication skills, written and spoken.
Other requirement: This role is a sole contributor focused on the development, delivery, and communication of insights.
Pursue Progress, Discover Extraordinary. Better is out there. Better medications, better outcomes, better science. But progress doesn't happen without people – people from different backgrounds, in different locations, doing different roles, all united by one thing: a desire to make miracles happen. So, let's be those people. At Sanofi, we provide equal opportunities to all regardless of race, colour, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, or gender identity. Watch our ALL IN video and check out our Diversity, Equity and Inclusion actions at sanofi.com!
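Illustrative of the SQL/PySpark skill set this role lists, the short sketch below derives a new-patient-starts KPI from a LAAD-style patient-level claims table. It is a hypothetical example only: the table name (patient_claims) and columns (patient_id, brand, fill_date) are invented, not Sanofi's actual schema.

# Hypothetical sketch: a "new patient starts" KPI from a patient-level claims table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("nps-kpi").getOrCreate()

claims = spark.table("patient_claims")  # assumed schema: patient_id, brand, fill_date

# Each patient's first fill per brand marks the therapy start.
first_fill = (
    claims.groupBy("patient_id", "brand")
    .agg(F.min("fill_date").alias("first_fill_date"))
)

# New patient starts per brand per month -- a common input to commercial dashboards.
nps = (
    first_fill
    .withColumn("start_month", F.date_trunc("month", "first_fill_date"))
    .groupBy("brand", "start_month")
    .agg(F.countDistinct("patient_id").alias("new_patient_starts"))
    .orderBy("brand", "start_month")
)
nps.show()

A table like this would typically then be published to Power BI as the dashboard's source.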

Posted 6 days ago

Apply

15.0 years

3 - 8 Lacs

Chennai

On-site

Company Description NielsenIQ is a consumer intelligence company that delivers the Full View™, the world’s most complete and clear understanding of consumer buying behavior that reveals new pathways to growth. Since 1923, NIQ has moved measurement forward for industries and economies across the globe. We are putting the brightest and most dedicated minds together to accelerate progress. Our diversity brings out the best in each other so we can leave a lasting legacy on the work that we do and the people that we do it with. NielsenIQ offers a range of products and services that leverage Machine Learning and Artificial Intelligence to provide insights into consumer behavior and market trends. This position opens the opportunity to apply the latest state of the art in AI/ML and data science to global and key strategic projects. Job Description At NielsenIQ Technology, we are evolving the Discover platform, a unified, global, open data cloud ecosystem. Organizations around the world rely on our data and insights to innovate and grow. As a Platform Architect, you will play a crucial role in defining the architecture of our platforms to realize the company’s business strategy and objectives. You will collaborate closely with colleagues in Architecture and Engineering to take architecture designs from concept to delivery. As you gain knowledge of our platform, the scope of your role will expand to include an end-to-end focus. Key Responsibilities: Architect new products using NIQ’s core platforms. Assess the viability of new technologies as alternatives to existing platform selections. Drive innovations through proof of concepts and support technology migration from planning to production. Produce high-level approaches for platform components to guide component architects. Create, maintain, and promote reference architectures for key areas of the platform. Collaborate with Product Managers, Engineering Managers, Tech Leads, and Site Reliability Engineers (SREs) to govern the architecture design. Create High-Level Architectures (HLAs) and Architecture Decision Records (ADRs) for new requirements. Maintain architecture designs and diagrams. Provide architecture reviews for new intakes. Qualifications 15+ years of experience, including a strong engineering background with 5+ years in architecture/design roles. Hands-on experience building scalable enterprise platforms. Proficiency with SQL. Experience with relational databases such as PostgreSQL, document-oriented databases such as MongoDB, and search engines such as Elasticsearch. Proficiency in Java, Python, and/or JavaScript. Familiarity with common frameworks like Spring, OpenAPI, PySpark, React, Angular, etc. Background in TypeScript and Node.js a plus. Bachelor's degree in computer science or a related field (required; master’s preferred). Strong knowledge in Azure and GCP public cloud providers desirable. Good knowledge of Azure Cloud technologies, including Azure Databricks, Azure Data Factory, and Azure cloud storage (ADLS/Azure Blob). Experience with Snowflake is a definite plus. Good knowledge of Google Cloud Platform (GCP) services, including BigQuery, Workflows, Kubernetes Engine and Cloud Storage. Good understanding of Containers/Kubernetes and CI/CD. Knowledge of BI tools and analytics features is a plus. Advanced knowledge of data structures, algorithms, and designing for performance, scalability, and availability. Experience in agile software development practices. 
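The qualifications above combine SQL, relational stores such as PostgreSQL, and PySpark. As a minimal sketch of bridging the two, assuming a reachable PostgreSQL instance, with placeholder host, table, and credentials (the JDBC driver coordinates are also an assumption; pin the version you actually use):

# Minimal sketch: reading a PostgreSQL table into Spark over JDBC.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("pg-to-spark")
    .config("spark.jars.packages", "org.postgresql:postgresql:42.7.3")  # assumed version
    .getOrCreate()
)

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://pg-host:5432/analytics")  # placeholder host/db
    .option("dbtable", "public.orders")                          # placeholder table
    .option("user", "reader")
    .option("password", "change-me")
    .option("driver", "org.postgresql.Driver")
    .load()
)
orders.groupBy("status").count().show()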
Additional Information
Our Benefits: Flexible working environment; volunteer time off; LinkedIn Learning; Employee Assistance Program (EAP).
About NIQ: NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights, delivered with advanced analytics through state-of-the-art platforms, NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com.
Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook
Our commitment to Diversity, Equity, and Inclusion: NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion

Posted 6 days ago

Apply

10.0 years

27 Lacs

Chennai

Remote

Location: [Add Location or "Remote/Hybrid"]
Experience: 10+ years
Employment Type: Full-time
Job Overview: We are seeking a highly experienced and technically strong Lead SQL Server DBA to manage and support 24x7 DBA operations across multiple environments, including on-premises and Azure cloud infrastructure. The ideal candidate will have a proven track record in cloud migration, performance tuning, database security, and leading DBA support teams. This is a critical role within a support engagement with scope across the L1.5, L2, and L3 support levels.
Key Responsibilities: Lead and manage a team of 10+ DBAs delivering 24x7 support for SQL Server, Oracle, and MySQL databases across multiple entities. Perform cloud migration activities and manage Azure-based database systems. Ensure high availability, disaster recovery, and backup/restore procedures are in place and optimized. Oversee structured and unstructured database systems, including Databricks and lakehouse architectures. Optimize database performance and security, and ensure compliance with cybersecurity regulations. Review and manage database code quality, enforce best practices, and support development teams. Manage and maintain DevOps database deployment pipelines, using tools like Redgate. Act as technical lead in support engagements, handling escalations and critical issues. Collaborate in onshore/offshore models and Agile environments.
Required Skills & Qualifications: 10+ years of experience in SQL Server administration (2012 and above). 5+ years of cloud migration and support experience, especially on Microsoft Azure. Strong hands-on experience with Databricks, lakehouse concepts, and data warehousing. In-depth knowledge of HA/DR architecture, security optimization, and compliance standards. Proven experience in performance tuning and use of monitoring/troubleshooting tools. Familiar with support scope across the L1.5, L2, and L3 service tiers. Expertise in DevOps practices and experience with manual and automated deployments. Excellent written and verbal communication skills. Ability to lead, organize, and mentor a DBA team in a dynamic, fast-paced setting.
Desirable Qualifications: Experience in the insurance domain. Familiarity with Active Directory and Windows Server environments. Microsoft Certified in relevant database/cloud technologies.
Job Type: Full-time
Pay: Up to ₹2,700,000.00 per year
Benefits: Provident Fund
Work Location: In person
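As a hedged illustration of the backup/restore duties above (not the engagement's actual tooling), a DBA might script an ad hoc SQL Server backup from Python with pyodbc. The server, database, credentials, and backup path are placeholders; note that BACKUP cannot run inside a transaction, hence autocommit:

# Sketch: ad hoc SQL Server backup via pyodbc (names below are invented).
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=sql-host;DATABASE=master;UID=dba_user;PWD=change-me;"
    "TrustServerCertificate=yes",
    autocommit=True,  # BACKUP DATABASE refuses to run inside a transaction
)
cur = conn.cursor()
cur.execute(
    "BACKUP DATABASE [SalesDB] "
    "TO DISK = N'/var/opt/mssql/backup/SalesDB.bak' "
    "WITH CHECKSUM, COMPRESSION, INIT"
)
# Consume the informational result sets BACKUP emits before closing.
while cur.nextset():
    pass
conn.close()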

Posted 6 days ago

Apply

10.0 years

42 - 49 Lacs

Bengaluru

On-site

We are seeking a Senior Manager / Cloud Infrastructure Architect with deep expertise in Azure and/or GCP and a strategic understanding of multi-cloud environments. You will lead the design, implementation, and optimization of cloud platforms that power data-driven, AI/GenAI-enabled enterprises. You will drive engagements focused on platform modernization, infrastructure-as-code, DevSecOps, and security-first architectures, aligning with business goals across Fortune 500 and mid-size clients.
Key Responsibilities:
Cloud Strategy & Architecture: Shape and lead enterprise-grade cloud transformation initiatives across Azure, GCP, and hybrid environments. Advise clients on cloud-native, multi-cloud, and hybrid architectures aligned to performance, scalability, cost, and compliance goals. Architect data platforms, lakehouses, and AI-ready infrastructure leveraging cloud-native services.
AI & Data Infrastructure Enablement: Design and deploy scalable cloud platforms to support Generative AI, LLM workloads, and advanced analytics. Implement data mesh and lakehouse patterns on cloud using services like Azure Synapse, GCP BigQuery, Databricks, Vertex AI, etc.
Required Skills & Experience: 10+ years in cloud, DevOps, or infrastructure roles, with at least 4+ years as a cloud architect or platform engineering leader. Deep knowledge of Azure or GCP services, architecture patterns, and platform ops; multi-cloud experience is a strong plus. Proven experience with Terraform, Terragrunt, CI/CD (Azure DevOps, GitHub Actions, Cloud Build), and Kubernetes. Exposure to AI/ML/GenAI infrastructure needs (GPU setup, MLOps, hybrid clusters, etc.). Familiarity with data platform tools: Azure Synapse, Databricks, BigQuery, Delta Lake, etc. Hands-on with security tools like Vault, Key Vault, and Secrets Manager, and governance via policies/IAM. Excellent communication and stakeholder management skills.
Preferred Qualifications: Certifications: Azure Solutions Architect Expert, GCP Professional Cloud Architect, Terraform Associate. Experience working in AI, Data Science, or Analytics-led organizations or consulting firms. Background in leading engagements in regulated industries (finance, healthcare, retail, etc.).
Key Skills: Landing zone patterns, including data landing zones. Multi-cloud data platforms: Databricks, Snowflake, Dataproc, BigQuery, Azure HDInsight, AWS Redshift, EMR. AI/GenAI platforms: AI Foundry, AWS Bedrock, AWS SageMaker, GCP Vertex AI. Security, scalability, cost efficiency, and cloud-agnostic multi-cloud architecture.
Job Type: Full-time
Pay: ₹4,200,000.00 - ₹4,900,000.00 per year
Schedule: Day shift
Work Location: In person
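One small, hedged illustration of the infrastructure-as-code theme above: Terraform also accepts JSON-syntax configuration (*.tf.json), so a platform team can generate repeatable definitions programmatically. The provider version, resource names, and region below are assumptions, not a client configuration:

# Sketch: emitting a minimal Terraform JSON config for an Azure resource group.
import json

config = {
    "terraform": {
        "required_providers": {
            "azurerm": {"source": "hashicorp/azurerm", "version": "~> 3.0"}  # assumed pin
        }
    },
    "provider": {"azurerm": {"features": {}}},  # azurerm requires an (empty) features block
    "resource": {
        "azurerm_resource_group": {
            "data_platform": {
                "name": "rg-data-platform-dev",  # placeholder
                "location": "centralindia",      # placeholder region
            }
        }
    },
}

with open("main.tf.json", "w") as f:
    json.dump(config, f, indent=2)
# Then run `terraform init` and `terraform plan`, typically from a CI/CD pipeline.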

Posted 6 days ago

Apply

8.0 years

5 - 8 Lacs

Bengaluru

On-site

Job Title: Sr. Data Engineer
About Us: Capco, a Wipro company, is a global technology and management consulting firm. Capco was awarded Consultancy of the Year in the British Bank Awards and has been ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence across 32 cities across the globe, we support 100+ clients in the banking, financial, and energy sectors. We are recognized for our deep transformation execution and delivery.
WHY JOIN CAPCO? You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry – the projects that will transform the financial services industry.
MAKE AN IMPACT: Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.
#BEYOURSELFATWORK: Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.
CAREER ADVANCEMENT: With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.
DIVERSITY & INCLUSION: We believe that diversity of people and perspective gives us a competitive advantage.
JOB SUMMARY:
Position: Sr Consultant
Location: Capco locations (Bengaluru/Chennai/Hyderabad/Pune/Mumbai/Gurugram)
Band: M3/M4 (8 to 14 years)
Role Description: Senior Consultant - Data Engineer
Responsibilities: Design, build, and optimise data pipelines and ETL processes in Azure Databricks, ensuring high performance, reliability, and scalability. Implement best practices for data ingestion, transformation, and cleansing to ensure data quality and integrity. Work within the client's best-practice guidelines as set out by the Data Engineering Lead. Work with data modellers and testers to ensure pipelines are implemented correctly. Collaborate as part of a cross-functional team to understand business requirements and translate them into technical solutions.
Role Requirements: Strong data engineer with experience in financial services. Knowledge of, and experience building, data pipelines in Azure Databricks. Demonstrate a continual desire to implement "strategic" or "optimal" solutions and, where possible, avoid workarounds or short-term tactical solutions. Work within an Agile team.
Experience/Skillset: 8+ years' experience in data engineering. Good skills in SQL, Python and PySpark. Good knowledge of Azure Databricks (understanding of Delta tables, Apache Spark, Unity Catalog); a minimal pipeline sketch follows this listing. Experience writing, optimizing, and analyzing SQL and PySpark code, with a robust capability to interpret complex data requirements and architect solutions. Good knowledge of the SDLC. Familiar with Agile/Scrum ways of working. Strong verbal and written communication skills. Ability to manage multiple priorities and deliver to tight deadlines.
We offer:
• A work culture focused on innovation and creating lasting value for our clients and employees
• Ongoing learning opportunities to help you acquire new skills or deepen existing expertise
• A flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients
• A diverse, inclusive, meritocratic culture
#LI-Hybrid
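As referenced in the listing, here is a minimal sketch of the kind of Azure Databricks pipeline step this role describes: ingest raw records, apply basic cleansing, and upsert into a Delta table. Paths, table names, and columns are invented, not client specifics, and the target table is assumed to already exist:

# Sketch: batch cleanse and Delta MERGE upsert (illustrative names only).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` already exists

raw = (
    spark.read.format("json")
    .load("/mnt/landing/trades/")           # hypothetical landing zone
    .dropDuplicates(["trade_id"])           # basic deduplication
    .filter(F.col("notional").isNotNull())  # simple data-quality gate
)

target = DeltaTable.forName(spark, "finance.trades")  # assumed existing Delta table
(
    target.alias("t")
    .merge(raw.alias("s"), "t.trade_id = s.trade_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

MERGE keeps the load idempotent, which is why it is a common pattern for Databricks ingestion over plain appends.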

Posted 6 days ago

Apply

6.0 years

4 - 10 Lacs

Bengaluru

On-site

Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.
The Role
Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips. As a Data Engineer at Kyndryl, you'll be at the forefront of the data revolution, crafting and shaping data platforms that power our organization's success. This role is not just about code and databases; it's about transforming raw data into actionable insights that drive strategic decisions and innovation. In this role, you'll be engineering the backbone of our data infrastructure, ensuring the availability of pristine, refined data sets. With a well-defined methodology, critical thinking, and a rich blend of domain expertise, consulting finesse, and software engineering prowess, you'll be the mastermind of data transformation. Your journey begins by understanding project objectives and requirements from a business perspective, converting this knowledge into a data puzzle. You'll be delving into the depths of information to uncover quality issues and initial insights, setting the stage for data excellence. But it doesn't stop there. You'll be the architect of data pipelines, using your expertise to cleanse, normalize, and transform raw data into the final dataset—a true data alchemist. Armed with a keen eye for detail, you'll scrutinize data solutions, ensuring they align with business and technical requirements. Your work isn't just a means to an end; it's the foundation upon which data-driven decisions are made – and your lifecycle management expertise will ensure our data remains fresh and impactful. So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth.
Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.
Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.
Required Skills and Experience: 6 years' experience as a Data Engineer. 2 to 3 years' relevant experience with the ELK stack. Expertise in data mining, data storage and extract-transform-load (ETL) processes. Experience in data pipeline development and tooling, e.g., Glue, Databricks, Synapse, or Dataproc. Experience with both relational and NoSQL databases, e.g., PostgreSQL, DB2, MongoDB. Excellent problem-solving, analytical, and critical thinking skills. Ability to manage multiple projects simultaneously, while maintaining a high level of attention to detail. Communication skills: must be able to communicate with both technical and non-technical colleagues, to derive technical requirements from business needs and problems.
Preferred Skills and Experience: Experience working as a Data Engineer and/or in cloud modernization. Experience in data modelling, to create a conceptual model of how data is connected and how it will be used in business processes. Professional certification, e.g., Open Certified Technical Specialist with Data Engineering Specialization. Cloud platform certification, e.g., AWS Certified Data Analytics – Specialty, Elastic Certified Engineer, Google Cloud Professional Data Engineer, or Microsoft Certified: Azure Data Engineer Associate. Understanding of social coding and Integrated Development Environments, e.g., GitHub and Visual Studio. Degree in a scientific discipline, such as Computer Science, Software Engineering, or Information Technology.
Being You: Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.
What You Can Expect: With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.
Get Referred! If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
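A hedged sketch of the "cleanse, normalize, transform" work this role describes, keeping only the latest record per key. The source path and schema (customer_id, email, updated_at) are invented for illustration:

# Sketch: normalize fields and deduplicate to the most recent record per customer.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

raw = spark.read.parquet("/data/raw/customers/")  # hypothetical source

latest = Window.partitionBy("customer_id").orderBy(F.col("updated_at").desc())

clean = (
    raw
    .withColumn("email", F.lower(F.trim("email")))  # normalize casing/whitespace
    .withColumn("rn", F.row_number().over(latest))  # rank records by recency
    .filter(F.col("rn") == 1)                       # keep only the latest per key
    .drop("rn")
)
clean.write.mode("overwrite").parquet("/data/refined/customers/")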

Posted 6 days ago

Apply

5.0 - 10.0 years

0 Lacs

Bengaluru

On-site

Date: 14 Jul 2025
Location: Bengaluru, KA, IN, 560038
Company: Hero MotoCorp
Function: Digital & Information Technologies
Pay Band: E4 to M2
Role: A PySpark, Glue, or Databricks developer proficient in cloud technologies who can become a member of the HMCL Digital Connected core tech team. The role demands excellent ETL coding skills with strong analytical and problem-solving thinking. As a core member of the tech team, you will be involved in multiple product and platform developments that strengthen HMCL's business presence in the connected space. This platform will empower and enable HMCL's R&D and data scientists to build ML models that can help build either a revenue stream or a feature offering. We are on a mission to build a platform that can handle high throughput, with an rpm greater than 3 lakh (300,000) – yes, you read that right – where data streaming happens in near real time by leveraging modern-age technologies and tools.
A purpose-driven role for you: You will help in the technical design of multiple products for connected systems, which includes designing architecture blueprints, high-level diagrams, and low-level diagrams. You will work with an embedded team in managing embedded systems development, specifically on connectivity and data management on the device and cloud side. The candidate needs to manage deliverables from multiple vendors and large engineering teams; this includes timely delivery, negotiations, and preparing techno-commercials.
Architecture: Architect end-to-end solutions for our connected platforms with expertise in the areas of edge computing, IoT, and mobile services. Apply your specialized know-how to empower software applications with maximum reliability and performance. You will work closely with the product teams for Vehicle IoT and Charging and ensure the products are built to scale for domestic and international business. Apart from the engineering team, the candidate needs to manage stakeholders from different business functions, including CXOs. The candidate will work closely with the vehicle and charging integration team and define protocols that can scale and cater to the future needs of the global product team.
A day in the life: As described in the role above, with a focus on hands-on ETL development for the connected platform.
Academic Qualification & Experience
Academic Qualification: B.Tech/M.Tech
Relevant Experience: 5-10 years of relevant experience in a software engineering role and at least 3 years of experience in building scalable connected platforms.
Technical Skills/Knowledge: 5-10 years of hands-on ETL experience in Python, Spark, Databricks, or PySpark, or any other ETL tools and technologies. Good to have knowledge of Java and Scala; strong in PySpark. Good to have conceptual knowledge of batch and real-time data processing. Good to have problem-solving and analytical skills, and the ability to play an individual contributor role.
Must have handled data processing at scale and possess knowledge of big data frameworks. Must have delivered one large data platform implementation, on-premises or on-cloud (any hyperscaler). Knowledge of CI/CD pipelines and concepts. Knowledge of IoT and the automotive domain. Good to have knowledge of SQL/NoSQL databases. Knowledge of open-source tools and technologies. Most importantly, must have the design skills to set up a large data platform. End-to-end understanding of the SDLC: analyze, design, develop, and test scalable, reliable solutions. Good to have knowledge of streaming, and of stateful and stateless data processing.
Behavioural Skills: People management, conflict management, drive for results, and passion at work.
What will it be like to work for Hero: As the world's largest manufacturer of motorcycles and scooters for the last 22 years, Hero is where you will get to work with the brightest innovators, passionate about being the best in what they do. You will become a part of India's proudest legacy, a brand that is celebrated by 110 million Indians and is now taking over the world with its manufacturing superpower. If you are someone who dreams big and goes after their dreams with absolute conviction, Hero is your place to be. At Hero, we are building a cutting-edge future of mobility, pushing the frontiers of innovation and aiming for the very best. Choose to be with the best, choose to be your best.
About Hero: Headquartered in New Delhi (India), Hero MotoCorp has been the world's largest manufacturer of motorcycles and scooters for 22 consecutive years. We are at the forefront of developing modern, technologically superior and eco-friendly mobility solutions for millions of customers around the world. Hero MotoCorp has rapidly transformed into a true multinational organization with a presence in 47 countries across Asia, Africa, Latin America and the Middle East. We have achieved the coveted milestone of 110 million cumulative production and sales since inception. Aligned with its Vision "Be the Future of Mobility", Hero MotoCorp plans to achieve its next 100 million sales by 2030. We have a globally benchmarked manufacturing and Research & Development (R&D) ecosystem spread across global geographies. Its R&D facilities are located in India and Germany: the Centre of Innovation and Technology (CIT) at Jaipur, India, and the Tech Centre Germany (TCG), near Munich. Hero MotoCorp's eight 'green' manufacturing facilities are spread across India (6), Colombia (1) and Bangladesh (1). Hero MotoCorp is the pre-eminent leader in the Indian two-wheeler market. It is the only motorcycle manufacturing company listed in the Dow Jones Sustainability Index. In 2022, Hero MotoCorp launched a separate brand for emerging mobility solutions, including Electric Vehicles (EV): VIDA, Powered by Hero. VIDA has commenced sales of VIDA V1, its first EV, in India and plans to launch the product in global markets. We are one of the largest global corporate promoters of multiple sporting disciplines. Hero is globally associated with golf, football, field hockey, cricket and motorsports. Hero MotoSports Team Rally is one of India's flag-bearers in global rally racing. The iconic golfer Tiger Woods is Hero MotoCorp's Global Corporate Partner. Read more about us. Be with the best. Be your best. Catch up on all our latest openings.
Recruitment assessments: We at Hero are an equal opportunity employer, committed to a policy of treating all employees and job applicants equally.
Some of our roles use assessments to help us understand how suitable you are for the role you've applied to. If you are invited to take an assessment, this is great news. It means your application has progressed to an important stage of our recruitment process.
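For a sense of the near-real-time ingestion this role describes, here is a minimal PySpark Structured Streaming sketch. The Kafka brokers, topic, payload schema, and Delta paths are assumptions, and the Spark-Kafka connector package must be on the cluster:

# Sketch: streaming vehicle telemetry from Kafka into per-minute Delta aggregates.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("vehicle-telemetry").getOrCreate()

schema = StructType([                     # assumed payload shape
    StructField("vehicle_id", StringType()),
    StructField("speed_kmph", DoubleType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder brokers
    .option("subscribe", "vehicle-telemetry")          # placeholder topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Per-minute average speed per vehicle; the watermark bounds late data.
agg = (
    events.withWatermark("event_time", "5 minutes")
    .groupBy(F.window("event_time", "1 minute"), "vehicle_id")
    .agg(F.avg("speed_kmph").alias("avg_speed"))
)

(
    agg.writeStream.format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/chk/telemetry")    # placeholder checkpoint path
    .start("/delta/telemetry_minutely")                # placeholder sink path
)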

Posted 6 days ago

Apply

3.0 years

0 Lacs

Greater Bengaluru Area

On-site

We are looking for a Data Scientist to help us gain useful insights out of raw data. Data Scientist responsibilities include working with the data science team, planning projects, and building analytics models. You should have a strong problem-solving ability and a knack for statistical analysis. If you're also able to align our data products with our business goals, we'd like to meet you. Your ultimate goal will be to help improve our products and business decisions by making the most out of our data.
Minimum Qualification
● Bachelor's or Master's degree in Computer Science, Operations Research, Econometrics, Statistics or a related technical field
● 3+ years of experience solving analytical problems using quantitative approaches
Technical Skills - Must have:
● Experience with Databricks and MLflow.
● Experience deploying machine learning models in production environments.
● Expertise in data structures, algorithms, core object-oriented programming (OOP) concepts and software engineering principles.
● Ability to develop and implement strategies for monitoring model performance, accuracy, and overall health.
● Proficiency in developing and maintaining RESTful APIs using Python frameworks (e.g., Flask, Django).
● Strong ability to write database queries (SQL and NoSQL).
● Excellent problem-solving and troubleshooting skills in Python.
● Familiarity with Python libraries such as PySpark, pandas, scikit-learn, SQLAlchemy, and Requests.
Good to have:
● Expertise in Kafka streaming and batch processing
● Familiarity with version control systems (e.g., Git) and CI/CD practices
● Experience with Python multiprocessing and worker/queue systems
● At least a high-level understanding of event-driven or asynchronous programming
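Matching the must-have Databricks and MLflow experience above, here is a minimal MLflow tracking sketch; the experiment name and toy dataset are illustrative only:

# Sketch: train a model and log params, metrics, and the artifact with MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-regressor")  # placeholder experiment name
with mlflow.start_run():
    model = RandomForestRegressor(n_estimators=200, random_state=42)
    model.fit(X_tr, y_tr)
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("r2", r2_score(y_te, model.predict(X_te)))
    mlflow.sklearn.log_model(model, "model")  # logged artifact, loadable for serving

On Databricks, the same run appears in the workspace's experiment UI, which is what makes MLflow a natural bridge from experimentation to the production deployment this role calls for.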

Posted 6 days ago

Apply

3.0 years

5 - 40 Lacs

Gurugram, Haryana, India

On-site

Location: Bangalore, Pune, Chennai, Kolkata, Gurugram
Experience: 5-15 years
Work Mode: Hybrid
Mandatory Skills: Snowflake / Azure Data Factory / PySpark / Databricks / Snowpipe
Good to Have: SnowPro Certification
Primary Roles And Responsibilities: Developing modern data warehouse solutions using Snowflake, Databricks and ADF. Ability to provide solutions that are forward-thinking in the data engineering and analytics space. Collaborate with DW/BI leads to understand new ETL pipeline development requirements. Triage issues to find gaps in existing pipelines and fix them. Work with the business to understand the needs of the reporting layer and develop a data model to fulfil reporting needs. Help junior team members resolve issues and technical challenges. Drive technical discussions with client architects and team members. Orchestrate the data pipelines in the scheduler via Airflow (a minimal DAG sketch follows this listing).
Skills And Qualifications: Bachelor's and/or master's degree in computer science or equivalent experience. Must have a total of 6+ years of IT experience and 3+ years' experience in data warehouse/ETL projects. Expertise in Snowflake security, Snowflake SQL and designing/implementing other Snowflake objects. Hands-on experience with Snowflake utilities, SnowSQL, Snowpipe, Snowsight and Snowflake connectors. Deep understanding of star and snowflake dimensional modeling. Strong knowledge of data management principles. Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture. Should have hands-on experience in SQL and Spark (PySpark). Experience in building ETL / data warehouse transformation processes. Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j). Experience working with structured and unstructured data, including imaging and geospatial data. Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git. Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting and query optimization. Databricks Certified Data Engineer Associate/Professional certification (desirable). Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects. Should have experience working in Agile methodology. Strong verbal and written communication skills. Strong analytical and problem-solving skills with a high attention to detail.
Skills: pipelines, PySpark, PL/SQL, Snowflake, reporting, Snowpipe, Databricks, Spark, Azure Data Factory, data warehouse, Unix shell scripting, SnowSQL, troubleshooting, RDBMS, SQL, data management principles, query optimization, NoSQL databases, CircleCI, Git, Terraform, Snowflake utilities, performance tuning, ETL, Azure
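As noted in the listing, here is a minimal Airflow DAG sketch for orchestrating such a load. Task bodies and IDs are placeholders, and the schedule argument assumes Airflow 2.4+ (older versions use schedule_interval):

# Sketch: a two-task daily warehouse-load DAG (placeholder task bodies).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def load_to_snowflake(**_):
    # In practice this might trigger Snowpipe, SnowSQL, or a Databricks job.
    print("load step goes here")

with DAG(
    dag_id="dw_daily_load",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",   # use schedule_interval on Airflow < 2.4
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=lambda: print("extract"))
    load = PythonOperator(task_id="load", python_callable=load_to_snowflake)
    extract >> load  # extract runs before load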

Posted 6 days ago

Apply

3.0 years

5 - 40 Lacs

Chennai, Tamil Nadu, India

On-site

Location: Bangalore, Pune, Chennai, Kolkata, Gurugram
Experience: 5-15 years
Work Mode: Hybrid
Mandatory Skills: Snowflake / Azure Data Factory / PySpark / Databricks / Snowpipe
Good to Have: SnowPro Certification
Primary Roles And Responsibilities: Developing modern data warehouse solutions using Snowflake, Databricks and ADF. Ability to provide solutions that are forward-thinking in the data engineering and analytics space. Collaborate with DW/BI leads to understand new ETL pipeline development requirements. Triage issues to find gaps in existing pipelines and fix them. Work with the business to understand the needs of the reporting layer and develop a data model to fulfil reporting needs. Help junior team members resolve issues and technical challenges. Drive technical discussions with client architects and team members. Orchestrate the data pipelines in the scheduler via Airflow (a Snowpipe-style load sketch follows this listing).
Skills And Qualifications: Bachelor's and/or master's degree in computer science or equivalent experience. Must have a total of 6+ years of IT experience and 3+ years' experience in data warehouse/ETL projects. Expertise in Snowflake security, Snowflake SQL and designing/implementing other Snowflake objects. Hands-on experience with Snowflake utilities, SnowSQL, Snowpipe, Snowsight and Snowflake connectors. Deep understanding of star and snowflake dimensional modeling. Strong knowledge of data management principles. Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture. Should have hands-on experience in SQL and Spark (PySpark). Experience in building ETL / data warehouse transformation processes. Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j). Experience working with structured and unstructured data, including imaging and geospatial data. Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git. Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting and query optimization. Databricks Certified Data Engineer Associate/Professional certification (desirable). Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects. Should have experience working in Agile methodology. Strong verbal and written communication skills. Strong analytical and problem-solving skills with a high attention to detail.
Skills: pipelines, PySpark, PL/SQL, Snowflake, reporting, Snowpipe, Databricks, Spark, Azure Data Factory, data warehouse, Unix shell scripting, SnowSQL, troubleshooting, RDBMS, SQL, data management principles, query optimization, NoSQL databases, CircleCI, Git, Terraform, Snowflake utilities, performance tuning, ETL, Azure
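Complementing the listing above, here is a hedged sketch of the Snowpipe-style COPY INTO pattern it names, using the snowflake-connector-python package; the account locator, credentials, stage, and table are placeholders:

# Sketch: running the COPY INTO statement a Snowpipe definition would wrap.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",  # placeholder account locator
    user="etl_user",
    password="change-me",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)
try:
    cur = conn.cursor()
    # Running COPY INTO ad hoc like this is useful for backfills and testing;
    # a Snowpipe automates the same statement on file arrival.
    cur.execute("""
        COPY INTO RAW.ORDERS
        FROM @ORDERS_STAGE
        FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
    print(cur.fetchall())  # per-file load results
finally:
    conn.close()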

Posted 6 days ago

Apply