
5704 Databricks Jobs - Page 29

JobPe aggregates results for easy application access, but you apply directly on the original job portal.

6.0 - 10.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Staff Technical Product Manager

Are you excited about the opportunity to lead a team within an industry leader in Energy Technology? Are you passionate about improving capabilities, efficiency and performance? Join our Digital Technology Team! As a Staff Technical Product Manager, you will operate in lock-step with product management to create a clear strategic direction for build needs and convey that vision to the service's scrum team. You will direct the team with a clear and descriptive set of requirements captured as stories, and partner with the team to determine what can be delivered by balancing the need for new features, defects, and technical debt.

Partner with the best
As a Staff Technical Product Manager, you will bring a strong background in business analysis, team leadership, data architecture, and hands-on development. The ideal candidate will excel in creating roadmaps, planning with prioritization, resource allocation, key item delivery, and seamless integration of perspectives from various stakeholders, including Product Managers, Technical Anchors, Service Owners, and Developers. We are looking for a results-oriented leader capable of building and executing an aligned strategy, leading the data team and cross-functional teams to meet delivery timelines.

As a Staff Technical Product Manager, you will be responsible for:
· Demonstrating wide and deep knowledge in data engineering, data architecture, and data science, with the ability to guide, lead, and work with the team to drive to the right solution.
· Engaging frequently (80%) with the development team; facilitating discussions, providing clarification, story acceptance and refinement, testing and validation; contributing to design activities and decisions; working within waterfall and Agile Scrum frameworks.
· Owning and managing the backlog; continuously ordering and prioritizing to ensure that 1-2 sprints/iterations of backlog are always ready.
· Collaborating with UX on design decisions, demonstrating deep understanding of the technology stack and its impact on the final product.
· Conducting customer and stakeholder interviews and elaborating on personas.
· Demonstrating expert-level skill in problem decomposition and the ability to navigate through ambiguity.
· Partnering with the Service Owner to ensure a healthy development process and clear tracking metrics that form a standard and trustworthy way of providing customer support.
· Designing and implementing scalable and robust data pipelines to collect, process, and store data from various sources.
· Developing and maintaining data warehouse and ETL (Extract, Transform, Load) processes for data integration and transformation.
· Optimizing and tuning the performance of data systems to ensure efficient data processing and analysis.
· Collaborating with product managers and analysts to understand data requirements and implement solutions for data modeling and analysis.
· Identifying and resolving data quality issues, ensuring data accuracy, consistency, and completeness.
· Implementing and maintaining data governance and security measures to protect sensitive data.
· Monitoring and troubleshooting data infrastructure, performing root cause analysis, and implementing necessary fixes.

Fuel your passion
To be successful in this role, you will:
· Have a Bachelor's or higher degree in Computer Science, Information Systems, or a related field.
· Have a minimum of 6-10 years of proven experience as a Data Engineer or in a similar role, working with large-scale data processing and storage systems.
· Have proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, or Oracle).
· Have extensive knowledge of working with SAP systems, T-codes, data pipelines in SAP, and Databricks-related technologies.
· Have experience building complex jobs for SCD-type mappings using ETL tools like PySpark, Talend, Informatica, etc. (see the sketch after this listing).
· Have experience with data visualization and reporting tools (e.g., Tableau, Power BI).
· Have strong problem-solving and analytical skills, with the ability to handle complex data challenges.
· Have excellent communication and collaboration skills to work effectively in a team environment.
· Have experience in data modeling, data warehousing, and ETL principles.
· Have familiarity with cloud platforms like AWS, Azure, or GCP, and their data services (e.g., S3, Redshift, BigQuery).
· Have advanced knowledge of distributed computing and parallel processing.
· Experience with real-time data processing and streaming technologies (e.g., Apache Kafka, Apache Flink).
· Knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes).
· Certification in relevant technologies or data engineering disciplines.
· Working knowledge of Databricks, Dremio, and SAP is highly preferred.

Work in a way that works for you
We recognize that everyone is different and that the way in which people want to work and deliver at their best is different for everyone too. In this role, we can offer the following flexible working patterns (where applicable): working flexible hours - flexing the times when you work in the day to help you fit everything in and work when you are the most productive.

Working with us
Our people are at the heart of what we do at Baker Hughes. We know we are better when all our people are developed, engaged, and able to bring their whole authentic selves to work. We invest in the health and well-being of our workforce, train and reward talent, and develop leaders at all levels to bring out the best in each other.

Working for you
Our inventions have revolutionized energy for over a century. But to keep going forward tomorrow, we know we must push the boundaries today. We prioritize rewarding those who embrace challenge with a package that reflects how much we value their input. Join us, and you can expect contemporary work-life balance policies and wellbeing activities.

About Us
With operations in over 120 countries, we provide better solutions for our customers and richer opportunities for our people. As a leading partner to the energy industry, we're committed to achieving net-zero carbon emissions by 2050, and we're always looking for the right people to help us get there - people who are as passionate as we are about making energy safer, cleaner, and more efficient. We are an energy technology company that provides solutions to energy and industrial customers worldwide. Built on a century of experience and conducting business in over 120 countries, our innovative technologies and services are taking energy forward - making it safer, cleaner and more efficient for people and the planet.

Join Us
Are you seeking an opportunity to make a real difference in a company with a global reach and exciting services and clients? Are you seeking an opportunity to make a real difference in a company that values innovation and progress? Come join us and grow with a team of people who will challenge and inspire you! Let's come together and take energy forward.
Baker Hughes Company is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law. R147951
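For illustration only, here is a minimal sketch of the kind of SCD Type 2 mapping this Baker Hughes posting asks about, assuming a Delta Lake dimension table on Databricks. All table and column names (dim_customer, customer_id, attributes_hash, the staging path) are hypothetical, and a production job would also filter the incoming batch down to genuinely changed rows before the append step.

```python
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

# Hypothetical staging data holding changed customer records
updates = spark.read.parquet("/mnt/staging/customer_changes")

dim = DeltaTable.forName(spark, "dim_customer")

# Step 1: close out the current rows whose attributes have changed
(dim.alias("t")
    .merge(updates.alias("s"),
           "t.customer_id = s.customer_id AND t.is_current = true")
    .whenMatchedUpdate(
        condition="t.attributes_hash <> s.attributes_hash",
        set={"is_current": "false", "valid_to": "current_date()"})
    .execute())

# Step 2: append the incoming records as new current versions
(updates
    .withColumn("is_current", F.lit(True))
    .withColumn("valid_from", F.current_date())
    .withColumn("valid_to", F.lit(None).cast("date"))
    .write.format("delta").mode("append").saveAsTable("dim_customer"))
```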

Posted 1 week ago

Apply

6.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Job Description Are You Ready to Make It Happen at Mondelēz International? Join our Mission to Lead the Future of Snacking. Make It With Pride. Together with analytics team leaders you will support our business with excellent data models to uncover trends that can drive long-term business results. How You Will Contribute You will: Work in close partnership with the business leadership team to execute the analytics agenda Identify and incubate best-in-class external partners to drive delivery on strategic projects Develop custom models/algorithms to uncover signals/patterns and trends to drive long-term business performance Execute the business analytics program agenda using a methodical approach that conveys to stakeholders what business analytics will deliver What You Will Bring A desire to drive your future and accelerate your career and the following experience and knowledge: Using data analysis to make recommendations to senior leaders Technical experience in roles in best-in-class analytics practices Experience deploying new analytical approaches in a complex and highly matrixed organization Savvy in usage of the analytics techniques to create business impacts As part of the Global MSC (Mondelez Supply Chain) Data & Analytics team, you will support our business to uncover trends that can drive long-term business results. In this role, you will be a key technical leader in developing our cutting-edge Supply Chain Data Product ecosystem. You'll have the opportunity to design, build, and automate data ingestion, harmonization, and transformation processes, driving advanced analytics, reporting, and insights to optimize Supply Chain performance across the organization. You will play an instrumental part in engineering robust and scalable data solutions, acting as a hands-on expert for Supply Chain data, and contributing to how these data products are visualized and interacted with. What You Will Bring A desire to drive your future and accelerate your career and the following experience and knowledge: SAP Data Expertise: Deep hands-on experience in extracting, transforming, and modeling data from SAP ECC/S4HANA (modules like MM, SD, PP, QM, FI/CO) and SAP BW/HANA. Proven ability to understand SAP data structures and business processes within Supply Chain. Cloud Data Engineering (GCP Focused): Strong proficiency and hands-on experience in data warehousing solutions and data engineering services within the Google Cloud Platform (GCP) ecosystem (e.g., BigQuery, Dataflow, Dataproc, Cloud Composer, Pub/Sub). Data Pipeline Development: Design, build, and maintain robust and efficient ETL/ELT processes for data integration, ensuring data accuracy, integrity, and timeliness. BI & Analytics Enablement: Collaborate with data scientists, analysts, and business users to provide high-quality, reliable data for their analyses and models. Support the development of data consumption layers, including dashboards (e.g., Tableau, Power BI). Hands-on experience with Databricks (desirable): ideally deployed on GCP or with GCP integration for large-scale data processing, Spark-based transformations, and advanced analytics. System Monitoring & Optimization (desirable): Monitor data processing systems and pipelines to ensure efficiency, reliability, performance, and uptime; proactively identify and resolve bottlenecks. Industry Knowledge: Solid understanding of the consumer goods industry, particularly Supply Chain processes and relevant key performance indicators (KPIs). 
What extra ingredients you will bring: Excellent communication and collaboration skills to facilitate effective teamwork and Supply Chain stakeholders’ engagement. Ability to explain complex data concepts to both technical and non-technical individuals. Experience delegating work and assignments to team members and guiding them through technical issues and challenges. Ability to thrive in an entrepreneurial, fast-paced setting, managing complex data challenges with a solutions-oriented approach. Strong problem-solving skills and business acumen, particularly within the Supply Chain domain. Experience working in Agile development environments with a Product mindset is a plus. Education / Certifications: Bachelor's degree in Information Systems/Technology, Computer Science, Analytics, Engineering, or a related field. 6+ years of hands-on experience in data engineering, data warehousing, or a similar technical role, preferably in CPG or manufacturing with a strong focus on Supply Chain data. Within Country Relocation support available and for candidates voluntarily moving internationally some minimal support is offered through our Volunteer International Transfer Policy Business Unit Summary At Mondelēz International, our purpose is to empower people to snack right by offering the right snack, for the right moment, made the right way. That means delivering a broad range of delicious, high-quality snacks that nourish life's moments, made with sustainable ingredients and packaging that consumers can feel good about. We have a rich portfolio of strong brands globally and locally including many household names such as Oreo , belVita and LU biscuits; Cadbury Dairy Milk , Milka and Toblerone chocolate; Sour Patch Kids candy and Trident gum. We are proud to hold the top position globally in biscuits, chocolate and candy and the second top position in gum. Our 80,000 makers and bakers are located in more than 80 countries and we sell our products in over 150 countries around the world. Our people are energized for growth and critical to us living our purpose and values. We are a diverse community that can make things happen—and happen fast. Mondelēz International is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law. Job Type Regular Analytics & Modelling Analytics & Data Science
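As a rough illustration of the GCP-focused data engineering described in the posting above, the following hedged sketch loads curated Parquet extracts from Cloud Storage into a BigQuery table with the google-cloud-bigquery client; the project, bucket, dataset, and table names are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # hypothetical project id

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.PARQUET,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

# Load curated supply-chain extracts (e.g. harmonized SAP material movements)
# from Cloud Storage into a BigQuery reporting table.
load_job = client.load_table_from_uri(
    "gs://my-bucket/curated/material_movements/*.parquet",
    "my-gcp-project.supply_chain.material_movements",
    job_config=job_config,
)
load_job.result()  # block until the load job completes

table = client.get_table("my-gcp-project.supply_chain.material_movements")
print(f"Loaded {table.num_rows} rows.")
```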

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description
OneMagnify is a global performance marketing organization working at the intersection of brand marketing, technology, and analytics. The Company's core offerings accelerate business, amplify real-time results, and help set their clients apart from their competitors. OneMagnify partners with clients to design, implement and manage marketing and brand strategies using analytical and predictive data models that provide valuable customer insights to drive higher levels of sales conversion. OneMagnify's commitment to employee growth and development extends far beyond typical approaches. We take great pride in fostering an environment where each of our 700+ colleagues can thrive and achieve their personal best. OneMagnify has been recognized as a Top Workplace, Best Workplace and Cool Workplace in the United States for 10 consecutive years and recently was recognized as a Top Workplace in India.

The Data Engineering team is a dynamic group that is dedicated to transforming raw data into actionable insights. As a Databricks Engineer, you will architect, build, and maintain our data infrastructure on the Databricks Lakehouse Platform. You will collaborate with data scientists, analysts, and engineers to deliver world-class data solutions that drive our business forward.

About You:
· Eager to address complex technical challenges with a strong engineering approach.
· Outstanding execution and care for delivering robust and efficient data solutions.
· A deep understanding of cloud-based data technologies and standard methodologies for data engineering.
· Ability to develop and implement scalable data models and data warehousing solutions using Databricks.

What you'll do:
· Architect, develop, and deploy scalable and reliable data infrastructure and pipelines using Databricks and Spark.
· Design and implement data models and data warehousing solutions with a focus on performance and scalability.
· Optimize data processing frameworks and infrastructure for maximum efficiency and cost-effectiveness.
· Collaborate with data scientists and analysts to understand their data needs and engineer solutions.
· Implement robust data quality frameworks, monitoring systems, and alerting mechanisms.
· Design, build, and maintain efficient ETL/ELT processes.
· Integrate Databricks with various data sources, systems, and APIs.
· Contribute to the definition and implementation of data governance, security, and compliance policies.
· Stay current with the latest advancements in Databricks, cloud data engineering standard methodologies, and related technologies.

What you'll need:
· Bachelor's degree in Computer Science, Engineering, or a related technical field (or equivalent practical experience).
· 5+ years of experience in data engineering or a similar role with a strong emphasis on building and maintaining data infrastructure.
· Deep understanding and practical experience with the Databricks Lakehouse Platform and its core engineering aspects.
· Expert-level proficiency in working with big data processing frameworks, particularly Apache Spark.
· Strong hands-on experience with programming languages such as Python (PySpark) and/or Scala.
· Solid experience with SQL and data warehousing principles, including schema design and performance tuning.
· Proven experience with cloud platforms such as AWS, Azure, or GCP.
· Comprehensive understanding of data modeling, ETL/ELT architecture, and data quality engineering principles.
· Excellent problem-solving, analytical, and debugging skills.
· Strong communication and collaboration skills, with the ability to explain technical concepts to both technical and non-technical audiences.

Benefits
We offer a comprehensive benefits package including Medical Insurance, PF, Gratuity, paid holidays, and more.

About us
Whether it's awareness, advocacy, engagement, or efficacy, we move brands forward with work that connects with audiences and delivers results. Through meaningful analytics, engaging communications and innovative technology solutions, we help clients tackle their most ambitious projects and overcome their biggest challenges.

We are an equal opportunity employer
We believe that innovative ideas and solutions start with unique perspectives. That's why we're committed to providing every employee a workplace that's free of discrimination and intolerance. We're proud to be an equal opportunity employer and actively search for like-minded people to join our team.
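For context on the kind of Lakehouse pipeline work this role describes, here is a minimal bronze-to-silver transformation sketch in PySpark on Databricks; the paths, table names, and columns are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw events landed by an ingestion job (path is hypothetical)
bronze = spark.read.format("delta").load("/mnt/lakehouse/bronze/web_events")

# Silver: cleaned, deduplicated, typed records ready for analysts
silver = (bronze
          .dropDuplicates(["event_id"])
          .filter(F.col("event_ts").isNotNull())
          .withColumn("event_date", F.to_date("event_ts"))
          .select("event_id", "user_id", "event_type", "event_ts", "event_date"))

(silver.write.format("delta")
       .mode("overwrite")
       .partitionBy("event_date")
       .saveAsTable("silver.web_events"))
```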

Posted 1 week ago

Apply

5.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Role: Sr. Data Engineer
Location: Chennai

Experience:
• 5 to 7 years of hands-on experience in data engineering, demonstrating consistent career progression and technical growth.
• Proven ability to design, develop, and deploy highly scalable and efficient data solutions for complex business needs.
• Extensive experience managing and optimizing complex data integrations across diverse systems, platforms, and cloud environments.

Skills / Knowledge:
• Advanced proficiency in programming languages such as Python, SQL, and Shell Scripting, with the ability to implement optimized and scalable code solutions.
• Deep expertise in data platforms like Snowflake and Databricks, including extensive experience working with PySpark and distributed data frames to process large-scale datasets.
• Advanced knowledge of orchestration tools such as Azure Data Factory, Apache Airflow, and Databricks workflows, including the ability to design and manage complex, multi-step workflows (a minimal Airflow sketch follows this listing).
• Significant hands-on experience with tools like DBT for data transformation and replication solutions such as Qlik for efficient data migration and synchronization.
• Strong understanding of big data systems and frameworks, with practical experience in building and optimizing solutions for high-volume and high-velocity data.
• Extensive experience with version control tools such as GitHub or Azure DevOps, including implementing CI/CD pipelines for data engineering workflows.
• Advanced knowledge of serverless computing, including designing and deploying scalable solutions using Azure Functions with Python.
• Proficiency in API development frameworks such as FastAPI, with the ability to create robust, efficient, and secure data-driven APIs.
• Comprehensive expertise in designing and implementing ETL/ELT processes with a focus on performance, scalability, and maintainability.
• Proven experience in data warehouse development, including hands-on expertise with dimensional modeling and schema optimization for analytics.
• Solid English language communication skills, both written and verbal, for effective collaboration across teams and stakeholders.

#teceze
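The orchestration sketch referenced above: a minimal Apache Airflow 2.x DAG wiring three placeholder tasks into a daily extract-transform-load sequence. The DAG id, task bodies, and schedule are illustrative only.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Pull the day's increment from a source system (placeholder)
    print("extracting...")


def transform():
    # Apply business rules / cleansing logic (placeholder)
    print("transforming...")


def load():
    # Publish curated tables to the warehouse (placeholder)
    print("loading...")


with DAG(
    dag_id="daily_sales_elt",          # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```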

Posted 1 week ago

Apply

8.0 - 10.0 years

25 - 30 Lacs

Chennai

Work from Office

Role Purpose The purpose of the role is to create exceptional architectural solution design and thought leadership and enable delivery teams to provide exceptional client engagement and satisfaction. Do 1.Develop architectural solutions for the new deals/ major change requests in existing deals Creates an enterprise-wide architecture that ensures systems are scalable, reliable, and manageable. Provide solutioning of RFPs received from clients and ensure overall design assurance Develop a direction to manage the portfolio of to-be-solutions including systems, shared infrastructure services, applications in order to better match business outcome objectives Analyse technology environment, enterprise specifics, client requirements to set a collaboration solution design framework/ architecture Provide technical leadership to the design, development and implementation of custom solutions through thoughtful use of modern technology Define and understand current state solutions and identify improvements, options & tradeoffs to define target state solutions Clearly articulate, document and sell architectural targets, recommendations and reusable patterns and accordingly propose investment roadmaps Evaluate and recommend solutions to integrate with overall technology ecosystem Works closely with various IT groups to transition tasks, ensure performance and manage issues through to resolution Perform detailed documentation (App view, multiple sections & views) of the architectural design and solution mentioning all the artefacts in detail Validate the solution/ prototype from technology, cost structure and customer differentiation point of view Identify problem areas and perform root cause analysis of architectural design and solutions and provide relevant solutions to the problem Collaborating with sales, program/project, consulting teams to reconcile solutions to architecture Tracks industry and application trends and relates these to planning current and future IT needs Provides technical and strategic input during the project planning phase in the form of technical architectural designs and recommendation Collaborates with all relevant parties in order to review the objectives and constraints of solutions and determine conformance with the Enterprise Architecture Identifies implementation risks and potential impacts 2.Enable Delivery Teams by providing optimal delivery solutions/ frameworks Build and maintain relationships with executives, technical leaders, product owners, peer architects and other stakeholders to become a trusted advisor Develops and establishes relevant technical, business process and overall support metrics (KPI/SLA) to drive results Manages multiple projects and accurately reports the status of all major assignments while adhering to all project management standards Identify technical, process, structural risks and prepare a risk mitigation plan for all the projects Ensure quality assurance of all the architecture or design decisions and provides technical mitigation support to the delivery teams Recommend tools for reuse, automation for improved productivity and reduced cycle times Leads the development and maintenance of enterprise framework and related artefacts Develops trust and builds effective working relationships through respectful, collaborative engagement across individual product teams Ensures architecture principles and standards are consistently applied to all the projects Ensure optimal Client Engagement Support pre-sales team while presenting the entire 
solution design and its principles to the client Negotiate, manage and coordinate with the client teams to ensure all requirements are met and create an impact with the proposed solution Demonstrate thought leadership with strong technical capability in front of the client to win their confidence and act as a trusted advisor 3. Competency Building and Branding Ensure completion of necessary trainings and certifications Develop Proof of Concepts (POCs), case studies, demos etc. for new growth areas based on market and customer research Develop and present a point of view of Wipro on solution design and architecture by writing white papers, blogs etc. Attain market referenceability and recognition through highest analyst rankings, client testimonials and partner credits Be the voice of Wipro's Thought Leadership by speaking in forums (internal and external) Mentor developers, designers and junior architects in the project for their further career development and enhancement Contribute to the architecture practice by conducting selection interviews etc. 4. Team Management Resourcing Anticipating new talent requirements as per the market/ industry trends or client requirements Hire adequate and right resources for the team Talent Management Ensure adequate onboarding and training for the team members to enhance capability & effectiveness Build an internal talent pool and ensure their career progression within the organization Manage team attrition Drive diversity in leadership positions Performance Management Set goals for the team, conduct timely performance reviews and provide constructive feedback to own direct reports Ensure that the Performance Nxt is followed for the entire team Employee Satisfaction and Engagement Lead and drive engagement initiatives for the team Track team satisfaction scores and identify initiatives to build engagement within the team
Mandatory Skills: DataBricks - Data Engineering. Experience: 8-10 Years.

Posted 1 week ago

Apply

7.0 years

0 Lacs

India

On-site

Exp: 7+ years
Location: Trivandrum, Kochi, Bangalore
No. of Positions: 20
Notice: 0-15 days ONLY
Mandatory Skills: PySpark, SQL, Azure Databricks, Python

JD:
• Strong experience in Azure, Databricks, SQL, Python, and PySpark
• Proficiency in a data scripting language
• Experience with Relational and Big Data engines
• Solid understanding of data pipeline design (e.g., Kimball/Star Schema)
• Strong Data Warehousing and data modeling skills
• Hands-on ETL/ELT process experience
• Good communication and documentation practices
• Familiarity with Agile ways of working
• Use of a MacBook Pro (mandatory for compatibility with our scripts and pipelines)
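As a small illustration of the Kimball/star-schema pipeline design this listing mentions, the following PySpark sketch builds a fact table by resolving surrogate keys against two dimensions; every table and column name is hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

orders = spark.table("staging.orders")         # hypothetical staging table
dim_cust = spark.table("dw.dim_customer")      # conformed customer dimension
dim_date = spark.table("dw.dim_date")          # standard date dimension

# Resolve surrogate keys and keep only additive measures, Kimball-style
fact_orders = (orders
    .join(dim_cust, orders.customer_id == dim_cust.customer_id, "left")
    .join(dim_date, F.to_date(orders.order_ts) == dim_date.calendar_date, "left")
    .select(
        dim_cust.customer_sk.alias("customer_sk"),
        dim_date.date_sk.alias("date_sk"),
        orders.order_id,
        orders.quantity,
        orders.net_amount,
    ))

fact_orders.write.format("delta").mode("overwrite").saveAsTable("dw.fact_orders")
```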

Posted 1 week ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

AI/ML Manager: Location – Pune Experience: 8+ years Notice period – Immediate to 30 days. Key Responsibilities: Lead the development of machine learning PoCs and demos using structured/tabular data for use cases such as forecasting, risk scoring, churn prediction, and optimization. Collaborate with sales engineering teams to understand client needs and present ML solutions during pre-sales calls and technical workshops. Build ML workflows using tools such as SageMaker, Azure ML, or Databricks ML and manage training, tuning, evaluation, and model packaging. Apply supervised, unsupervised, and semi-supervised techniques such as XGBoost, CatBoost, k-Means, PCA, time-series models, and more. Work with data engineering teams to define data ingestion, preprocessing, and feature engineering pipelines using Python, Spark, and cloud-native tools. Package and document ML assets so they can be scaled or transitioned into delivery teams post-demo. Stay current with best practices in ML explainability, model performance monitoring, and MLOps practices. Participate in internal knowledge sharing, tooling evaluation, and continuous improvement of lab processes. Qualifications: 8+ years of experience developing and deploying classical machine learning models in production or PoC environments. Strong hands-on experience with Python, pandas, scikit-learn, and ML libraries such as XGBoost, CatBoost, LightGBM, etc. Familiarity with cloud-based ML environments such as AWS SageMaker, Azure ML, or Databricks. Solid understanding of feature engineering, model tuning, cross-validation, and error analysis. Experience with unsupervised learning, clustering, anomaly detection, and dimensionality reduction techniques. Comfortable presenting models and insights to technical and non-technical stakeholders during pre-sales engagements. Working knowledge of MLOps concepts, including model versioning, deployment automation, and drift detection. Interested candidates shall apply or share resumes at kanika.garg@austere.co.in.
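To make the classical-ML workflow described above concrete, here is a hedged scikit-learn/XGBoost sketch for a tabular churn-prediction proof of concept with cross-validated and held-out ROC-AUC; the dataset file and column names are assumptions, and the features are assumed to be numeric.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score, train_test_split
from xgboost import XGBClassifier

# Hypothetical tabular churn dataset with a binary "churned" label
df = pd.read_csv("churn.csv")
X = df.drop(columns=["churned"])
y = df["churned"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = XGBClassifier(
    n_estimators=300,
    max_depth=5,
    learning_rate=0.05,
    subsample=0.8,
)

# Cross-validated AUC on the training split, then a held-out check
cv_auc = cross_val_score(model, X_train, y_train, cv=5, scoring="roc_auc")
print("CV ROC-AUC:", round(cv_auc.mean(), 3))

model.fit(X_train, y_train)
holdout_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print("Holdout ROC-AUC:", round(holdout_auc, 3))
```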

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Note:- This job position is for both Pune and Hyderabad location. Job Summary: We are seeking a highly skilled Visualization Expert with deep expertise in Tableau and practical experience in Databricks integration to join our data team. The ideal candidate will play a pivotal role in transforming complex data into actionable insights through engaging dashboards and reports, while ensuring seamless data pipelines and optimized connections between Databricks and Tableau. Key Responsibilities: Design, develop, and maintain advanced Tableau dashboards and visualizations to support business decision-making. Integrate Tableau with Databricks to access and visualize data efficiently from Delta Lake, Spark, and other sources. Collaborate with data engineers, analysts, and stakeholders to understand business needs and translate them into visualization solutions. Optimize Tableau extracts and live connections for performance and scalability. Develop and document best practices for visualization design, data governance, and dashboard deployment. Ensure data accuracy, reliability, and security in visualizations. Stay current with Tableau and Databricks features, and implement new capabilities where beneficial. Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Information Systems, Data Science, or related field. 5+ years of experience building Tableau dashboards and visualizations. 1+ years of hands-on experience integrating Tableau with Databricks (including SQL, Delta Lake, and Spark environments). Strong understanding of data modeling, ETL processes, and analytics workflows. Proficient in writing optimized SQL queries. Experience with Tableau Server or Tableau Cloud deployment and administration. Ability to work with large datasets and troubleshoot performance issues. Preferred Qualifications: Experience with scripting or automation tools (e.g., Python, DBT). Familiarity with other BI tools and cloud platforms (e.g., Power BI, AWS, Azure). Tableau certification (Desktop Specialist/Professional or Server). Understanding of data privacy and compliance standards (e.g., GDPR, HIPAA). Soft Skills: Strong analytical and problem-solving skills. Excellent communication and stakeholder management abilities. Detail-oriented with a strong focus on data accuracy and user experience. Comfortable working independently and collaboratively in a fast-paced environment.
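For illustration, a minimal sketch of pulling a curated dataset out of a Databricks SQL warehouse with the databricks-sql-connector package, for example to validate what a Tableau live connection or extract would see; the environment variables, schema, and query are placeholders.

```python
import os

import pandas as pd
from databricks import sql  # databricks-sql-connector package

# Connection details point at the same Databricks SQL warehouse that Tableau
# uses; all identifiers below are placeholders.
with sql.connect(
    server_hostname=os.environ["DATABRICKS_HOST"],
    http_path=os.environ["DATABRICKS_HTTP_PATH"],
    access_token=os.environ["DATABRICKS_TOKEN"],
) as conn:
    with conn.cursor() as cursor:
        cursor.execute("""
            SELECT region, order_date, SUM(net_amount) AS revenue
            FROM gold.fact_orders
            GROUP BY region, order_date
        """)
        df = pd.DataFrame(
            cursor.fetchall(),
            columns=[c[0] for c in cursor.description],
        )

print(df.head())
```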

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bhubaneswar, Odisha, India

On-site

Project Role : Data Engineer Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills : Databricks Unified Data Analytics Platform Good to have skills : Business Agility Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Additionally, you will monitor and optimize data workflows to enhance performance and reliability, ensuring that data is accessible and actionable for stakeholders. Roles & Responsibilities: - Need Databricks resource with Azure cloud experience - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute in providing solutions to work related problems. - Collaborate with data architects and analysts to design scalable data solutions. - Implement best practices for data governance and security throughout the data lifecycle. Professional & Technical Skills: - Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform. - Good To Have Skills: Experience with Business Agility. - Strong understanding of data modeling and database design principles. - Experience with data integration tools and ETL processes. - Familiarity with cloud platforms and services related to data storage and processing. Additional Information: - The candidate should have minimum 3 years of experience in Databricks Unified Data Analytics Platform. - This position is based at our Pune office. - A 15 years full time education is required.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Information Systems, Data Science, or related field. 5+ years of experience building Tableau dashboards and visualizations. 1+ years of hands-on experience integrating Tableau with Databricks (including SQL, Delta Lake, and Spark environments). Strong understanding of data modeling, ETL processes, and analytics workflows. Proficient in writing optimized SQL queries. Experience with Tableau Server or Tableau Cloud deployment and administration. Ability to work with large datasets and troubleshoot performance issues. Preferred Qualifications: Experience with scripting or automation tools (e.g., Python, DBT). Familiarity with other BI tools and cloud platforms (e.g., Power BI, AWS, Azure). Tableau certification (Desktop Specialist/Professional or Server). Understanding of data privacy and compliance standards (e.g., GDPR, HIPAA). Soft Skills: Strong analytical and problem-solving skills. Excellent communication and stakeholder management abilities. Detail-oriented with a strong focus on data accuracy and user experience. Comfortable working independently and collaboratively in a fast-paced environment.

Posted 1 week ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Overview: We are looking for a skilled and motivated Data Engineer with strong experience in Python programming and Google Cloud Platform (GCP) to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust and scalable ETL (Extract, Transform, Load) data pipelines. The role involves working with various GCP services, implementing data ingestion and transformation logic, and ensuring data quality and consistency across systems. Key Responsibilities: Design, develop, test, and maintain scalable ETL data pipelines using Python. Work extensively on Google Cloud Platform (GCP) services such as: Dataflow for real-time and batch data processing Cloud Functions for lightweight serverless compute BigQuery for data warehousing and analytics Cloud Composer for orchestration of data workflows (based on Apache Airflow) Google Cloud Storage (GCS) for managing data at scale IAM for access control and security Cloud Run for containerized applications Perform data ingestion from various sources and apply transformation and cleansing logic to ensure high-quality data delivery. Implement and enforce data quality checks, validation rules, and monitoring. Collaborate with data scientists, analysts, and other engineering teams to understand data needs and deliver efficient data solutions. Manage version control using GitHub and participate in CI/CD pipeline deployments for data projects. Write complex SQL queries for data extraction and validation from relational databases such as SQL Server, Oracle, or PostgreSQL. Document pipeline designs, data flow diagrams, and operational support procedures. Required Skills: 4–6 years of hands-on experience in Python for backend or data engineering projects. Strong understanding and working experience with GCP cloud services (especially Dataflow, BigQuery, Cloud Functions, Cloud Composer, etc.). Solid understanding of data pipeline architecture, data integration, and transformation techniques. Experience in working with version control systems like GitHub and knowledge of CI/CD practices. Strong experience in SQL with at least one enterprise database (SQL Server, Oracle, PostgreSQL, etc.). Good to Have (Optional Skills): Experience working with the Snowflake cloud data platform. Hands-on knowledge of Databricks for big data processing and analytics. Familiarity with Azure Data Factory (ADF) and other Azure data engineering tools.
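As a rough sketch of the Dataflow-plus-BigQuery pattern described above, the following Apache Beam pipeline reads newline-delimited JSON from Cloud Storage, parses it, and appends rows to a BigQuery table; the bucket, project, dataset, and field names are hypothetical, and on GCP it would run with the Dataflow runner (plus project, region, and temp_location options).

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Runs locally with the DirectRunner by default; pass --runner=DataflowRunner
# and the usual GCP options to execute on Dataflow.
options = PipelineOptions()


def parse_event(line: str) -> dict:
    event = json.loads(line)
    return {"user_id": event["user_id"], "event_type": event["type"]}


with beam.Pipeline(options=options) as p:
    (p
     | "ReadRaw" >> beam.io.ReadFromText("gs://my-bucket/raw/events/*.json")
     | "Parse" >> beam.Map(parse_event)
     | "WriteToBQ" >> beam.io.WriteToBigQuery(
           table="my-project:analytics.events",
           schema="user_id:STRING,event_type:STRING",
           write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
           create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
       ))
```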

Posted 1 week ago

Apply

6.0 years

0 Lacs

India

On-site

We are seeking a talented and experienced Data Engineer with a strong background in Microsoft Fabric to design, develop, and maintain robust, scalable, and secure data solutions. You'll play a crucial role in building and optimizing data pipelines, data warehouses, and data lakehouses within the Microsoft Fabric ecosystem to enable advanced analytics and business intelligence. Key Responsibilities Design and Development: Architect, design, develop, and implement end-to-end data solutions within the Microsoft Fabric ecosystem, including Lakehouse, Data Warehouse, and Real-Time Analytics components. Data Pipeline Construction: Build, test, and maintain robust and scalable data pipelines for data ingestion, transformation, and curation from diverse sources using Microsoft Fabric Data Factory (Pipelines and Dataflows Gen2) and Azure Databricks (PySpark/Scala) . Data Modeling & Optimization: Develop and optimize data models within Microsoft Fabric, adhering to best practices for performance, scalability, and data integrity (e.g., dimensional modeling). ETL/ELT Processes: Implement efficient ETL/ELT processes to extract data from various sources, transform it into suitable formats, and load it into the data lakehouse or analytical systems. Performance Tuning: Continuously monitor and fine-tune data pipelines and processing workflows to enhance overall performance and efficiency, especially for large-scale datasets. Data Quality & Governance: Design and implement data quality, validation, and reconciliation processes to ensure data accuracy and reliability. Ensure data security and compliance with data privacy regulations. Collaboration: Work closely with data architects, data scientists, business intelligence developers, and business stakeholders to understand data requirements and translate them into technical solutions. Automation & CI/CD: Implement CI/CD pipelines for data solutions within Azure DevOps or similar tools, ensuring automated deployment and version control. Troubleshooting: Troubleshoot and resolve complex data-related issues and performance bottlenecks. Documentation: Maintain comprehensive documentation for data architectures, pipelines, data models, and processes. Stay Updated: Keep abreast of the latest advancements in Microsoft Fabric, Azure data services, and data engineering best practices. Required Skills & Qualifications Bachelor's degree in Computer Science, Information Technology, or a related quantitative field, or equivalent practical experience. 6+ years of hands-on experience as a Data Engineer or Data Architect. Mandatory hands-on experience with Microsoft Fabric , including its core components such as Lakehouse, Data Warehouse, and Data Factory (Pipelines, Dataflows Gen2), and Spark notebooks. Strong expertise in Microsoft Azure data services, including: Azure Databricks (PySpark/Scala for complex data processing and transformations). Azure Data Lake Storage Gen2 (for scalable data storage). Azure Data Factory (for ETL/ELT orchestration). Proficiency in SQL for data manipulation and querying. Experience with Python or Scala for data engineering tasks. Solid understanding of data warehousing concepts, data modeling (dimensional, relational), and data lakehouse architectures. Experience with version control systems (e.g., Git, Azure Repos). Strong analytical and problem-solving skills with a keen eye for detail. Excellent communication (written and verbal) and interpersonal skills to collaborate effectively with cross-functional teams.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

🚀 We're Hiring! | Data Engineer—Pune (Hybrid) 📍 Location: Pune 🖥️ Mode: Work From Office (Hybrid) 🔍 Position: Data Engineer Are you passionate about building smart, automated testing solutions? Join our growing team! Job Description · Bachelor's or master's degree in computer science, IT, or equivalent and a minimum of 4 to 8 years of experience building and deploying complex data pipelines and data solutions. · Bachelor's or master's degree in computer science, IT, or equivalent (for junior profiles). · Experience deploying data pipelines using technologies like Databricks. · Hands on experience with Java. · Hands-on experience in Databricks. · Experience with visualization software, preferably Splunk (or else Grafana, Prometheus, PowerBI, Tableau, or similar). · Strong experience with SQL , Java with hands-on experience in data modeling. · Experience with Pyspark or Spark to deal with distributed data. · Good to have knowledge on Splunk (SPL) · Experience with data schemas (e.g. JSON/XML/Avro). · Experience in deploying services as containers (e.g. Docker, Kubernetes). · Experience in working with cloud services (preferably with Azure). · Experience with streaming and/or batch storage (e.g. Kafka, streaming platform) is a plus. · Experience in data quality management and monitoring is a plus. · Strong communication skills in English. 📩 Interested? Let's connect! Send your updated CV to: nivetha.s@eminds.ai Join us and be part of something exciting!

Posted 1 week ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

At EXL, we go beyond capabilities to focus on collaboration and character, tailoring solutions to your unique needs, culture, goals, and technology environments. We specialize in transformation, data science, and change management to enhance efficiency, improve customer relationships, and drive revenue growth. Our expertise in analytics, digital interventions, and operations management helps you outperform the competition with sustainable models at scale. As your business evolution partner, we optimize data leverage for better business decisions and intelligence-driven operations. For more information, visit www.exlservice.com. Job Title - Data Engineer - PySpark, Python, SQL, Git, AWS Services – Glue, Lambda, Step Functions, S3, Athena. Role Description We are seeking a talented Data Engineer with expertise in PySpark, Python, SQL, Git, and AWS to join our dynamic team. The ideal candidate will have a strong background in data engineering, data processing, and cloud technologies. You will play a crucial role in designing, developing, and maintaining our data infrastructure to support our analytics Responsibilities: 1. Develop and maintain ETL pipelines using PySpark and AWS Glue to process and transform large volumes of data efficiently. 2. Collaborate with analysts to understand data requirements and ensure data availability and quality. 3. Write and optimize SQL queries for data extraction, transformation, and loading. 4. Utilize Git for version control, ensuring proper documentation and tracking of code changes. 5. Design, implement, and manage scalable data lakes on AWS, including S3, or other relevant services for efficient data storage and retrieval. 6. Develop and optimize high-performance, scalable databases using Amazon DynamoDB. 7. Proficiency in Amazon QuickSight for creating interactive dashboards and data visualizations. 8. Automate workflows using AWS Cloud services like event bridge, step functions. 9. Monitor and optimize data processing workflows for performance and scalability. 10. Troubleshoot data-related issues and provide timely resolution. 11. Stay up-to-date with industry best practices and emerging technologies in data engineering. Qualifications: 1. Bachelor's degree in Computer Science, Data Science, or a related field. Master's degree is a plus. 2. Strong proficiency in PySpark and Python for data processing and analysis. 3. Proficiency in SQL for data manipulation and querying. 4. Experience with version control systems, preferably Git. 5. Familiarity with AWS services, including S3, Redshift, Glue, Step Functions, Event Bridge, CloudWatch, Lambda, Quicksight, DynamoDB, Athena, CodeCommit etc. 6. Familiarity with Databricks and it’s concepts. 7. Excellent problem-solving skills and attention to detail. 8. Strong communication and collaboration skills to work effectively within a team. 9. Ability to manage multiple tasks and prioritize effectively in a fast-paced environment. Preferred Skills: 1. Knowledge of data warehousing concepts and data modeling. 2. Familiarity with big data technologies like Hadoop and Spark. 3. AWS certifications related to data engineering.
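For illustration only, a minimal AWS Glue PySpark job skeleton of the kind this role describes: it bootstraps the Glue context, reads raw CSV from S3, applies light cleansing, and writes partitioned Parquet that Athena can query. Bucket names and columns are placeholders.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job bootstrap
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw CSV drops from S3 (placeholder bucket and columns)
raw = (spark.read.option("header", "true")
       .csv("s3://my-raw-bucket/sales/"))

# Light cleansing and typing before publishing a curated layer
curated = (raw
           .withColumn("sale_date", F.to_date("sale_date"))
           .withColumn("amount", F.col("amount").cast("double"))
           .dropna(subset=["order_id"]))

(curated.write.mode("overwrite")
        .partitionBy("sale_date")
        .parquet("s3://my-curated-bucket/sales/"))

job.commit()
```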

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Hi, we have a job opening for Team Lead - Data Migration and Snowflake.

Company Name: PibyThree Consulting Pvt Ltd.
Job Title: Team Lead - Data Migration and Snowflake
Skills: Azure Data Factory, Databricks, PySpark, Snowflake & Data Migration
Location: Pune, Maharashtra

About Us: Πby3 is a cloud transformation company enabling enterprises for the future. We are a nimble, highly dynamic, focused team with a passion to serve our clients with utmost trust and ownership. Our expertise in technology, backed by vast experience over the years, helps clients get solutions with optimized cost and reduced risk.

Job Description: We are looking for an experienced Team Lead – Data Warehouse Migration, Data Engineering & BI to lead enterprise-level data transformation initiatives. The ideal candidate will have deep expertise in migration, Snowflake, Power BI, and end-to-end data engineering using tools like Azure Data Factory, Databricks, and PySpark.

Key Responsibilities: Lead and manage data warehouse migration projects, including extraction, transformation, and loading (ETL/ELT) across legacy and modern platforms. Architect and implement scalable Snowflake data warehousing solutions for analytics and reporting. Develop and schedule robust data pipelines using Azure Data Factory and Databricks. Write efficient and maintainable PySpark code for batch and real-time data processing. Design and develop dashboards and reports using Power BI to support business insights. Ensure data accuracy, security, and consistency throughout the project lifecycle. Collaborate with stakeholders to understand data and reporting requirements. Mentor and lead a team of data engineers and BI developers. Manage project timelines, deliverables, and team performance effectively.

Must-Have Skills:
Data Migration: Hands-on experience with large-scale data migration, reconciliation, and transformation.
Snowflake: Data modeling, performance tuning, ELT/ETL development, role-based access control.
Azure Data Factory: Pipeline development, integration services, linked services.
Databricks: Spark SQL, notebooks, cluster management, orchestration.
PySpark: Advanced transformations, error handling, and optimization techniques.
Power BI: Data visualization, DAX, Power Query, dashboard/report publishing and maintenance.

Preferred Skills: Familiarity with Agile methodologies and sprint-based development. Experience in working with CI/CD for data workflows. Ability to lead client discussions and manage stakeholder expectations. Strong analytical and problem-solving abilities.

Regards,
Arshee Khan
Talent Acquisition Specialist
Email: Arshee.khan@Piythree.com
https://www.linkedin.com/in/arshee-khan-90311b2b5?utm_source=share&am...
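As a hedged sketch of the Databricks-to-Snowflake loading step this posting implies, the following PySpark snippet writes a curated Delta dataset into Snowflake through the Spark Snowflake connector. The short format name "snowflake" assumes the connector bundled with Databricks (elsewhere use "net.snowflake.spark.snowflake"), and all connection values, paths, and table names are placeholders that would normally come from a secret scope.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Placeholder connection options for the Snowflake Spark connector
sf_options = {
    "sfURL": "myaccount.snowflakecomputing.com",
    "sfUser": "etl_user",
    "sfPassword": "***",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "LOAD_WH",
}

# Curated output of an upstream Azure Data Factory / Databricks pipeline
df = spark.read.format("delta").load("/mnt/curated/orders")

(df.write.format("snowflake")
   .options(**sf_options)
   .option("dbtable", "ORDERS")
   .mode("overwrite")
   .save())
```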

Posted 1 week ago

Apply

7.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Overview We are seeking an ETL Developer with expertise in Advanced SQL, Python, and Shell Scripting. This full-time position reports to the Data Engineering Manager and is available in a hybrid work model. This is a replacement position within the SRAI - EYC Implementation team. Key Responsibilities Design and develop ETL processes for data extraction, transformation, and loading. Utilize Advanced SQL for data processing and analysis. Implement data processing solutions using Python and Shell Scripting. Collaborate with cross-functional teams to understand data requirements. Maintain and optimize data pipelines for performance and reliability. Provide insights and analysis to support business decisions. Ensure data quality and integrity throughout the ETL process. Stay updated on industry trends and best practices in data engineering. Must-Have Skills and Qualifications 7-8 years of experience as an ETL Developer. Expertise in Advanced SQL for data manipulation and analysis. Proficient in Python and Shell Scripting. Foundational understanding of Databricks and Power BI. Strong logical problem-solving skills. Experience in data processing and transformation. Understanding of the retail domain is a plus. Good-to-Have Skills and Qualifications Familiarity with cloud data platforms (AWS, Azure). Knowledge of data warehousing concepts. Experience with data visualization tools. Understanding of Agile methodologies. What We Offer Competitive salary and comprehensive benefits package. Opportunities for professional growth and advancement. Collaborative and innovative work environment. Flexible work arrangements. Impactful work that drives industry change.

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job description Job Name: Senior Data Engineer Azure Years of Experience: 5 Job Description: We are looking for a skilled and experienced Senior Azure Developer to join our team! As part of the team, you will be involved in the implementation of the ongoing and new initiatives for our company. If you love learning, thinking strategically, innovating, and helping others, this job is for you! Primary Skills: ADF, Databricks Secondary Skills: DBT, Python, Databricks, Airflow, Fivetran, Glue, Snowflake Role Description: Data engineering role requires creating and managing technological infrastructure of a data platform, be in-charge /involved in architecting, building, and managing data flows / pipelines and construct data storages (noSQL, SQL), tools to work with big data (Hadoop, Kafka), and integration tools to connect sources or other databases Role Responsibility: Translate functional specifications and change requests into technical specifications Translate business requirement document, functional specification, and technical specification to related coding Develop efficient code with unit testing and code documentation Ensuring accuracy and integrity of data and applications through analysis, coding, documenting, testing, and problem solving Setting up the development environment and configuration of the development tools Communicate with all the project stakeholders on the project status Manage, monitor, and ensure the security and privacy of data to satisfy business needs Contribute to the automation of modules, wherever required To be proficient in written, verbal and presentation communication (English) Co-ordinating with the UAT team Role Requirement: Proficient in basic and advanced SQL programming concepts (Procedures, Analytical functions etc.) Good Knowledge and Understanding of Data warehouse concepts (Dimensional Modeling, change data capture, slowly changing dimensions etc.) Knowledgeable in Shell / PowerShell scripting Knowledgeable in relational databases, nonrelational databases, data streams, and file stores Knowledgeable in performance tuning and optimization Experience in Data Profiling and Data validation Experience in requirements gathering and documentation processes and performing unit testing Understanding and Implementing QA and various testing process in the project Knowledge in any BI tools will be an added advantage Sound aptitude, outstanding logical reasoning, and analytical skills Willingness to learn and take initiatives Ability to adapt to fast-paced Agile environment Additional Requirement: Demonstrated expertise as a Data Engineer, specializing in Azure cloud services. Highly skilled in Azure Data Factory, Azure Data Lake, Azure Databricks, and Azure Synapse Analytics. Create and execute efficient, scalable, and dependable data pipelines utilizing Azure Data Factory. Utilize Azure Databricks for data transformation and processing. Effectively oversee and enhance data storage solutions, emphasizing Azure Data Lake and other Azure storage services. Construct and uphold workflows for data orchestration and scheduling using Azure Data Factory or equivalent tools. Proficient in programming languages like Python, SQL, and conversant with pertinent scripting languages

Posted 1 week ago

Apply

4.0 years

0 Lacs

Greater Kolkata Area

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
· Analyse current business practices, processes, and procedures, and identify future business opportunities for leveraging Microsoft Azure Data & Analytics Services.
· Provide technical leadership and thought leadership as a senior member of the Analytics Practice in areas such as data access & ingestion, data processing, data integration, data modeling, database design & implementation, data visualization, and advanced analytics.
· Engage and collaborate with customers to understand business requirements/use cases and translate them into detailed technical specifications.
· Develop best practices including reusable code, libraries, patterns, and consumable frameworks for cloud-based data warehousing and ETL.
· Maintain best-practice standards for the development of cloud-based data warehouse solutions, including naming standards.
· Design and implement highly performant data pipelines from multiple sources using Apache Spark and/or Azure Databricks (a minimal illustrative sketch follows this listing).
· Integrate the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is always maintained.
· Work with other members of the project team to support delivery of additional project components (API interfaces).
· Evaluate the performance and applicability of multiple tools against customer requirements.
· Work within an Agile delivery/DevOps methodology to deliver proof of concept and production implementations in iterative sprints.
· Integrate Databricks with other technologies (ingestion tools, visualization tools).

Requirements:
· Proven experience working as a data engineer.
· Highly proficient in the Spark framework (Python and/or Scala).
· Extensive knowledge of data warehousing concepts, strategies, and methodologies.
· Direct experience of building data pipelines using Azure Data Factory and Apache Spark (preferably in Databricks).
· Hands-on experience designing and delivering solutions using Azure, including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, and Azure Stream Analytics.
· Experience in designing and hands-on development of cloud-based analytics solutions.
· Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required.
· Designing and building data pipelines using API ingestion and streaming ingestion methods.
· Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential.
· Thorough understanding of Azure cloud infrastructure offerings.
· Strong experience in common data warehouse modeling principles, including Kimball.
· Working knowledge of Python is desirable.
· Experience developing security models.
· Databricks & Azure Big Data Architecture certification would be a plus.

Mandatory skill sets: ADE, ADB, ADF
Preferred skill sets: ADE, ADB, ADF
Years of experience required: 4-8 years
Education qualification: BE, B.Tech, MCA, M.Tech
Degrees/Field of Study required: Master of Engineering, Bachelor of Engineering
Required Skills: Microsoft Azure
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}
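For context on the kind of pipeline work this listing describes (Azure Data Factory orchestrating Spark/Databricks jobs), here is a minimal, illustrative PySpark sketch of a batch load from ADLS Gen2 into a Delta table. The storage account ("examplelake"), containers, and column names (order_id, order_date, quantity, unit_price) are hypothetical placeholders, and the job assumes Delta Lake is available on the cluster, as it is on Databricks.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("adls_to_delta_sketch").getOrCreate()

# Hypothetical ADLS Gen2 paths; in a Databricks workspace these would be
# abfss:// locations reached through the cluster's storage credentials.
source_path = "abfss://raw@examplelake.dfs.core.windows.net/sales/*.csv"
target_path = "abfss://curated@examplelake.dfs.core.windows.net/sales_delta"

# Ingest the raw files.
sales = spark.read.option("header", True).option("inferSchema", True).csv(source_path)

# Apply simple, illustrative transformations (column names are assumptions).
curated = (
    sales.withColumn("order_date", F.to_date("order_date"))
         .withColumn("net_amount", F.col("quantity") * F.col("unit_price"))
         .dropDuplicates(["order_id"])
)

# Land the curated data as a partitioned Delta table.
(curated.write.format("delta")
        .mode("overwrite")
        .partitionBy("order_date")
        .save(target_path))
```

In an orchestrated setup, Azure Data Factory would typically trigger a job like this on a schedule and handle retries, parameters, and monitoring around it.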

Posted 1 week ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role: Business Analyst
Experience: 7-14 years
Location: Hyderabad
Required Skills: Business Analysis (BRD/FRD), Stakeholder Management, UAT Testing, Data Warehouse Concepts, SQL joins and subqueries, Data Visualization tools (Power BI/MicroStrategy), and the Investment Domain (capital markets, asset management, wealth management).
Please share your resume with jyothsna.g@technogenindia.com.

Experience:
10+ years of experience as a BSA or similar role in data analytics or technology projects.
5+ years of domain experience in asset management, investment management, insurance, or financial services.
Familiarity with Investment Operations concepts such as Critical Data Elements (CDEs), data traps, and reconciliation workflows (a small SQL reconciliation sketch follows this listing).
Working knowledge of data engineering principles: ETL/ELT, data lakes, and data warehousing.
Proficiency in BI and analytics tools such as Power BI, Tableau, MicroStrategy, and SQL.
Excellent communication, analytical thinking, and stakeholder engagement skills.
Experience working in Agile/Scrum environments with cross-functional delivery teams.

Technical Skills:
Proven track record of analytical and problem-solving skills.
In-depth knowledge of investment data platforms, including GoldenSource, NeoXam, RIMES, JPM Fusion, etc.
Expertise in cloud data technologies such as Snowflake, Databricks, and AWS/GCP/Azure data services.
Strong understanding of data governance frameworks, metadata management, and data lineage.
Familiarity with regulatory requirements and compliance standards in the investment management industry.
Hands-on experience with IBORs such as BlackRock Aladdin, CRD, Eagle STAR (ABOR), Eagle PACE, and Eagle DataMart.
Familiarity with investment data platforms such as GoldenSource, FINBOURNE, NeoXam, RIMES, and JPM Fusion.
Experience with cloud data platforms like Snowflake and Databricks.
Background in data governance, metadata management, and data lineage frameworks.
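The reconciliation and SQL skills named above can be illustrated with a short sketch. The ABOR/IBOR position extracts below are invented toy data, used only to show a join that surfaces quantity breaks and a subquery that finds securities missing from one book; this is Spark SQL run through PySpark, not any specific firm's reconciliation tooling.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("recon_sketch").getOrCreate()

# Tiny, made-up ABOR vs IBOR position extracts to give the SQL something to run on.
spark.createDataFrame(
    [("FUND1", "AAPL", 100.0), ("FUND1", "MSFT", 50.0), ("FUND1", "GOOG", 25.0)],
    "portfolio STRING, security STRING, quantity DOUBLE",
).createOrReplaceTempView("abor_positions")

spark.createDataFrame(
    [("FUND1", "AAPL", 100.0), ("FUND1", "MSFT", 45.0)],
    "portfolio STRING, security STRING, quantity DOUBLE",
).createOrReplaceTempView("ibor_positions")

# Join: positions present in both books whose quantities disagree.
quantity_breaks = spark.sql("""
    SELECT a.portfolio, a.security, a.quantity AS abor_qty, i.quantity AS ibor_qty
    FROM abor_positions a
    JOIN ibor_positions i
      ON a.portfolio = i.portfolio AND a.security = i.security
    WHERE a.quantity <> i.quantity
""")

# Subquery: securities held in ABOR that are missing from IBOR altogether.
missing_in_ibor = spark.sql("""
    SELECT *
    FROM abor_positions
    WHERE security NOT IN (SELECT security FROM ibor_positions)
""")

quantity_breaks.show()
missing_in_ibor.show()
```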

Posted 1 week ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

🧭 Job Summary:
We are seeking a results-driven Data Project Manager (PM) to lead data initiatives leveraging Databricks and Confluent Kafka in a regulated banking environment. The ideal candidate will have a strong background in data platforms, project governance, and financial services, and will be responsible for ensuring successful end-to-end delivery of complex data transformation initiatives aligned with business and regulatory requirements.

Key Responsibilities:
🔹 Project Planning & Execution
- Lead planning, execution, and delivery of enterprise data projects using Databricks and Confluent.
- Develop detailed project plans, delivery roadmaps, and work breakdown structures.
- Ensure resource allocation, budgeting, and adherence to timelines and quality standards.
🔹 Stakeholder & Team Management
- Collaborate with data engineers, architects, business analysts, and platform teams to align on project goals.
- Act as the primary liaison between business units, technology teams, and vendors.
- Facilitate regular updates, steering committee meetings, and issue/risk escalations.
🔹 Technical Oversight
- Oversee solution delivery on Databricks (for data processing, ML pipelines, analytics).
- Manage real-time data streaming pipelines via Confluent Kafka (a minimal streaming sketch follows this listing).
- Ensure alignment with data governance, security, and regulatory frameworks (e.g., GDPR, CBUAE, BCBS 239).
🔹 Risk & Compliance
- Ensure all regulatory reporting data flows are compliant with local and international financial standards.
- Manage controls and audit requirements in collaboration with Compliance and Risk teams.

💼 Required Skills & Experience:
✅ Must-Have:
- 7+ years of experience in Project Management within the banking or financial services sector.
- Proven experience leading data platform projects (especially Databricks and Confluent Kafka).
- Strong understanding of data architecture, data pipelines, and streaming technologies.
- Experience managing cross-functional teams (onshore/offshore).
- Strong command of Agile/Scrum and Waterfall methodologies.
✅ Technical Exposure:
- Databricks (Delta Lake, MLflow, Spark)
- Confluent Kafka (Kafka Connect, kSQL, Schema Registry)
- Azure or AWS Cloud Platforms (preferably Azure)
- Integration tools (Informatica, Data Factory), CI/CD pipelines
- Oracle ERP Implementation experience
✅ Preferred:
- PMP / Prince2 / Scrum Master certification
- Familiarity with regulatory frameworks: BCBS 239, GDPR, CBUAE regulations
- Strong understanding of data governance principles (e.g., DAMA-DMBOK)

🎓 Education: Bachelor's or Master's in Computer Science, Information Systems, Engineering, or related field.

📈 KPIs:
- On-time, on-budget delivery of data initiatives
- Uptime and SLAs of data pipelines
- User satisfaction and stakeholder feedback
- Compliance with regulatory milestones
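As a rough illustration of the Kafka-to-Databricks streaming pattern this role oversees, here is a minimal Spark Structured Streaming sketch that consumes a topic and lands it in a Delta table. The broker address ("broker:9092"), topic ("payments"), message schema (txn_id, account, amount), and output/checkpoint paths are placeholders, and the job assumes the spark-sql-kafka connector and Delta Lake are on the cluster (both are available on Databricks); it is not any bank's actual pipeline.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka_to_delta_sketch").getOrCreate()

# Hypothetical payload schema for the JSON messages on the topic.
schema = StructType([
    StructField("txn_id", StringType()),
    StructField("account", StringType()),
    StructField("amount", DoubleType()),
])

# Read the stream from Kafka (endpoint and topic are placeholders).
raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "payments")
       .option("startingOffsets", "latest")
       .load())

# Parse the message value from bytes -> JSON string -> typed columns.
parsed = (raw.selectExpr("CAST(value AS STRING) AS json")
             .select(F.from_json("json", schema).alias("r"))
             .select("r.*"))

# Append the parsed events to a Delta table, with checkpointing for exactly-once sinks.
query = (parsed.writeStream.format("delta")
         .option("checkpointLocation", "/tmp/checkpoints/payments")
         .outputMode("append")
         .start("/tmp/delta/payments"))
query.awaitTermination()
```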

Posted 1 week ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

The Data Quality Stable Team is part of the Data Governance & Data Management function. They are responsible for:
Ensuring the Volvo Group gets the capabilities and support to enable business-appointed stakeholders to monitor and improve the quality of their data.
Establishing a continuous data quality improvement process.
Providing tools, trainings, and standards in the area of Data Quality.

What is expected from you?
Drive workshops with the business analysts to understand their business context and to collect their needs around data quality pain points.
Collaborate closely with Business Analysts and with business stakeholders to profile data, build Data Quality measurements, and implement Data Quality dashboards (a generic profiling sketch follows this listing).
Access, manipulate, query, and analyze data using different software and tools (IDMC, Databricks, SQL, etc.) and techniques.
Design, develop, and implement Data Quality pipelines leveraging Data Quality modules (Data Profiling, Data Quality, and Data Integration) in IDMC.
Conduct quality assurance activities to validate the effectiveness of Data Quality pipelines and ensure compliance with established data quality standards.
Perform thorough testing and validation of Data Quality processes, including unit testing, integration testing, performance testing and user acceptance testing.
Document Data Quality processes, including design specifications, testing procedures, and operational guidelines, to ensure clarity and maintainability.
Collaborate with domain data stewards and business stakeholders to define data quality requirements and establish metrics for measuring data quality.
Monitor and troubleshoot Data Quality pipelines, identifying and resolving issues to maintain optimal performance and reliability.
Implement best practices for data quality management, including data cleansing, enrichment, and transformation techniques.
Participate in continuous improvement initiatives to enhance data quality processes and tools.
Provide training and support to team members and stakeholders on Data Quality tools and methodologies.

Do you dream big? We do too, and we are excited to grow together. In this role, you will bring:
University degree, with a passion for data.
Hands-on experience with Informatica IDMC, specifically in the Data Profiling, Data Quality and Data Integration modules.
Minimum 4 years' experience with Informatica DQ modules.
Strong understanding of Data Quality concepts, including data profiling, cleansing, and validation.
Minimum 4-5 years on a DQ project.
Experience with ETL solutions.
Experience with quality assurance methodologies and testing frameworks.
Basic understanding of MS Azure.
Familiarity with Data Governance & Management principles.
Experience in agile setups, using DevOps or similar.
Strong analytical, problem-solving, and troubleshooting skills.
Good ability to link business needs and developments.
Excellent communication and collaboration skills.
Proficiency in English for international projects.
Nice to have: Experience developing with Power BI.
Work from office - all 5 days.
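The listing centres on Informatica IDMC, whose profiling and rule modules are configured rather than hand-coded. As a generic illustration of the underlying concepts (completeness, cardinality, and a simple validity rule), here is a minimal PySpark sketch on an invented dataset; the column names and the "mileage must be positive" rule are assumptions for the example, not IDMC code.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_profile_sketch").getOrCreate()

# Hypothetical dataset to profile; in practice this would be a governed table.
df = spark.createDataFrame(
    [("A-001", "SE", 120), ("A-002", "SE", None), ("A-003", None, 85)],
    "asset_id STRING, country STRING, mileage INT",
)

total = df.count()

# Column-level profiling: completeness and cardinality per column.
for col in df.columns:
    non_null = df.filter(F.col(col).isNotNull()).count()
    distinct = df.select(col).distinct().count()
    print(f"{col}: completeness={non_null / total:.0%}, distinct_values={distinct}")

# A simple declarative rule: mileage must be positive when present.
violations = df.filter(F.col("mileage").isNotNull() & (F.col("mileage") <= 0)).count()
print(f"rule 'mileage_positive' violations: {violations}")
```

In a tool like IDMC, the same checks would typically be defined as profiling tasks and DQ rules whose scores feed the dashboards mentioned above.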

Posted 1 week ago

Apply

0.0 - 10.0 years

0 Lacs

Mumbai, Maharashtra

On-site

Location Mumbai, Maharashtra, India Category Digital Technology Job ID: R147951 Posted: Jul 14th 2025 Job Available In 5 Locations Staff Technical Product Manager Are you excited about the opportunity to lead a team within an industry leader in Energy Technology? Are you passionate about improving capabilities, efficiency and performance? Join our Digital Technology Team! As a Staff Technical Product Manager, this position will operate in lock-step with product management to create a clear strategic direction for build needs, and convey that vision to the service’s scrum team. You will direct the team with a clear and descriptive set of requirements captured as stories and partner with the team to determine what can be delivered through balancing the need for new features, defects, and technical debt.. Partner with the best As a Staff Technical Product Manager, we are seeking a strong background in business analysis, team leadership, and data architecture and hands-on development skills. The ideal candidate will excel in creating roadmap, planning with prioritization, resource allocation, key item delivery, and seamless integration of perspectives from various stakeholders, including Product Managers, Technical Anchors, Service Owners, and Developers. Results - oriented leader, and capable of building and executing an aligned strategy leading data team and cross functional teams to meet deliverable timelines. As a Staff Technical Product Manager, you will be responsible for: Demonstrating wide and deep knowledge in data engineering, data architecture, and data science. Ability to guide, lead, and work with the team to drive to the right solution Engaging frequently (80%) with the development team; facilitate discussions, provide clarification, story acceptance and refinement, testing and validation; contribute to design activities and decisions; familiar with waterfall, Agile scrum framework; Owning and manage the backlog; continuously order and prioritize to ensure that 1-2 sprints/iterations of backlog are always ready. Collaborating with UX in design decisions, demonstrating deep understanding of technology stack and impact on final product. Conducting customer and stakeholder interviews and elaborate on personas. Demonstrating expert-level skill in problem decomposition and ability to navigate through ambiguity. Partnering with the Service Owner to ensure a healthy development process and clear tracking metric to form standard and trustworthy way of providing customer support Designing and implementing scalable and robust data pipelines to collect, process, and store data from various sources. Developing and maintaining data warehouse and ETL (Extract, Transform, Load) processes for data integration and transformation. Optimizing and tuning the performance of data systems to ensure efficient data processing and analysis. Collaborating with product managers and analysts to understand data requirements and implement solutions for data modeling and analysis. Identifying and resolving data quality issues, ensuring data accuracy, consistency, and completeness Implementing and maintaining data governance and security measures to protect sensitive data. Monitoring and troubleshoot data infrastructure, perform root cause analysis, and implement necessary fixes. Fuel your passion: To be successful in this role you will require: Have a Bachelor's or higher degree in Computer Science, Information Systems, or a related field. 
Have minimum 6-10 years of proven experience as a Data Engineer or similar role, working with large-scale data processing and storage systems. Have Proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, or Oracle). Have Extensive knowledge working with SAP systems, Tcode, data pipelines in SAP, Databricks related technologies. Have Experience with building complex jobs for building SCD type mappings using ETL tools like PySpark, Talend, Informatica, etc. Have Experience with data visualization and reporting tools (e.g., Tableau, Power BI). Have Strong problem-solving and analytical skills, with the ability to handle complex data challenges. Have Excellent communication and collaboration skills to work effectively in a team environment. Have Experience in data modeling, data warehousing, and ETL principles. Have familiarity with cloud platforms like AWS, Azure, or GCP, and their data services (e.g., S3, Redshift, BigQuery). Have advanced knowledge of distributed computing and parallel processing. Experience with real-time data processing and streaming technologies (e.g., Apache Kafka, Apache Flink). Knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes). Certification in relevant technologies or data engineering disciplines. Having working knowledge in Databricks, Dremio, and SAP is highly preferred. Work in a way that works for you We recognize that everyone is different and that the way in which people want to work and deliver at their best is different for everyone too. In this role, we can offer the following flexible working patterns (where applicable): Working flexible hours - flexing the times when you work in the day to help you fit everything in and work when you are the most productive Working with us Our people are at the heart of what we do at Baker Hughes. We know we are better when all our people are developed, engaged, and able to bring their whole authentic selves to work. We invest in the health and well-being of our workforce, train and reward talent, and develop leaders at all levels to bring out the best in each other. Working for you Our inventions have revolutionized energy for over a century. But to keep going forward tomorrow, we know we must push the boundaries today. We prioritize rewarding those who embrace challenge with a package that reflects how much we value their input. Join us, and you can expect: Contemporary work-life balance policies and wellbeing activities. About Us With operations in over 120 countries, we provide better solutions for our customers and richer opportunities for our people. As a leading partner to the energy industry, we're committed to achieving net-zero carbon emissions by 2050 and we're always looking for the right people to help us get there. People who are as passionate as we are about making energy safer, cleaner, and more efficient. Join Us Are you seeking an opportunity to make a real difference in a company with a global reach and exciting services and clients? Come join us and grow with a team of people who will challenge and inspire you! About Us: We are an energy technology company that provides solutions to energy and industrial customers worldwide. Built on a century of experience and conducting business in over 120 countries, our innovative technologies and services are taking energy forward – making it safer, cleaner and more efficient for people and the planet. 
Join Us: Are you seeking an opportunity to make a real difference in a company that values innovation and progress? Join us and become part of a team of people who will challenge and inspire you! Let’s come together and take energy forward. Baker Hughes Company is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law.
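The listing above asks for experience building SCD-type mappings with PySpark-style ETL tools. As a rough illustration of what a Slowly Changing Dimension Type 2 load involves, here is a minimal self-contained PySpark sketch: it expires the open row when a tracked attribute changes and inserts a new version, and it adds first rows for brand-new keys. The table shapes, key (cust_id), tracked attribute (city), and dates are all invented for the example; a production job on Databricks would more likely use a Delta Lake MERGE.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2_sketch").getOrCreate()

# Hypothetical current dimension: one open (is_current = true) row per key.
dim = spark.createDataFrame(
    [(1, "Alice", "Pune", "2024-01-01", None, True)],
    "cust_id INT, name STRING, city STRING, valid_from STRING, valid_to STRING, is_current BOOLEAN",
)
# Hypothetical incoming batch from the source system.
inc = spark.createDataFrame(
    [(1, "Alice", "Mumbai"), (2, "Bob", "Delhi")],
    "cust_id INT, name STRING, city STRING",
)
load_dt = "2024-06-01"

cur = dim.filter("is_current")

# Keys whose tracked attribute (city) changed in this batch.
changed = (
    cur.alias("d").join(inc.alias("i"), "cust_id")
       .filter(F.col("d.city") != F.col("i.city"))
       .select("cust_id")
)

# Open rows being superseded: close them out with an end date.
open_changed = cur.join(changed, "cust_id", "left_semi")
expired = (open_changed
           .withColumn("valid_to", F.lit(load_dt))
           .withColumn("is_current", F.lit(False)))

# Everything else in the dimension is carried forward unchanged.
kept = dim.exceptAll(open_changed)

# New versions for changed keys plus first versions for brand-new keys.
new_keys = inc.join(dim.select("cust_id").distinct(), "cust_id", "left_anti")
fresh = (
    inc.join(changed, "cust_id", "left_semi").unionByName(new_keys)
       .withColumn("valid_from", F.lit(load_dt))
       .withColumn("valid_to", F.lit(None).cast("string"))
       .withColumn("is_current", F.lit(True))
)

scd2 = kept.unionByName(expired).unionByName(fresh)
scd2.orderBy("cust_id", "valid_from").show()
```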

Posted 1 week ago

Apply

1.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Senior Data & Applied Scientist
Noida, Uttar Pradesh, India
Date posted: Jul 14, 2025
Job number: 1844835
Work site: Microsoft on-site only
Travel: None
Role type: Individual Contributor
Profession: Research, Applied, & Data Sciences
Discipline: Data Science
Employment type: Full-Time

Overview
Do you want to be on the leading edge of using big data and help drive engineering and product decisions for the biggest productivity software on the planet? Office Product Group (OPG) has embarked on a mission to delight our customers by using data-informed engineering to develop compelling products and services. OPG is looking for an experienced professional with a passion for delivering business value with data insights and analytics to join our team as a Data & Applied Scientist. We are looking for a strong Senior Data Scientist with a proven track record of solving large, complex data analysis problems in a real-world software product development setting. Ideal candidates should be able to take a business or engineering problem from a Product Manager or Engineering leader and translate it to a data problem. This includes all the steps to identify and deeply understand potential data sources, conduct the appropriate analysis to reveal actionable insights, and then operationalize the metrics or solution into Power BI dashboards. You will be delivering results through innovation and persistence when similar candidates have given up. Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Qualifications
Required Qualifications:
Doctorate in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field AND 1+ year(s) data-science experience (e.g., managing structured and unstructured data, applying statistical techniques and reporting results), OR
Master's Degree in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field AND 3+ years data-science experience (e.g., managing structured and unstructured data, applying statistical techniques and reporting results), OR
Bachelor's Degree in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field AND 5+ years data-science experience (e.g., managing structured and unstructured data, applying statistical techniques and reporting results), OR equivalent experience.
2+ years customer-facing, project-delivery experience, professional services, and/or consulting experience.
Preferred Qualifications:
7+ years of experience programming with Python/R and hands-on experience using technologies such as SQL, Kusto, Databricks, Spark, etc.
7+ years of experience working with data exploration and data visualization tools like Power BI or similar.
Candidate must be able to communicate complex ideas and concepts to leadership and deliver results.
Candidate must be comfortable manipulating and analyzing complex, high-dimensional data from varying sources to solve difficult problems.
Bachelor's or higher degree in Computer Science, Statistics, Mathematics, Physics, Engineering, or related disciplines.
Responsibilities
Dashboard Development and Maintenance:
Design, build, and maintain interactive dashboards and reports in Power BI to visualize key business metrics and insights.
Work closely with stakeholders to understand their data visualization needs and translate business requirements into technical specifications.
Data Extraction and Analysis:
Perform ad-hoc data extraction and analysis from various data sources, including SQL databases, cloud-based data storage solutions, and external APIs.
Ensure data accuracy and integrity in reporting and analysis.
Deliver high-impact analysis to diagnose and drive business-critical insights that guide product and business development.
Metric Development and Tracking:
Be the SME who understands the landscape of what data (telemetry) is and should be captured.
Advise feature teams on telemetry best practices to ensure business needs for data are met.
Collaborate with product owners and other stakeholders to define and track key performance indicators (KPIs) and other relevant metrics for business performance.
Identify trends and insights in the data to support decision-making processes.
User Journey and Funnel Analysis:
Assist product owners in mapping out user journeys and funnels to understand user behavior and identify opportunities for feature improvement (see the funnel sketch after this listing).
Develop and implement ML models to analyze user journeys and funnels.
Utilize a variety of techniques to uncover patterns in user behavior that can help improve the product.
Forecasting and Growth Analysis:
Support the forecasting of key results (KRs) and growth metrics through data analysis and predictive modeling.
Provide insights and recommendations to help drive strategic planning and execution.

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work:
Industry leading healthcare
Educational resources
Discounts on products and services
Savings and investments
Maternity and paternity leave
Generous time away
Giving programs
Opportunities to network and connect

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
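To make the funnel-analysis responsibility concrete, here is a minimal PySpark sketch that computes step-to-step conversion rates from event data. The events, user IDs, and step names ("open", "create_doc", "share") are invented toy data; real telemetry analysis would run at much larger scale (for example in Spark or Kusto) before the results are surfaced in Power BI.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("funnel_sketch").getOrCreate()

# Invented product telemetry events: (user_id, funnel_step).
events = spark.createDataFrame(
    [("u1", "open"), ("u1", "create_doc"), ("u1", "share"),
     ("u2", "open"), ("u2", "create_doc"),
     ("u3", "open")],
    "user_id STRING, step STRING",
)

funnel = ["open", "create_doc", "share"]

# Distinct users reaching each step of the funnel.
reached = {
    step: events.filter(F.col("step") == step).select("user_id").distinct().count()
    for step in funnel
}

# Step-to-step conversion rates.
for prev, nxt in zip(funnel, funnel[1:]):
    rate = reached[nxt] / reached[prev] if reached[prev] else 0.0
    print(f"{prev} -> {nxt}: {reached[nxt]}/{reached[prev]} users ({rate:.0%})")
```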

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies