
6216 Databricks Jobs - Page 37

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

We are looking for an enthusiastic Machine Learning Engineer to join our growing team. The hire will collaborate with data scientists and engineers across the organization to develop production-quality models for a variety of problems at Razorpay. Possible problems include: making recommendations to merchants from Razorpay's suite of products; cost optimisation of transactions for merchants; automatic address disambiguation and correction to enable tracking customer purchases using advanced natural language processing techniques; computer vision techniques for auto-verifications; running large-scale bandit experiments to optimize Razorpay's merchant-facing web pages at scale; and many more. In addition, we expect the MLE to be adept at productionising ML models using state-of-the-art systems. As part of the DS team at Razorpay, you'll work with some of the smartest engineers, architects, data scientists, and product leaders in the industry and have the opportunity to solve complex and critical problems for Razorpay. As a Senior MLE, you will also have the opportunity to partner with and be mentored by senior engineers across the organization and lay the foundation for a world-class DS team here at Razorpay. Come with the right attitude, and fun and growth are guaranteed!

Required qualifications:
- 5+ years of experience doing ML in a production environment and productionising ML models at scale
- Bachelor's (required) or Master's degree in a quantitative field such as computer science, operations research, statistics, mathematics, or physics
- Familiarity with basic machine learning techniques: regression, classification, clustering, and model metrics and performance (AUC, ROC, precision, recall and their various flavors)
- Basic knowledge of advanced machine learning techniques: regression, clustering, recommender systems, ranking systems, and neural networks
- Expertise in coding in Python, good knowledge of at least one language from C, C++, and Java, and at least one scripting language (e.g., Perl, shell scripting)
- Experience with big data tools like Spark and experience working with Databricks / DataRobot
- Experience with AWS's suite of tools for production-quality ML work, or alternatively familiarity with Microsoft Azure / GCP
- Experience deploying complex ML algorithms to production in collaboration with engineers using Flask, MLflow, Seldon, etc.

Good to have:
- Excellent communication skills and the ability to keep stakeholders informed of progress and blockers
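For illustration only, a minimal sketch of the model-evaluation and MLflow-logging workflow this kind of role describes; the dataset, experiment name, and decision threshold are hypothetical placeholders, not Razorpay's actual setup:

```python
# Minimal sketch: train a classifier, evaluate AUC/precision/recall,
# and log the model with MLflow. Data, experiment name, and the 0.5
# threshold are hypothetical placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

mlflow.set_experiment("merchant-recommendation-poc")  # hypothetical experiment name
with mlflow.start_run():
    model = GradientBoostingClassifier().fit(X_train, y_train)
    proba = model.predict_proba(X_test)[:, 1]
    preds = (proba >= 0.5).astype(int)
    mlflow.log_metric("auc", roc_auc_score(y_test, proba))
    mlflow.log_metric("precision", precision_score(y_test, preds))
    mlflow.log_metric("recall", recall_score(y_test, preds))
    mlflow.sklearn.log_model(model, "model")  # logged artifact ready for serving
```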

Posted 1 week ago

Apply

2.0 years

4 - 5 Lacs

India

On-site

Urgent Hiring

Job description

Role & responsibilities:
- Design and maintain scalable data pipelines using PySpark.
- Work with Databricks for seamless data workflows.
- Use Azure Data Factory (ADF) to orchestrate data movement.
- Collaborate with data scientists and analysts to meet data needs.
- Ensure data quality and troubleshoot any issues.

Preferred candidate profile:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience with PySpark, Databricks, and ADF.
- Familiarity with cloud platforms (Azure preferred).
- Strong SQL and NoSQL database skills.
- Excellent problem-solving abilities and a team-player attitude.

Perks and benefits:
- Competitive salary and benefits.
- Professional growth opportunities.
- Collaborative and innovative work environment.

Job Type: Full-time
Pay: ₹400,000.00 - ₹500,000.00 per year
Benefits: Flexible schedule
Experience: ETL: 2 years (Required); Azure: 2 years (Required); Databricks: 2 years (Required)
Work Location: In person
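For illustration, a minimal PySpark batch-pipeline sketch of the kind this role describes; all paths and column names are hypothetical:

```python
# Minimal sketch of a PySpark batch pipeline: ingest raw CSV,
# clean and derive a column, and write partitioned Parquet.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = (spark.read
       .option("header", True)
       .option("inferSchema", True)
       .csv("/mnt/raw/orders/"))          # hypothetical landing path

clean = (raw
         .dropDuplicates(["order_id"])
         .filter(F.col("amount") > 0)
         .withColumn("order_date", F.to_date("created_at")))

(clean.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("/mnt/curated/orders/"))        # hypothetical curated zone

spark.stop()
```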

Posted 1 week ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Job Title: Manager/Sr Manager – ETL - PySpark
Requisition ID:
Job Location: Pune

Job Summary: This role will be responsible for developing and maintaining data models to support data warehouse and reporting requirements. It requires a strong background in data engineering, excellent leadership capabilities, and the ability to drive projects to successful completion.

Job Responsibilities:
- Working experience in building Data Lake and DWH architecture using the Databricks platform
- Engage with the client to participate in requirement gathering, status updates on work, and UAT, and be the key partner in the overall engagement
- Participate in ETL design using any Python framework for new or changing mappings and workflows with the team, and prepare technical specifications
- Craft ETL mappings, mapplets, workflows, and worklets using Informatica PowerCenter
- Write complex SQL queries with performance tuning and optimization
- Handle tasks independently and lead the team
- Responsible for unit testing, integration testing, and UAT as and when required
- Good communication skills
- Coordinate with cross-functional teams to ensure project objectives are met
- Collaborate with data architects and engineers to design and implement data models
- Manage projects in a fast-paced agile ecosystem and ensure quality deliverables within stringent timelines
- Responsible for risk management, maintaining the risk documentation and mitigation plans
- Drive continuous improvement in a Lean/Agile environment, implementing DevOps delivery approaches encompassing CI/CD, build automation, and deployments
- Communication & logical thinking: demonstrates strong analytical skills, employing a systematic and logical approach to data analysis, problem-solving, and situational assessment; capable of effectively presenting and defending team viewpoints while securing buy-in from both technical and client stakeholders
- Handle client relationships: manage client relationships and client expectations independently, deliver results back to the client independently, and demonstrate excellent communication skills

Job Requirements:
- 7+ years of working experience in ETL and data warehousing
- Advanced knowledge of PySpark/Python and the pandas and NumPy frameworks
- Minimum 4 years of extensive experience in the design, build, and deployment of Spark/PySpark for data integration
- Deep experience in developing data processing tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations
- Create Spark jobs for data transformation and aggregation
- Spark query tuning and performance optimization; good understanding of different file formats (ORC, Parquet, Avro) and compression techniques to optimize queries/processing
- Deep understanding of distributed systems (e.g., CAP theorem, partitioning, replication, consistency, and consensus)
- Experience in modular programming and robust programming methodologies
- ETL knowledge and hands-on ETL development using any PySpark/Python framework
- Advanced SQL knowledge
- Ability to perform multiple tasks in a continually changing environment
- Prior work with Redshift/Synapse/Snowflake is preferable
- Good understanding of and experience in the SDLC phases: requirements specification, analysis, design, implementation, testing, deployment, and maintenance

Qualification: BE / B.Tech / M.Tech / MBA

Must-have skills:
- Expertise in the pharma commercial domain
- Proficiency in ETL using PySpark
- Strong experience in data warehousing

Skills that give you an edge:
- Experience in AWS or Azure cloud and its service offerings
- Excellent interpersonal/communication skills (both oral and written) with the ability to communicate at various levels with clarity and precision

We will provide (Employee Value Proposition):
- An inclusive environment that encourages diverse perspectives and ideas
- Challenging and unique opportunities to contribute to the success of a transforming organization
- Opportunity to work on technical challenges that may impact across geographies
- Vast opportunities for self-development: online Axtria Institute, knowledge sharing opportunities globally, and learning opportunities through external certifications
- Sponsored tech talks and hackathons
- Possibility of relocating to any Axtria office for short- and long-term projects
- Benefit package: health benefits, retirement benefits, paid time off, flexible benefits, hybrid / full-time office / remote

Axtria is an equal-opportunity employer that values diversity and inclusiveness in the workplace.

Who we are: Axtria 14 years journey | Axtria, Great Place to Work | Life at Axtria | Axtria Diversity
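A rough sketch of the PySpark merge-and-enrichment step described above, under assumed source paths, join keys, and columns (not the actual pipeline for this role):

```python
# Minimal sketch: read two sources, broadcast-join for enrichment,
# aggregate, and write compressed, partitioned Parquet.
# Paths, keys, and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sales-enrichment").getOrCreate()

sales = spark.read.parquet("/data/landing/sales/")          # hypothetical source
customers = spark.read.parquet("/data/landing/customers/")  # hypothetical source

enriched = (sales.join(F.broadcast(customers), "customer_id", "left")
            .groupBy("region", "product_id")
            .agg(F.sum("net_amount").alias("total_sales"),
                 F.countDistinct("customer_id").alias("unique_customers")))

(enriched.repartition("region")               # fewer small files per partition
 .write.mode("overwrite")
 .option("compression", "snappy")
 .partitionBy("region")
 .parquet("/data/warehouse/sales_by_region/"))

spark.stop()
```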

Posted 1 week ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

At bpost group, operational excellence is driven by smart data and scalable systems. As a leading logistics and e-commerce player in Belgium and beyond, we rely on data to balance network capacity, optimize yield, and ensure efficient service delivery across every step of our value chain. We are looking for an Operational Data Engineer to strengthen our Yield and Capacity Management team. In this hands-on role, you will design, develop, and maintain the data infrastructure that powers real-time decision-making and performance tracking across our operational landscape. Your work will directly support forecasting models, pricing intelligence, and capacity planning tools that are critical to both day-to-day efficiency and long-term profitability. If you thrive in high-impact environments, have a deep understanding of data engineering in operational contexts, and want to help shape the future of logistics through data, then we want to hear from you.

Role Summary: We are seeking a highly skilled and detail-oriented Data Engineer with a specialization in operational reporting and dashboarding. The ideal candidate will have 5–8 years of experience in designing, developing, and maintaining data pipelines and visual analytics solutions that empower decision-making. This role requires a solid foundation in data modeling, ETL development, and BI tools, along with the ability to work cross-functionally to deliver high-impact reporting solutions.

Key Responsibilities

Data Pipeline Development and Maintenance:
- Design, build, and optimize robust ETL pipelines to support operational reporting requirements
- Ensure data quality, consistency, and integrity across sources and reporting outputs
- Automate data ingestion from various internal and external systems

Reporting and Dashboarding:
- Develop and maintain dashboards and reports in BI tools (e.g., Power BI, Tableau, Looker)
- Collaborate with business stakeholders to translate requirements into effective visualizations
- Optimize dashboard performance and user experience through best practices

Data Modeling and Architecture:
- Create logical and physical data models that support scalable reporting solutions
- Participate in the design and implementation of data marts and operational data stores
- Work closely with data architects to align with the enterprise data strategy

Cross-Functional Collaboration:
- Partner with analysts, product managers, and operations teams to define reporting KPIs
- Ensure consistent definitions and calculations across different business units
- Support ad hoc analytical requests and provide technical insights when needed

Governance and Best Practices:
- Implement and advocate for data governance practices, including data cataloging and lineage
- Define and enforce reporting standards and data documentation
- Participate in peer code and dashboard reviews

Qualifications

Experience:
- 5–8 years of experience in data engineering or business intelligence engineering roles
- Proven track record in building scalable reporting systems and maintaining dashboards for operational use

Technical Skills:
- Strong SQL skills: able to write complex queries and understand database structures across various SQL dialects (e.g., Oracle, MySQL, PostgreSQL)
- Strong experience with Python and modern ETL frameworks (e.g., dbt, Apache Airflow)
- Understanding of data orchestration concepts and experience with Airflow (or similar tools like Prefect or Dagster)
- Proficiency in at least one BI tool (Power BI, Tableau, or Looker) or similar technology for dashboard and report development
- Knowledge of cloud data platforms (AWS Redshift, Google BigQuery, Databricks, Snowflake, or Azure Synapse)
- Familiarity with version control and CI/CD pipelines for data
- Exposure to or understanding of streaming data concepts, ideally with Kafka

Soft Skills:
- Excellent communication and stakeholder management skills
- Strong problem-solving capabilities and attention to detail
- Ability to manage multiple projects and meet tight deadlines

Preferred Skills:
- Experience with real-time data processing frameworks (e.g., Kafka, Spark Streaming)
- Exposure to data observability and monitoring tools
- Understanding of data privacy and compliance requirements (e.g., GDPR, HIPAA)
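For context, a minimal Airflow 2.x DAG sketch of the orchestration pattern this role references; the DAG id, tasks, and schedule are hypothetical placeholders:

```python
# Minimal sketch of a daily reporting refresh orchestrated with Airflow.
# Task bodies are placeholders; a real pipeline would call the actual
# extraction and table-refresh logic.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_orders():
    # placeholder: pull from a source system into staging
    print("extracting orders...")

def refresh_reporting_tables():
    # placeholder: rebuild the tables the dashboards read
    print("refreshing reporting tables...")

with DAG(
    dag_id="operational_reporting_refresh",   # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    refresh = PythonOperator(task_id="refresh_reporting", python_callable=refresh_reporting_tables)
    extract >> refresh                        # extraction must finish before the refresh
```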

Posted 1 week ago

Apply

3.0 years

0 Lacs

Goregaon, Maharashtra, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Manager

Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities:
- Proficiency with Microsoft Excel, Access, PowerPoint, Qlik Sense and SQL required.
- Develop & maintain Qlik Sense solutions: design, develop, and manage interactive dashboards, reports, and applications using Qlik Sense.
- Data modeling & governance: build and maintain data models to ensure accuracy, consistency, and integrity in reporting.
- SQL development: write and troubleshoot complex SQL queries for data extraction, transformation, and analysis.
- Qlik Sense administration: manage Qlik Sense environments, ensuring optimal performance, security, and access control.
- Requirement gathering: work closely with business stakeholders to understand requirements and translate them into BI solutions.
- Automation & reporting: implement automated reporting solutions using NPrinting and alerting features to improve efficiency.
- Agile & Kanban execution: lead BI projects using Agile methodologies, ensuring timely delivery and iterative improvements.
- Training & mentorship: conduct user training sessions, support business teams in utilizing Qlik Sense effectively, and mentor junior analysts.
- Collaboration with leadership: engage with technical and business leaders to refine BI solutions and enhance data-driven decision-making.

Requirements:
- 3-6 years of experience in Qlik Sense development and administration.
- Expertise in Qlik Sense with a strong understanding of data visualization and BI best practices.
- Strong SQL skills for query development and troubleshooting.
- Deep understanding of data modeling, data governance, and data warehousing concepts.
- Experience working in Agile environments (Kanban preferred).
- Ability to gather business requirements and translate them into actionable BI solutions.
- Excellent problem-solving and analytical skills with an innovative mindset.
- Strong communication skills to collaborate with business and technical teams effectively.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 3-6 years of experience in Qlik Sense development and data visualization, preferably within the manufacturing sector.
- Strong proficiency in data modeling, scripting, and data integration within Qlik Sense.
- Experience with SQL and relational databases, particularly those related to manufacturing data.
- Solid understanding of data warehousing concepts and business intelligence tools.
- Excellent analytical and problem-solving skills, with the ability to translate procurement data into insights.
- Strong communication and interpersonal skills to work effectively with stakeholders in production, operations, and supply chain.
- Ability to manage multiple projects and deliver results within deadlines.

Mandatory skill sets ('must have' knowledge, skills and experiences): MS Excel, Qlik Sense, SQL
Preferred skill sets ('good to have' knowledge, skills and experiences): Statistical analysis, SAP Analytics
Years of experience required: 6 to 9 years of relevant experience
Education qualification: BE, B.Tech, ME, M.Tech, MBA, MCA (60% and above)

Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Master of Engineering, Master of Business Administration, Bachelor of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Structured Query Language (SQL)
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 32 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date

Posted 1 week ago

Apply

0 years

0 Lacs

India

On-site

Overview / Description: Senior Full Stack Developer

Position Overview: We are seeking a highly skilled Full Stack Developer to join our dynamic team within Global Trading at ExxonMobil. The ideal candidate will possess a robust understanding of both front-end and back-end development, with a strong emphasis on creating and maintaining scalable, high-performance applications. This role requires a professional who can seamlessly integrate into our team, contributing to the development of innovative solutions that drive our trading operations.

To be eligible for this role, you must be able to demonstrate:
• Strong communication and interpersonal skills
• Ability to collaborate effectively with internal and external customers
• Innovative and analytical thinking
• Ability to manage workload under time pressure and changing priorities
• Adaptability and willingness to learn new technologies and methodologies

Required Skills and Qualifications:
• Technical proficiency:
• Expert front-end React framework and back-end Python experience
• Proficiency in front-end technologies such as HTML and CSS, and strong back-end development skills in Python or similar languages
• Proficient in Git and CI/CD
• Develop and maintain web applications using modern frameworks and technologies
• Help maintain code quality, organization, and automation
• Experience with relational database management systems
• Familiarity with cloud services (AWS, Azure, or Google Cloud – primarily Azure)
• Industry knowledge:
• Experience in the oil and gas industry, particularly within trading operations, is highly desirable
• Understanding of market data, trading systems, and financial instruments related to oil and gas

Preferred Qualifications:
• Certifications in relevant technologies or methodologies
• Proven experience in building, operating, and supporting robust and performant databases and data pipelines
• Experience with Databricks and Snowflake
• Solid understanding of web performance optimization, security, and best practices

Posted 1 week ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description

You're ready to gain the skills and experience needed to grow within your role and advance your career — and we have the perfect software engineering opportunity for you. As a Software Engineer II at JPMorganChase within Corporate Technology, you are part of an agile team that works to enhance, design, and deliver the software components of the firm's state-of-the-art technology products in a secure, stable, and scalable way. As an emerging member of a software engineering team, you execute software solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role.

Job Responsibilities:
- Support review of controls to ensure sufficient protection of enterprise data
- Advise on and make custom configuration changes in one to two tools to generate a product at the business or customer request
- Update logical or physical data models based on new use cases
- Frequently use SQL and understand NoSQL databases and their niche in the marketplace
- Develop the required data pipelines for moving data from on-prem to AWS/cloud platforms
- Perform user acceptance testing and deliver demos to stakeholders using SQL queries or Python scripts
- Perform data analysis to define and support model development, including metadata and data dictionary documentation that will enable data analysis and analytical exploration
- Participate in strategic projects and provide ideas and input on ways to leverage quantitative analytics to generate actionable business insights and/or solutions that influence business strategies and identify opportunities to grow
- Partner closely with business partners to identify impactful projects, influence key decisions with data, and ensure client satisfaction
- Add to a team culture of diversity, equity, inclusion, and respect
- Work on innovative solutions using modern technologies and products to enhance customer experiences

Required Qualifications, Capabilities, and Skills:
- Formal training or certification in software engineering concepts and 2+ years of applied experience
- Hands-on development experience and knowledge of cloud, preferably AWS
- Hands-on experience in migrating relational databases to NoSQL/big data in the cloud
- Experience across the data lifecycle
- Advanced SQL (e.g., joins and aggregations, SQL analytical functions); hands-on experience handling JSON data in SQL
- Working understanding of NoSQL databases like TigerGraph, MongoDB, or any other NoSQL DB
- Hands-on experience building a big data warehouse using applications
- Hands-on experience with cloud computing (AWS)
- Experience with query processing and tuning reports
- Experience with ETL and processing real-time data
- Experience with big data technologies like PySpark

Preferred Qualifications, Capabilities, and Skills:
- Databricks experience of 1-2 years
- PySpark experience of 3-4 years
- ETL, data warehouse, and lakehouse experience of 3-4 years

About Us: JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world's most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management.

We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

About The Team: Our Corporate Technology team relies on smart, driven people like you to develop applications and provide tech support for all our corporate functions across our network. Your efforts will touch lives all over the financial spectrum and across all our divisions: Global Finance, Corporate Treasury, Risk Management, Human Resources, Compliance, Legal, and within the Corporate Administrative Office. You'll be part of a team specifically built to meet and exceed our evolving technology needs, as well as our technology controls agenda.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Experience: 8-11 years

Job Summary: We are looking for an experienced and visionary Lead Data Engineer to architect and drive the development of scalable, secure, and high-performance data solutions. This role requires deep technical expertise in Python, Apache Spark, Delta Lake, and orchestration tools like Databricks Workflows or Azure Data Factory. The ideal candidate will also bring a strong understanding of data governance, metadata management, and regulatory compliance in the insurance and financial services domains. You should be proficient in developing Python applications and Spark-based workflows, leveraging Delta Lake, and orchestrating jobs using Databricks Workflows or Azure Data Factory, with a good grasp of turning retention metadata, business rules, and data governance policies into reusable pipelines, and a strong understanding of data privacy, security, and regulatory needs in the insurance and financial domains.

Key Responsibilities:
- Lead the design and architecture of end-to-end data engineering solutions across cloud platforms.
- Develop and oversee robust data pipelines and ETL workflows using Python and Apache Spark.
- Architect and implement scalable Delta Lake solutions for structured and semi-structured data.
- Orchestrate complex workflows using Databricks Workflows or Azure Data Factory.
- Translate business rules, retention metadata, and data governance policies into reusable, modular, and scalable pipeline components.
- Ensure adherence to data privacy, security, and compliance standards (e.g., GDPR, HIPAA).
- Mentor and guide junior data engineers, fostering best practices in coding, testing, and deployment.
- Collaborate with cross-functional teams, including data architects, analysts, and business stakeholders, to align data solutions with business goals.
- Drive performance optimization, cost-efficiency, and innovation in data engineering practices.

Required Skills & Qualifications:
- 8+ years of experience in data engineering, with at least 2 years in a lead or architect role.
- Expert-level proficiency in Python, Apache Spark, and Delta Lake.
- Strong experience with Databricks Workflows and/or Azure Data Factory.
- Deep understanding of data governance, metadata management, and business rule integration.
- Proven track record in implementing data privacy, security, and regulatory compliance in insurance or financial domains.
- Strong leadership, communication, and stakeholder management skills.
- Experience with cloud platforms such as Azure, AWS, or GCP.

Preferred Qualifications:
- Experience with CI/CD pipelines and DevOps practices in data engineering.
- Familiarity with data cataloging and data quality tools.
- Certifications in Azure Data Engineering or related technologies.
- Exposure to enterprise data architecture and modern data stack tools.
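As a rough illustration of the Delta Lake work described above, a minimal PySpark upsert sketch using the open-source delta-spark API; the table path, key column, and staging source are hypothetical, not the actual solution this role maintains:

```python
# Minimal sketch: incremental upsert (merge) of a daily staging extract
# into a Delta table, as typically run on Databricks. Paths and the
# policy_id key are hypothetical placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("policy-upsert").getOrCreate()

updates = spark.read.parquet("/mnt/staging/policies_daily/")   # hypothetical staging data
target = DeltaTable.forPath(spark, "/mnt/delta/policies")       # hypothetical Delta table

(target.alias("t")
 .merge(updates.alias("s"), "t.policy_id = s.policy_id")
 .whenMatchedUpdateAll()        # overwrite changed records
 .whenNotMatchedInsertAll()     # append new records
 .execute())
```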

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Data Engineer
Work Mode: Hybrid (3 days from office – only 4 hours onsite per day)
Location: Gurgaon

About the Role: BayOne is looking for a skilled Data Engineer to join our dynamic team in Gurgaon. This hybrid role offers flexibility, with just 4 hours per day required in-office, 3 days a week. If you're passionate about building scalable data solutions using Azure and Databricks and thrive in a fast-paced environment, we'd love to hear from you.

Key Responsibilities:
- Design and build scalable data pipelines and data lake/warehouse solutions on Azure and Databricks.
- Work extensively with SQL, schema design, and dimensional data modeling.
- Develop and maintain ETL/ELT processes using tools like ADF, Talend, Informatica, etc.
- Leverage Azure Synapse, Azure SQL, Snowflake, Redshift, or BigQuery to manage and optimize data storage and retrieval.
- Utilize Spark, PySpark, and Spark SQL for big data processing.
- Collaborate cross-functionally to gather requirements, design solutions, and implement best practices in data engineering.

Required Qualifications:
- Minimum 5 years of experience in data engineering, data warehousing, or data lake technologies.
- Strong experience on the Azure cloud platform (preferred over others).
- Proven expertise in SQL, data modeling, and data warehouse architecture.
- Hands-on experience with Databricks and Spark, and proficient programming in PySpark/Spark SQL.
- Experience with ETL/ELT tools such as Azure Data Factory (ADF), Talend, or Informatica.
- Strong communication skills and the ability to thrive in a fast-paced, dynamic environment.
- Self-motivated, independent learner with a proactive mindset.

Nice-to-Have Skills:
- Knowledge of Azure Event Hub, IoT Hub, Stream Analytics, Cosmos DB, and Azure Analysis Services.
- Familiarity with SAP ECC, S/4HANA, or HANA data sources.
- Intermediate skills in Power BI, Azure DevOps, CI/CD pipelines, and cloud migration strategies.

About BayOne: BayOne is a 12-year-old software consulting company headquartered in Pleasanton, California. We specialize in Talent Solutions, helping clients build diverse and high-performing teams. Our mission is to #MakeTechPurple by driving diversity in tech while delivering cutting-edge solutions across project & program management, cloud & IT infrastructure, big data & analytics, software & quality engineering, and user experience design.

Join us to shape the future of data-driven decision-making while working in a flexible and collaborative environment.

Posted 1 week ago

Apply

10.0 - 15.0 years

27 - 30 Lacs

Nagpur, Pune

Work from Office

10+ years in ETL / Data Engineering
Strong in SQL, Python, Unix Shell, PL/I
Tools: DataStage, Informatica, Databricks, Talend
Experience with AWS, Spark, Hadoop
Certifications (CSM, CSPO) and an M.Tech in Data Science are a plus

Required candidate profile: Strong in SQL, Python, Unix Shell, PL/I; certifications (CSM, CSPO) and an M.Tech in Data Science are a plus.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Mercer Marsh Benefits

Mercer Marsh Benefits is seeking candidates to support its vision of driving value to clients through data- and analytics-based insights. The following position is based in Mumbai.

Mercer Marsh Benefits Analytics – Senior Principal Engineer / Manager - Data Quality Engineer

Mercer Marsh Benefits™ (MMB) is part of the Marsh McLennan family, bringing together a broad spectrum of expertise to help clients navigate the complex world of people risks, cost management and employee benefits. MMB is a global leader in the health and benefits marketplace. Operating in 135 countries, our team of specialists design benefits solutions that meet the needs of businesses and their people, drawing from global intelligence and adapting that wealth of experience to local markets. Mercer Marsh Benefits Analytics is the specialized and technologically advanced data analytics outfit curated to provide data-driven insights to our clients in the health and employee benefits space.

What can you expect?
- Joining a rapidly growing organization with an environment that fosters personal and professional development
- Opportunity to learn new tools and technology
- Participate in building a solid data-driven foundation for the organization
- Work on breakthrough data and analytics products and projects that will create significant value for our clients
- Opportunity to make a difference to the world by being a part of the health and benefits industry
- A chance to work with industry leaders and global clients, with access to the latest trends in the industry

What is in it for you? Discover what's great about working at Marsh and McLennan Companies – from the opportunities that our size brings, to our commitment to our communities and understanding the benefits you'll receive. We are four businesses with one purpose: building the confidence to thrive through the power of perspective. As a global leader in insurance broking and risk management, we are devoted to finding diverse individuals who are committed to the success of our clients and our organization. Joining us will provide a solid foundation for you to accelerate your career in the risk and insurance industry. We can promise you extraordinary challenges, extraordinary colleagues, and the opportunity to make a difference. Our rich history has created a client service culture that we believe is second to none. Our commitments to Diversity and Inclusion, Corporate Social Responsibility, and sustainability demonstrate our commitment to stand for what is right. As a Marsh and McLennan Company colleague, you will also receive additional benefits such as:
- A competitive salary
- Employee-friendly policies
- Health care and insurance for you and your dependents
- Healthy work-life balance
- A great working environment
- Flexible benefits packages to suit your needs and lifestyle
- Future career opportunities across a global organization

We will count on you to bring your experience to help mature data quality offerings and lead data quality initiatives across the MMB business. This role will report to the MMB Analytics Data Governance Leader. Roles and responsibilities will include:

Data Quality Management:
- Support the team in implementing and driving an end-to-end data quality framework and operating model using relevant DQ tools and capabilities.
- Help the business drive governance associated with data quality by enforcing policies and standards at both technical and functional levels.
- Understand the data landscape, data flows and respective regulatory requirements throughout the organization, working with regions to overcome bottlenecks.
- Implement strategies to improve data quality for various analytics products and platforms using methods such as data profiling, data cleansing, data enrichment and data validation.
- Work with the MMB business to understand and build data quality rules to address end-to-end business and data governance requirements.

Data Quality Monitoring & Issue Remediation:
- Develop solutions for automated DQ checks and threshold alerts.
- Establish data quality metrics and data quality audits; support issue identification, root-cause analysis (RCA) and remediation.
- Use data profiling techniques to identify DQ issues, leveraging data quality dimensions.
- Provide recommendations and guidelines for periodic data quality checks and drive governance to ensure long-term data trust and delivery of meaningful analytics.
- Create scorecards and a reporting process to support data governance councils in monitoring progress and tracking blockers; track KPIs around DQ dimensions.

Data Quality Implementation:
- Support projects with end-to-end DQ rules implementation and development activities.

Collaboration:
- Collaborate with data engineers, data analysts, and data scientists to ensure data quality across all stages of the data lifecycle.
- Develop relationships with stakeholders in various markets (APAC, UK & Europe, Latin America, Middle East & Canada) to drive and enforce data quality initiatives.
- Work with data stewards (business and technical) to assess and apply the latest developments in data management and standards.
- Maintain MMB data and its metadata; bring clarity to what data means, who owns it, where it is stored, and what its quality is.
- Assist with data classification, data retention and disposal policies; define and translate data security requirements into data policies and rules to meet data privacy requirements.
- Support stakeholders in improving data literacy and advocate adoption of data governance and data quality management.

What you need to have:
- 8+ years of hands-on experience in data profiling, DQ rules implementation, DQ issue management, data quality management, DQ metrics reporting and automation activities.
- Proven experience as a Data Quality Team Lead, Data Quality Engineer or similar role.
- Master's/bachelor's degree in information sciences/engineering, or equivalent.
- Strong experience in any enterprise data quality and cataloging tool preferred (Informatica IDQ (Informatica Data Quality), IDMC CDI/CDQ, CDQ (Cloud Data Quality), CDI (Cloud Data Integration), IICS, Talend, Databricks, etc.).
- Familiarity with AWS, Azure, or GCP and associated DQ capabilities; understanding of cloud-based data storage and data pipeline architecture.
- Familiarity with languages such as SQL, Python, R, etc.
- AI/ML: a basic understanding of AI/ML algorithms can help in building predictive models for data quality.
- Excellent written and verbal communication skills with demonstrated experience working with international stakeholders.

What makes you stand out:
- Strong functional and technical skills in data quality management
- Experience in the healthcare/insurance industry
- Experience building relationships with stakeholders across the globe
- Demonstrated experience executing data governance processes and improving data quality for analytics through use of the right tools and capabilities

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job ID: Azu-ETP-Pun-1090
Location: Pune, Other

The Role: As a Lead Data Engineer specializing in Databricks, you will be a key player in designing, developing, and optimizing our next-generation data platform. You will lead a team of data engineers, providing technical guidance and mentorship and ensuring scalable, high-performance data solutions.

Key Responsibilities

Technical Leadership:
- Lead the design, development, and implementation of scalable and reliable data pipelines using Databricks, Spark, and other relevant technologies.
- Define and enforce data engineering best practices, coding standards, and architectural patterns.
- Provide technical guidance and mentorship to junior and mid-level data engineers.
- Conduct code reviews and ensure the quality, performance, and maintainability of data solutions.

Databricks Expertise:
- Architect and implement data solutions on the Databricks platform, including Databricks Lakehouse, Delta Lake, and Unity Catalog.
- Optimize Spark workloads for performance and cost efficiency on Databricks.
- Develop and manage Databricks notebooks, jobs, and workflows.
- Proficiently use Databricks features such as Delta Live Tables (DLT), Photon, and SQL Analytics.

Pipeline Development & Operations:
- Develop, test, and deploy robust ETL/ELT pipelines for data ingestion, transformation, and loading from various sources (e.g., relational databases, APIs, streaming data).
- Implement monitoring, alerting, and logging for data pipelines to ensure operational excellence.
- Troubleshoot and resolve complex data-related issues.

Collaboration & Communication:
- Work closely with cross-functional teams, including product managers, data scientists, and software engineers.
- Communicate complex technical concepts clearly to both technical and non-technical stakeholders.
- Stay updated on industry trends and emerging technologies in data engineering and Databricks.

Primary Skills:
- Extensive hands-on experience with the Databricks platform, including Databricks Workspace, Spark on Databricks, Delta Lake, and Unity Catalog.
- Strong proficiency in optimizing Spark jobs and understanding Spark architecture.
- Experience with Databricks features like Delta Live Tables (DLT), Photon, and Databricks SQL Analytics.
- Deep understanding of data warehousing concepts, dimensional modeling, and data lake architectures.

Posted 1 week ago

Apply

3.0 - 4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job ID: Azu-ETP-Pun-1087
Location: Pune, Other

Job Summary: We are seeking a skilled and experienced Azure Data Engineer to join our dynamic team. The ideal candidate will be responsible for designing, implementing, and maintaining data solutions on the Azure cloud platform. The candidate should have a strong background in data engineering, including ETL processes, data warehousing, and database management. The role requires expertise in Azure services such as Azure Data Factory, Azure SQL Database, Azure Databricks, Microsoft Fabric and other relevant technologies. The Azure Data Engineer will collaborate with cross-functional teams to ensure the successful delivery of scalable and efficient data solutions. Minimum 3-4 years of experience with data integration platforms on the cloud (Azure preferred).

Responsibilities:
- Solution design and architecture: Collaborate with stakeholders to understand business requirements and design scalable data solutions on the Azure platform. Create and maintain data architecture diagrams and documentation.
- Data ingestion and ETL: Develop and implement ETL processes using Azure Data Factory, Azure Databricks, and other relevant tools. Design and optimize data pipelines to ensure efficient data processing and transformation.
- Data storage and management: Implement and manage Azure SQL Database, Azure Data Lake Storage, and other Azure data storage solutions. Ensure data integrity, security, and compliance with industry best practices and organizational standards.
- Data transformation and processing: Utilize Azure Databricks and other tools for data transformation and processing. Optimize queries and data processing workflows for performance and efficiency.
- Monitoring and optimization: Implement monitoring solutions to track the performance and health of data solutions. Identify and resolve performance bottlenecks and optimize data processing workflows.
- Collaboration and communication: Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders. Communicate effectively with technical and non-technical audiences, providing insights and recommendations.
- Automation and scripting: Develop scripts and automation processes to streamline data engineering tasks. Implement infrastructure as code (IaC) for managing Azure resources.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Data Engineer with a focus on Azure cloud technologies.
- In-depth knowledge of Azure Data Factory, Azure SQL Database, Azure Databricks, and other relevant Azure services.
- Strong proficiency in SQL, Python, and/or other scripting languages.
- Experience with data modeling, schema design, and database performance tuning.
- Familiarity with data warehousing concepts and best practices.
- Excellent problem-solving and analytical skills.
- Effective communication and collaboration skills.
- Relevant certifications such as Microsoft Certified: Azure Data Engineer Associate are a plus.

If you meet these qualifications and are excited about the opportunity to work with cutting-edge Azure technologies to solve complex data challenges, we encourage you to apply. Join our team and be a key contributor to the success of our data-driven initiatives.

Posted 1 week ago

Apply

3.0 years

0 Lacs

India

Remote

Ready to embark on a journey where your growth is intertwined with our commitment to making a positive impact? Join the Delphi family – where Growth Meets Values.

At Delphi Consulting Pvt. Ltd., we foster a thriving environment with a hybrid work model that lets you prioritize what matters most. Interviews and onboarding are conducted virtually, reflecting our digital-first mindset. We specialize in Data, Advanced Analytics, AI, Infrastructure, Cloud Security, and Application Modernization, delivering impactful solutions that drive smarter, more efficient futures for our clients.

About the Role: We are looking for a highly skilled Machine Learning Engineer with a strong MLOps focus and hands-on experience with Databricks to join our dynamic team. The ideal candidate will play a key role in managing end-to-end ML pipelines, deploying models in a production-grade Databricks environment, and ensuring smooth collaboration across a remote, cross-functional team.

What you'll do:
- Design, develop, and manage ML pipelines within the Databricks ecosystem.
- Deploy, monitor, and retrain ML models in production.
- Work with Python, PySpark, Spark, and SQL for large-scale data processing and model development.
- Build and manage CI/CD pipelines in Azure DevOps for seamless model and code deployments.
- Develop and maintain LLM agents using diverse data sources, including databases and knowledge graphs, in a Databricks environment.
- Collaborate closely with remote data engineers, backend developers, and frontend developers to integrate ML models into applications.
- Perform code reviews and contribute to code quality standards.
- Create clear and concise documentation to track pipeline and model performance.

What you'll bring:
- 2–3 years of hands-on experience with Databricks, including ML and data pipeline management.
- 1–2 years of experience building AI/LLM agents that connect with Databricks.
- Strong proficiency in Python, SQL, and PySpark/Spark.
- 3+ years of experience with Git for version control and collaboration.
- 1–3 years of experience deploying machine learning models in production environments using Databricks.
- 2+ years of DevOps experience, ideally in cloud environments.

What We Offer: At Delphi, we are dedicated to creating an environment where you can thrive, both professionally and personally. Our competitive compensation package, performance-based incentives, and health benefits are designed to ensure you're well supported. We believe in your continuous growth and offer company-sponsored certifications, training programs, and skill-building opportunities to help you succeed. We foster a culture of inclusivity and support, with remote work options and a fully supported work-from-home setup to ensure your comfort and productivity. Our positive and inclusive culture includes team activities and wellness and mental health programs to ensure you feel supported.

Posted 1 week ago

Apply

8.0 years

132 - 180 Lacs

Hyderabad, Telangana, India

Remote

Location: Offshore (Onsite Delivery Model)
Number of Roles: 3
Experience: 5–8 years
Start Date: Immediate or within 15 days
Employment Type: Contract / Full-time (as applicable)

Job Overview: We are seeking highly skilled GIS Data Engineers with strong expertise in Python, PySpark, and geospatial data processing to support the integration and analysis of spatial datasets. The selected candidates will work on large-scale data engineering projects in a distributed environment, focusing on building pipelines and workflows that support geospatial analytics and visualization.

Key Responsibilities:
- Develop and maintain scalable ETL pipelines using PySpark and Python for processing GIS and spatial datasets
- Work with large geospatial datasets from various sources (e.g., shapefiles, GeoJSON, raster formats, satellite data)
- Integrate geospatial data with enterprise systems (e.g., PostgreSQL/PostGIS, big data platforms)
- Optimize queries and data transformations for performance and accuracy
- Collaborate with data scientists, GIS analysts, and business users to deliver data solutions
- Implement best practices for data quality, versioning, and lineage in spatial data pipelines
- Troubleshoot data issues and support production data workflows

Required Skills:
- Strong hands-on experience with PySpark (Apache Spark with Python)
- Advanced Python programming skills, especially in data handling and automation
- Experience with geospatial libraries such as GeoPandas, Shapely, Fiona, Rasterio, or GDAL
- Proficiency in working with GIS formats (shapefiles, raster data, GeoTIFFs, KML, etc.)
- Knowledge of spatial databases like PostGIS or GeoServer
- Hands-on experience with data lakes, big data processing, and cloud platforms (Azure/AWS/GCP preferred)
- Strong understanding of data structures, algorithms, and spatial indexing

Nice To Have:
- Familiarity with MapReduce, Hive, or Databricks
- Prior experience in geospatial analytics, remote sensing, or urban planning domains
- Knowledge of DevOps, CI/CD pipelines, and containerization tools (Docker, Kubernetes)

Soft Skills:
- Excellent communication and collaboration abilities
- Problem-solving mindset and attention to detail
- Ability to work independently in a client-facing onsite role

Skills: PostGIS, Rasterio, Azure, GCP, Shapely, Fiona, big data platforms, PySpark, GIS, GeoPandas, GDAL, AWS, Python
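For illustration, a minimal GeoPandas sketch of the kind of spatial processing listed above; file names, layers, and columns are hypothetical:

```python
# Minimal sketch: load a polygon layer and a point layer, align their
# coordinate reference systems, spatially join points to polygons,
# and write the result. All file names are hypothetical placeholders.
import geopandas as gpd

zones = gpd.read_file("planning_zones.geojson")   # polygon layer (hypothetical)
sites = gpd.read_file("sensor_sites.shp")         # point layer (hypothetical)

# Reproject the points to match the polygon layer before the join
sites = sites.to_crs(zones.crs)

# Attach zone attributes to every site that falls within a zone
sites_with_zone = gpd.sjoin(sites, zones, how="left", predicate="within")

sites_with_zone.to_file("sites_with_zone.gpkg", driver="GPKG")
```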

Posted 1 week ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description: We are seeking an experienced Partner Marketing Manager to drive high-impact, joint marketing initiatives with our strategic partners — Databricks, Snowflake, Google Cloud, and Microsoft Azure. You'll own end-to-end partner marketing plans that increase brand visibility, accelerate pipeline, and drive revenue growth across Cloud, Data & Analytics, AI/ML, and Generative AI solution areas. This is a global role requiring strong collaboration across internal stakeholders, partner teams, and regional field marketing leads to deliver campaigns that resonate in the US, EMEA, and APJ regions. You should be excited about working in a fast-paced, collaborative environment, and comfortable with flexible hours to align with global teams.

Role Exposure: Global (preferred working flexibility across APJ, EMEA, and US markets)
Job Type: Full-Time
Experience: 7+ years in B2B IT business, with a minimum of 3 years in partner marketing
Reports To: Head of Partner Marketing / Marketing Director

Key Responsibilities:
- Develop and execute integrated partner marketing plans aligned to joint business goals related to demand generation and pipeline influence with Databricks, Snowflake, Google Cloud, and Microsoft Azure.
- Build and manage joint go-to-market (GTM) initiatives, including solution messaging, campaign assets, landing pages, case studies, sales plays, and enablement collateral.
- Partner closely with sales, alliances, product marketing, and field marketing teams to drive awareness, demand generation, and pipeline acceleration.
- Plan and execute joint events, webinars, roundtables, digital campaigns, and thought leadership content to engage target audiences.
- Manage co-branded marketing activities and partner MDF (Market Development Funds), and report on ROI and key metrics.
- Cultivate and maintain strong relationships with partner marketing counterparts to unlock new co-marketing opportunities and amplify joint wins.
- Ensure partner branding and messaging guidelines are met across all touchpoints.
- Monitor market trends and competitor activities to help shape differentiated messaging and positioning.
- Collaborate with analysts, PR, and social teams to maximize partner-related announcements, awards, and press releases.

Key Requirements:
- 7+ years of experience in B2B IT marketing, with at least 3 years dedicated to partner marketing in the Cloud, Data & Analytics, AI/ML, or GenAI domains.
- Proven experience managing partnerships with at least two of Databricks, Snowflake, Google Cloud, or Microsoft Azure.
- Strong understanding of cloud ecosystem GTM models, co-selling motions, and MDF utilization.
- Experience planning and executing integrated marketing campaigns and events across APJ, EMEA, and US markets.
- Excellent project management, communication, and stakeholder management skills.
- Comfortable working cross-functionally in a global environment and adjusting to different time zones as needed.
- Ability to analyze and report on marketing KPIs and make data-driven decisions.
- Strong written and verbal communication skills with the ability to develop high-quality content.
- MBA in Marketing, Business, or a related field.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

As an AWS Data Engineer at Quest Global, you will be responsible for designing, developing, and maintaining data pipelines while ensuring data quality and integrity within the MedTech industry. Your key responsibilities will include designing scalable data solutions on the AWS cloud platform, developing data pipelines using Databricks and PySpark, collaborating with cross-functional teams to understand data requirements, optimizing data workflows for improved performance, and ensuring data quality through validation and testing processes.

To be successful in this role, you should have a Bachelor's degree in Computer Science, Engineering, or a related field, along with at least 6 years of experience as a Data Engineer with expertise in AWS, Databricks, PySpark, and S3. You should possess a strong understanding of data architecture, data modeling, and data warehousing concepts, as well as experience with ETL processes, data integration, and data transformation. Excellent problem-solving skills and the ability to work in a fast-paced environment are also essential.

In terms of required skills and experience, you should have experience in implementing cloud-based analytics solutions in Databricks (AWS) and S3, scripting experience in building data processing pipelines with PySpark, and knowledge of the data platform and cloud (AWS) ecosystems. Working experience with AWS native services such as DynamoDB, Glue, MSK, S3, Athena, CloudWatch, Lambda, and IAM is important, as is expertise in ETL development, analytics application development, and data migration. Exposure to all stages of the SDLC, strong SQL development skills, and proficiency in Python and PySpark development are also desired. Additionally, experience in writing unit test cases using PyTest or similar tools would be beneficial.

If you are a talented AWS Data Engineer looking to make a significant impact in the MedTech industry, we invite you to apply for this exciting opportunity at Quest Global.

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You are a highly skilled Full Stack Developer with at least 8 years of relevant experience. You will be joining our dynamic team, and your role will require a deep understanding of both front-end and back-end development. Your focus will be on creating and maintaining scalable, high-performance applications. You should have expertise in the front-end React framework and back-end Python. Proficiency in front-end technologies such as HTML and CSS, together with strong back-end development skills, is mandatory. Additionally, familiarity with SQL is required.

Your work location will be either Chennai or Bangalore, with working hours from 2:00 PM to 11:00 PM. The interview process will consist of 3 levels, including a Glider test with a minimum cut-off of 70% and 2 rounds of technical interviews.

To excel in this role, you must possess strong communication and interpersonal skills. Collaboration with both internal and external stakeholders is crucial. Your ability to think innovatively and analytically, manage workload under time pressure, and adapt to changing priorities will be key. You should also be willing to learn new technologies and methodologies.

In terms of technical proficiency, you should have expertise in the front-end React framework and back-end Python. Proficiency in front-end technologies like HTML and CSS and strong back-end development skills are essential. Experience with Git, CI/CD, and modern frameworks and technologies for web applications is required. You will also be responsible for maintaining code quality, organization, and automation. Knowledge of relational database management systems and cloud services, particularly Azure, is preferred.

Experience in the oil and gas industry, especially in trading operations, is highly desirable, as is an understanding of market data, trading systems, and financial instruments related to oil and gas. Preferred qualifications include certifications in relevant technologies, proven experience in building and supporting databases and data pipelines, familiarity with Databricks and Snowflake, an understanding of web performance optimization and security best practices, experience supporting Power BI dashboards, and the ability to work as an individual contributor with outstanding communication skills.

Posted 1 week ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Role: Data Architect (Databricks)
Location: Pune (WFO)
Total Experience: 10 to 15 years

8+ years of experience in data engineering/architecture roles.
5+ years' experience with Databricks.
Strong SQL and performance tuning skills.
Experience with data orchestration tools like Airflow, ADF, or dbt.
Proficient in Spark (PySpark or Scala) and Python.
Experience working with cloud platforms (AWS, Azure, or GCP).
Familiar with DevOps practices for data platforms (Git, CI/CD, Infrastructure-as-Code).
Solid understanding of data governance, security best practices, and data quality frameworks.
Excellent communication and documentation skills.
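For context on the Spark and SQL performance-tuning skills listed above, here is a small, illustrative PySpark snippet showing a broadcast join, one of the most common tuning levers on Databricks. The table and column names are hypothetical placeholders, not details from the posting.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("join_tuning").getOrCreate()

fact = spark.read.table("sales_fact")     # large fact table (hypothetical name)
dim = spark.read.table("product_dim")     # small dimension table (hypothetical name)

# Broadcasting the small dimension table avoids shuffling the large fact table.
enriched = fact.join(F.broadcast(dim), on="product_id", how="left")

# Partitioning the output by a common filter column helps downstream partition pruning.
(
    enriched.write
    .mode("overwrite")
    .partitionBy("sale_date")
    .saveAsTable("analytics.sales_enriched")
)

An orchestration tool such as Airflow, ADF, or dbt would typically schedule a job like this rather than run it interactively.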

Posted 1 week ago

Apply

4.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Passionate people create exceptional things
Did you know that the solutions we develop are a key part of most industries? Electronics, medical research, renewable energy, food production, infrastructure and many more. We're everywhere! Working with us means working with the latest technologies and groundbreaking, sustainable innovations. With our inclusive and caring environment, you get the support and inspiration you need to grow. Here, your ideas are embraced, and you never stop learning. Interested in being part of our team? Join us on our journey for a better tomorrow.

Your Role
Atlas Copco is a leading global industrial group with the vision of "First in Mind, First in Choice". Our culture is founded on three core values of commitment, interaction and innovation. They guide us in everything we do and reflect how we behave internally and in our relationships with our stakeholders. The Digital Technology Practice at GECIA (Global Engineering Centre, India), part of Atlas Copco, contributes significantly to the Group's vision by developing global talent, expanding our portfolio and competence in futuristic competencies, driving innovation and achieving operational excellence to drive customer success.

As a Sr. BI Engineer, your mission will be the design, development, and delivery of highly visible data-driven applications. You must take a quality-first approach and ensure the data is visualized in a clear, accurate and user-friendly manner. You ensure standards and best practices are followed and that documentation is created and maintained. Where needed, you take initiative and make recommendations to drive improvements. You will also be involved in the tracking, monitoring and performance analysis of production issues and the implementation of bugfixes and enhancements. Examples of technologies you will be working with: Azure Data Lake Storage, Azure Databricks, Azure SQL, Azure DevOps, Microsoft Power BI & Power Platforms.

You would be part of the Data Analytics Team of the Global Engineering Center (GECIA) of Atlas Copco India Private Ltd. As a Sr. BI Engineer within our Data Analytics Competence team, you will lead the management and organisation of data for our custom-built applications and bring meaningful, data-driven insights. You will be instrumental at every stage of the project lifecycle, from proof of concept, data validation, and design and development of BI visuals, to deploying the reports/solution and monitoring the performance of visualization products.

Stakeholder Management - key responsibilities:
Discuss requirements with stakeholders and clearly communicate back your understanding of them.
Set business requirements and work with the stakeholders to understand the big picture, then work towards it.
Educate users about the functionalities provided and train them in using the visualization tool efficiently.
Work with team members and other users from the organization at different levels for performance improvement and suggestions.

Reporting and Visualization:
Understand business requirements in a BI context and design data models to transform raw data into meaningful insights.
Convert business requirements into technical tasks or a work breakdown.
BI software development: create dashboards and interactive visual reports using Power BI.
Analyze data and present it through reports that aid decision-making.
Create relationships between data and develop tabular and other multidimensional data models.
Create smart documentation for BI interfaces, algorithms, parameters, models, and relations.
Design, develop, test, and deploy Power BI scripts and perform detailed analytics.
Analyze the current ETL process, and define and design new systems.
Promote the use of standard Microsoft visualizations.

To succeed, you will need

Skills & Experience
The ideal candidate will have a good blend of business and technical skills. Specific requirements for this position include:

General Skills
You have a bachelor's or master's degree in computer science, engineering, or are equivalent by experience.
A minimum of 4-5 years of professional BI development and data visualization experience is preferred.
Strong knowledge of BI development principles, time intelligence functions, dimensional modelling and data visualization is required.
You are a Power BI front-end developer who can develop performant data sets and appealing, insightful visualizations.
Knowledge of MS Azure (Data Lake, Databricks) is considered a plus.
You are experienced in data modelling and custom calculations.
You are able to create complex visuals and KPI charts that are used at enterprise level.
You have a good understanding of data warehousing and relational database concepts and technologies.
Experience and knowledge of scripting languages such as PowerShell and Python to set up and automate Power BI platform-related activities is an asset (a brief illustrative sketch follows this posting).
Excellent knowledge (oral and written) of English is required.
Experience and understanding of agile methodologies and the Scrum framework are a plus.
Experience with tools like Jira and Confluence is a plus.

Important Areas of Expertise
Storage: best-of-breed (cloud) storage solutions, both for unstructured and structured data.
Ingestion: streaming and batch ingestion from source systems both in the cloud and on-premises, private and public.
Transformation: code-first approach using Python.
Analytic engines: deploy, configure, and operationalize analytics engines and their clustered infrastructure.
Exposure/Integration: expose data based on consumption use-cases like data science, self-service analytics, business intelligence and operational APIs for process integration.
Data Platform: health, logging, monitoring, debugging, automation.

Competences
You are a team player and able to motivate people.
You are customer focused, flexible and enthusiastic.
You have good leadership skills and very strong analytical skills.
Understanding of manufacturing and engineering processes and flows is seen as added value.
You have a strong drive, and you take initiative.
You are flexible, eager to learn new things and able to adapt in a fast-changing world.
You are result oriented and quality focused, both in terms of deliveries and processes.
You work systematically and accurately with a strong focus on user experience.

In return, we offer you

What We Offer
Flexible working hours.
Flexible office and home working policies.
A modern infrastructure providing you with the latest tools and technologies at your disposal.
A challenging environment which contributes to your constant learning and professional growth.
Being part of the data competence team, which is constantly growing.
Depending on the country, enrolment in group healthcare insurance plans covering your medical needs.
A chance to become part of a global, innovative company, supporting sustainable productivity.
The opportunity to bring revolutionary ideas, foster innovation and execute qualified ideas.
A friendly culture with immense professional and personal development, education and opportunities for career growth.
Free access to LinkedIn Learning and many other internal and external trainings; Atlas Copco offers trainings on a regular basis to acquire new skills.
The opportunity to make the world a better place through our sustainable goals and by contributing to and being part of our Water for All projects.
The friendly and open culture of a Swedish company.
Very high visibility in the organization with a "no door" culture: you can always talk to anyone in the organization.

City: Pune
Recruiter: shreya.pore@atlascopco.com
Hiring Manager: Manoj Sidhapura
Last Day to Apply: 04/02/2025

Diverse by nature and inclusive by choice
Bright ideas come from all of us. The more unique perspectives we embrace, the more innovative we are. Together we build a culture where difference is valued and we share a deep sense of purpose and belonging.
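As one example of the Power BI platform automation mentioned in the posting above, here is a brief, illustrative Python sketch that queues a dataset refresh through the Power BI REST API. The workspace and dataset IDs are placeholders, and the access token would in practice be acquired from Azure AD (for example via MSAL with a service principal); none of these values come from the posting.

import requests

GROUP_ID = "<workspace-guid>"        # hypothetical placeholder
DATASET_ID = "<dataset-guid>"        # hypothetical placeholder
ACCESS_TOKEN = "<aad-access-token>"  # typically obtained via MSAL / service principal

url = (
    "https://api.powerbi.com/v1.0/myorg/"
    f"groups/{GROUP_ID}/datasets/{DATASET_ID}/refreshes"
)

# Queue a refresh; Power BI returns 202 Accepted when the request is accepted.
resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"notifyOption": "MailOnFailure"},
)
resp.raise_for_status()
print("Refresh queued, status:", resp.status_code)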

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

You are a strategic thinker passionate about driving solutions in business architecture and data management. You have found the right team.

As a Banking Book Product Owner Analyst in our Firmwide Finance Business Architecture (FFBA) team, you will spend each day defining, refining, and delivering set goals for our firm. You will partner with stakeholders across various lines of business and subject matter experts to understand products, data, source system flows, and business requirements related to Finance and Risk applications and infrastructure. As a Product Owner on the Business Architecture team, you will work closely with Line of Business stakeholders, data Subject Matter Experts, Consumers, and technology teams across Finance, Credit Risk & Treasury, and various Program Management teams. Your primary responsibilities will include prioritizing the traditional credit product book of work, developing roadmaps, and delivering on multiple projects and programs during monthly releases. Your expertise in data analysis and knowledge will be instrumental in identifying trends, optimizing processes, and driving business growth. As our organization grows, so does our reliance on insightful, data-driven decisions. You will dissect complex datasets to unearth actionable insights while possessing a strong understanding of data governance, data quality, and data management principles.

Utilize the Agile framework to write business requirements in the form of user stories to enhance data, test execution, reporting automation, and digital analytics toolsets.
Engage with development teams to translate business needs into technical specifications, ensuring acceptance criteria are met.
Drive adherence to product and Release Management standards and operating models.
Manage the release plan, including scope, milestones, sourcing requirements, test strategy, execution, and stakeholder activities.
Collaborate with lines of business to understand products, data capture methods, and strategic data sourcing into a cloud-based big data architecture.
Identify and implement solutions for business process improvements, creating supporting documentation and enhancing end-user experience.
Collaborate with Implementation leads, Release managers, Project managers, and data SMEs to align data and system flows with Finance and Risk applications.
Oversee the entire Software Development Life Cycle (SDLC) from requirements gathering to testing and deployment, ensuring seamless integration and execution.

Required qualifications, capabilities, and skills
Bachelor's degree with 3+ years of experience in Project Management or Product Ownership, with a focus on process re-engineering.
Proven experience as a Product Owner with a strong understanding of agile principles and delivering complex programs.
Strong analytical and problem-solving abilities, with the capacity to quickly assimilate business and technical knowledge.
Experience in Finance, Risk, or Operations as a Product Lead.
Familiarity with Traditional Credit Products and Liquidity and Credit reporting data.
Highly responsible, detail-oriented, and able to work with tight deadlines.
Excellent written and verbal communication skills, with the ability to articulate complex concepts to diverse audiences.
Strong organizational abilities to manage multiple work streams concurrently, maintaining sound judgment and a risk mindset.
Solid understanding of financial and regulatory reporting processes.
Energetic, adaptable, self-motivated, and effective under pressure.
Basic knowledge of cloud technologies (e.g., AWS).

Preferred qualifications, capabilities, and skills
Knowledge of JIRA, SQL, the Microsoft suite of applications, Databricks, and data visualization/analytical tools (Tableau, Alteryx, Python) is a plus.
Knowledge and experience of Traditional Credit Products (Loans, Deposits, Cash, etc.) and Trading Products (Derivatives and Securities) is a plus.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: Senior Azure Data Engineer
Experience Required: 8+ Years
Work Location: Hybrid (Noida)
Employment Type: Full-Time

Job Summary:
We are looking for a highly experienced Azure Data Engineer to join our growing data engineering team. The ideal candidate will possess deep expertise in building and managing scalable, secure, and high-performance data platforms on Microsoft Azure. This role involves designing and optimizing data pipelines, implementing data governance, and enabling analytics and business intelligence solutions across the organization.

Key Responsibilities:
Design, build, and maintain robust data pipelines and ETL/ELT solutions using Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and related services.
Architect scalable data lake and data warehouse solutions on Azure Data Lake Storage (ADLS) and Azure SQL / Synapse.
Optimize data workflows for performance, reliability, and cost-efficiency.
Implement data governance practices, ensuring data quality, lineage, and security across all layers of the data stack.
Work closely with data scientists, analysts, and business stakeholders to understand and translate data requirements into scalable solutions.
Ensure end-to-end monitoring, logging, and performance tuning of data pipelines.
Contribute to automation efforts using CI/CD pipelines with Azure DevOps or equivalent tools.
Create and maintain detailed technical documentation for architecture, pipelines, and governance.

Required Skills & Experience:
Minimum 8 years of hands-on experience in data engineering roles.
Proven expertise with Azure data services, including: Azure Data Factory (ADF), Azure Databricks / Apache Spark, Azure Synapse Analytics, Azure Data Lake Storage (Gen2), and Azure SQL Database / Dedicated SQL Pool.
Strong SQL skills, including data modeling and query optimization.
Proficiency in Python, PySpark, or Scala for data engineering.
Solid understanding of CI/CD practices and experience using Azure DevOps.
Good understanding of data governance, data privacy, and security principles in cloud environments.
Experience with monitoring and logging tools for data pipeline health and diagnostics.

Nice to Have:
Familiarity with Infrastructure-as-Code tools like Terraform or ARM templates.
Exposure to Power BI or other business intelligence platforms.
Understanding of event-driven architectures using Azure Event Hubs, Kafka, or Service Bus.
Experience working with machine learning pipelines or supporting data science teams.

Educational Qualifications:
Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
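To make the pipeline responsibilities above concrete, here is a minimal, illustrative Delta Lake upsert pattern of the kind commonly used in Azure Databricks ETL. The ADLS Gen2 paths and the key column are assumptions for illustration, not details from the posting.

from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical ADLS Gen2 locations.
source_path = "abfss://raw@examplelake.dfs.core.windows.net/customers/"
target_path = "abfss://curated@examplelake.dfs.core.windows.net/customers_delta/"

updates = spark.read.format("parquet").load(source_path)

if DeltaTable.isDeltaTable(spark, target_path):
    # Incremental load: merge new and changed rows into the existing Delta table.
    target = DeltaTable.forPath(spark, target_path)
    (
        target.alias("t")
        .merge(updates.alias("s"), "t.customer_id = s.customer_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )
else:
    # First load: create the Delta table.
    updates.write.format("delta").save(target_path)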

Posted 1 week ago

Apply

0 years

0 Lacs

India

On-site

Current scope and span of work:

Summary:
The need is for a data engineer to handle day-to-day activities involving data ingestion from multiple source locations, help identify data sources, troubleshoot issues, and engage with a third-party vendor to meet stakeholders' needs.

Required Skills:
Python
Processing of large quantities of text documents
Extraction of text from Office and PDF documents
Input JSON to an API, output JSON to an API
NiFi (or similar technology compatible with current EMIT practices)
Basic understanding of AI/ML concepts
Database/search engine/SOLR skills
SQL – build queries to analyze, create and update databases
Understands the basics of hybrid search
Experience working with terabytes (TB) of data
Basic OpenML/Python/Azure knowledge
Scripting knowledge/experience in an Azure environment to automate
Cloud systems experience related to search and databases

Platforms:
Databricks
Snowflake
ESRI ArcGIS / SDE
New GenAI app being developed

Scope of work:
1. Ingest TB of data from multiple sources identified by the Ingestion Lead
2. Optimize data pipelines to improve data processing, speed, and data availability
4. Make data available for end users from several hundred LAN and SharePoint areas
5. Monitor data pipelines daily and fix issues related to scripts, platforms, and ingestion
6. Work closely with the Ingestion Lead & Vendor on issues related to data ingestion

Technical Skills demonstrated:
1. SOLR - backend database
2. NiFi - data movement
3. PySpark - data processing
4. Hive & Oozie - for jobs monitoring
5. Querying - SQL, HQL and SOLR querying
6. SQL
7. Python

Behavioral Skills demonstrated:
1. Excellent communication skills
2. Ability to receive direction from a Lead and implement
3. Prior experience working in an Agile setup, preferred
4. Experience troubleshooting technical issues and quality control checking of work
5. Experience working with a globally distributed team in different time zones
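As an illustration of the document-extraction and API-ingestion skills listed above, here is a small Python sketch that pulls text out of a PDF with pypdf and posts a JSON payload to an ingestion API. The endpoint URL, payload shape, and file path are hypothetical assumptions, not details from the engagement.

import requests
from pypdf import PdfReader

INGEST_URL = "https://example-ingest-api.internal/documents"  # hypothetical endpoint

def extract_pdf_text(path: str) -> str:
    # Concatenate the extractable text from every page of the PDF.
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

def ingest(path: str) -> None:
    payload = {"source_path": path, "content": extract_pdf_text(path)}
    resp = requests.post(INGEST_URL, json=payload, timeout=60)
    resp.raise_for_status()

ingest("/data/incoming/report_q1.pdf")  # hypothetical document path

At terabyte scale this per-document logic would normally run inside a PySpark or NiFi flow rather than as a single-threaded script.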

Posted 1 week ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Roles and responsibilities:
Design and implement data pipelines for supply chain data (e.g., inventory, shipping, procurement).
Develop and maintain data warehouses and data lakes.
Ensure data quality, integrity, and security.
Collaborate with supply chain stakeholders to identify analytics requirements.
Develop data models and algorithms for predictive analytics (e.g., demand forecasting, supply chain optimization).
Implement data visualization tools (e.g., Tableau, Power BI).
Integrate data from various sources (e.g., ERP, PLMs, other data sources).
Develop APIs for data exchange.
Work with cross-functional teams (e.g., supply chain, logistics, IT).
Communicate technical concepts to non-technical stakeholders.
Experience with machine learning algorithms & concepts.
Knowledge of data governance and compliance.
Strong problem-solving and analytical skills.
Excellent communication and collaboration skills.
Ability to work in a fast-paced environment.

Technical Skills:
Bachelor's degree in Computer Science, Information Technology, or related field.
7+ years of experience in data engineering.
Proficiency in:
Programming languages - Python, Java, SQL, Spark SQL.
Data technologies - Hadoop, PySpark, NoSQL databases.
Data visualization tools - QlikSense, Tableau, Power BI.
Cloud platforms - Azure Data Factory, Azure Databricks, AWS.
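As a toy illustration of the demand-forecasting analytics mentioned above (not the employer's actual method), here is a pandas sketch that produces a naive trailing-average forecast from historical demand data. The column names and figures are invented for the example.

import pandas as pd

# Hypothetical historical demand: one row per (sku, month).
history = pd.DataFrame({
    "sku": ["A"] * 6,
    "month": pd.period_range("2024-01", periods=6, freq="M"),
    "units": [120, 135, 128, 150, 160, 155],
})

# Naive forecast: 3-month trailing average per SKU.
history = history.sort_values(["sku", "month"])
history["forecast_next_month"] = (
    history.groupby("sku")["units"]
           .transform(lambda s: s.rolling(window=3).mean())
)

print(history.tail(1)[["sku", "forecast_next_month"]])

Real demand forecasting would use richer features and proper models, but the pipeline shape (aggregate, fit, score, publish) stays the same.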

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

Join us as an AWS Developer at Barclays, where you will play a crucial role in supporting the successful delivery of Location Strategy projects. Your responsibilities will include ensuring that projects are completed within the planned budget, meeting quality standards, and adhering to governance protocols. You will be at the forefront of transforming our digital landscape, driving innovation, and striving for excellence to enhance our digital offerings and provide customers with exceptional experiences.

As an AWS Developer, your key experience should encompass:
- Proficiency in AWS cloud services like S3, Glue, Athena, Lake Formation, and CloudFormation.
- Advanced skills in Python for data engineering and automation.
- Familiarity with ETL frameworks, data transformation, and data quality tools.

Additionally, highly valued skills may involve:
- AWS Data Engineer certification.
- Previous experience in the banking or financial services sector.
- Knowledge of IAM and Permissions management in AWS cloud.
- Experience with Databricks, Snowflake, Starburst, and Iceberg.

Your performance will be evaluated based on essential skills crucial for success in this role, including risk and controls management, change and transformation capabilities, strategic thinking, and proficiency in digital and technology. You will also be assessed on job-specific technical skills relevant to the position.

This position is located in Pune and aims to:
- Develop and maintain systems for collecting, storing, processing, and analyzing data to ensure accuracy, accessibility, and security.
- Build and manage data architecture pipelines for transferring and processing data effectively.
- Design data warehouses and data lakes that handle data volumes, velocity, and security requirements.
- Develop algorithms for processing and analyzing data of varying complexity and volumes.
- Collaborate with data scientists to construct and deploy machine learning models.

As an Assistant Vice President, you are expected to provide guidance, influence decision-making processes, contribute to policy development, and ensure operational efficiency. You will lead a team in executing complex tasks, set objectives, coach employees, evaluate performance, and determine reward outcomes. If you have leadership responsibilities, you must exhibit leadership behaviors such as listening, inspiring, aligning, and developing others.

For individual contributors, the role involves leading collaborative assignments, guiding team members, identifying the need for specialized input, proposing new directions for projects, and consulting on complex issues. You will also be responsible for risk management, policy development, and ensuring compliance with governance standards.

All colleagues are required to embody the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, as well as demonstrate the Barclays Mindset of Empower, Challenge, and Drive, guiding our behavior and decisions.
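To ground the AWS skills listed above, here is a brief, illustrative boto3 snippet that runs an Athena query over data catalogued in Glue and stored in S3. The region, database, table, and results bucket are placeholder assumptions, not details from the posting.

import time
import boto3

athena = boto3.client("athena", region_name="eu-west-1")  # region is an assumption

query = "SELECT trade_date, COUNT(*) AS n FROM trades GROUP BY trade_date"  # hypothetical table

run = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "analytics_db"},                      # hypothetical Glue database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},  # hypothetical bucket
)
query_id = run["QueryExecutionId"]

# Poll until the query finishes (simplified; production code would add timeouts and backoff).
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(f"Fetched {len(rows) - 1} data rows")  # the first row is the header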

Posted 1 week ago

Apply
Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies