
3827 Databricks Jobs - Page 8

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

0 years

1 - 2 Lacs

Gurgaon

On-site

Source: Glassdoor

Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary: Senior Data Engineer

Job Description: Product Data & Analytics Team – Senior Data Engineer

Overview
The Product Data & Analytics team builds internal analytic partnerships, strengthening focus on the health of the business, portfolio and revenue optimization opportunities, initiative tracking, new product development and go-to-market strategies. Are you excited about data assets and the value they bring to an organization? Are you an evangelist for data-driven decision making? Are you motivated to be part of a global analytics team that builds large-scale analytical capabilities supporting end users across six continents? Do you want to be the go-to resource for data analytics in the company? The ideal candidate has a knack for seeing solutions in sprawling data sets and the business mindset to convert insights into strategic opportunities for our company.

Role & Responsibilities
- Work closely with global and regional teams to architect, develop, and maintain data engineering, advanced reporting and data visualization capabilities on large volumes of data to support analytics and reporting needs across products, markets and services.
- Obtain data from multiple sources; collate, analyze, and triangulate information to develop reliable fact bases. Effectively use tools to manipulate large-scale databases, synthesizing data insights.
- Execute cross-functional projects using advanced modeling and analysis techniques to discover insights that will guide strategic decisions and uncover optimization opportunities.
- Build, develop and maintain data models, reporting systems, dashboards and performance metrics that support key business decisions.
- Extract intellectual capital from engagement work and actively share tools, methods and best practices across projects.
- Provide first-level insights/conclusions/assessments and present findings via Tableau/Power BI dashboards, Excel and PowerPoint.
- Apply quality control, data validation, and cleansing processes to new and existing data sources.
- Lead, mentor and guide more junior team members.
- Communicate results and business impacts of insight initiatives to stakeholders in leadership, technology, sales, marketing and product teams.

Bring your Passion and Expertise.

All About You
- Experience in data management, data mining, data analytics, data reporting, data product development and quantitative analysis.
- Financial institution or payments experience a plus.
- Experience presenting data findings in a readable, insight-driven format; experience building support decks.
- Advanced SQL skills and the ability to write optimized queries for large (big data) data sets.
- Experience on platforms/environments such as Cloudera Hadoop, the big data technology stack, SQL Server and the Microsoft BI stack.
- Experience with data visualization tools such as Looker, Tableau and Power BI.
- Experience with Python, R and Databricks a plus.
- Experience with SQL Server Integration Services (SSIS), SQL Server Analysis Services (SSAS) and SQL Server Reporting Services (SSRS) is an added advantage.
- Excellent problem-solving, quantitative and analytical skills; in-depth technical knowledge, drive and ability to learn new technologies.
- Strong attention to detail and quality; team player with excellent communication skills.
- Must be able to interact with management and internal stakeholders and collect requirements; must be able to perform in a team, use judgment and operate under ambiguity.

Education
Bachelor’s or Master’s degree in Computer Science, Information Technology, Engineering, Mathematics or Statistics.

Additional Competencies
- Excellent English, quantitative, technical, and communication (oral/written) skills.
- Analytical/problem solving; strong attention to detail and quality; creativity/innovation.
- Self-motivated, operates with a sense of urgency.
- Project management/risk mitigation; able to prioritize and perform multiple tasks simultaneously.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must: abide by Mastercard’s security policies and practices; ensure the confidentiality and integrity of the information being accessed; report any suspected information security violation or breach; and complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.
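The advanced SQL and big-data items under "All About You" lend themselves to a short illustration. The following is a hedged sketch, not part of the posting: it assumes a hypothetical date-partitioned transactions table and shows the kind of optimized PySpark aggregation (filter before aggregating, then publish a compact summary for BI tools) that roles like this typically involve.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("portfolio_revenue_summary").getOrCreate()

# Assumed: a large, date-partitioned fact table; table and column names are hypothetical.
txns = spark.table("analytics.transactions")

summary = (
    txns
    .filter(F.col("txn_date") >= "2024-01-01")  # prune partitions before aggregating
    .groupBy("market", F.date_trunc("month", F.col("txn_date")).alias("month"))
    .agg(
        F.sum("net_revenue").alias("revenue"),
        F.countDistinct("account_id").alias("active_accounts"),
    )
)

# A compact summary table that Tableau/Power BI dashboards can read directly.
summary.write.mode("overwrite").saveAsTable("analytics.monthly_revenue_summary")
```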

Posted 2 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Company Description NielsenIQ is a consumer intelligence company that delivers the Full View™, the world’s most complete and clear understanding of consumer buying behavior that reveals new pathways to growth. Since 1923, NIQ has moved measurement forward for industries and economies across the globe. We are putting the brightest and most dedicated minds together to accelerate progress. Our diversity brings out the best in each other so we can leave a lasting legacy on the work that we do and the people that we do it with. NielsenIQ offers a range of products and services that leverage Machine Learning and Artificial Intelligence to provide insights into consumer behavior and market trends. This position opens the opportunity to apply the latest state of the art in AI/ML and data science to global and key strategic projects Job Description We are looking for a Research Scientist with a data-centric mindset to join our applied research and innovation team. The ideal candidate will have a strong background in machine learning, deep learning, operationalization of AI/ML and process automation. You will be responsible for analyzing data, researching the most appropriate techniques, and the development, testing, support and delivery of proof of concepts to resolve real-world and large-scale challenging problems. Job Responsibilities Develop and apply machine learning innovations with minimal technical supervision. Understand the requirements from stakeholders and be able to communicate results and conclusions in a way that is accurate, clear and winsome. Perform feasibility studies and analyse data to determine the most appropriate solution. Work on many different data challenges, always ensuring a combination of simplicity, scalability, reproducibility and maintainability within the ML solutions and source code. Both data and software must be developed and maintained with high-quality standards and minimal defects. Collaborate with other technical fellows on the integration and deployment of ML solutions. To work as a member of a team, encouraging team building, motivation and cultivating effective team relations. Qualifications Essential Requirements Bachelor's degree in Computer Science or an equivalent numerate discipline Demonstrated senior experience in Machine Learning, Deep Learning & other AI fields Experience working with large datasets, production-grade code & operationalization of ML solutions EDA analysis & practical hands-on experience with datasets, ML models (Pytorch or Tensorflow) & evaluations Able to understand scientific papers & develop the idea into executable code Analytical mindset, problem solving & logical thinking capabilities Proactive attitude, constructive, intellectual curiosity & persistence to find answers to questions A high level of interpersonal & communication skills in English & strong ability to meet deadlines Python, Pytorch, Git, pandas, dask, polars, sklearn, huggingface, docker, databricks Desired Skills Masters degree &/or specialization courses in AI/ML. 
PhD in science is an added value Experience in MLOPs (MLFlow, Prefect) & deployment of AI/ML solutions to the cloud (Azure preferred) Understanding & practice of LLMs & Generative AI (prompt engineering, RAG) Experience with Robotic Process Automation, Time Series Forecasting & Predictive modeling A practical grasp of databases (SQL, ElasticSearch, Pinecone, Faiss) Previous experience in retail, consumer, ecommerce, business, FMCG products (NielsenIQ portfolio)

Additional Information With @NielsenIQ, we’re now an even more diverse team of 40,000 people – each with their own stories. Our increasingly diverse workforce empowers us to better reflect the diversity of the markets we measure.

Our Benefits Flexible working environment Volunteer time off LinkedIn Learning Employee-Assistance-Program (EAP)

About NIQ NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion

Posted 2 days ago

Apply

0 years

0 Lacs

India

Remote

Source: LinkedIn

Become a part of the InfiniteDATA team to build, optimize, and maintain big-scale data warehouses for the best-known brands in the Polish and European markets. Headquartered in Warsaw, Poland, with points of contact and support throughout Europe, Asia, and America, InfiniteDATA serves some of the world's largest Enterprises in Banking, Insurance, Fintech, Telco, Manufacturing, Retail, Energy & Utilities, and Pharma sectors. We seek a skilled Data Architect to design, develop, and implement data pipelines using Databricks and PySpark. In this role, you will work on building large-scale, complex data sets that meet business requirements while ensuring high data quality and consistency. Key Responsibilities: Design, build, and maintain robust data pipelines to acquire, cleanse, transform, and publish data to a Databricks backend. Assemble and manage large datasets tailored to both functional and non-functional business needs. Collaborate with data asset managers and architects to ensure data solutions align with architectural standards and are fit for use. Apply coding best practices and standards to ensure the delivery of efficient and reusable components and services. Provide Level 3 (L3) support for developed solutions, including troubleshooting and bug fixing. Qualifications: Strong proficiency in PySpark for data processing and transformation. Extensive hands-on experience with Databricks , including notebook development, cluster management, and job scheduling. Experience with Microsoft Azure is highly desirable; knowledge of Google Cloud Platform (GCP) is a plus. Solid understanding of data modeling , data warehousing , and dimensional modeling techniques. Knowledge of data integration patterns , data lakes , and data quality best practices. Proficient in SQL for querying, data manipulation, and performance optimization. Experience designing and optimizing ETL/data pipeline workflows using PySpark , Databricks , and Airflow . Familiarity with orchestration tools such as Airflow and Databricks Workflows. Exposure to handling and processing media data is a strong advantage. Perks? Here we go! We are happy to share our know-how and provide certification. Grounded relationship with the client and good working atmosphere. Real career development opportunities. 100% remote work or hybrid model (you decide). Medical care (PZU Zdrowie or Luxmed). Sport card (Multisport). Training and certification budget. Employee referral program. Comfortable and quiet office in the city center (Rondo Daszyńskiego). The recruitment process will look like: Upon receipt of resumes, selected individuals will be contacted by our HR department. After a short conversation about your experience and expectations, the HR department will direct you to a technical meeting with one of our Managers or Architects. After the technical meeting, the Recruiter will get back to you with feedback, and together you will determine the next steps. No need to wait, leave us your resume at the link. We would love to take a look at it and get in touch with you 👇🤳
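As a rough illustration of the acquire, cleanse, and publish pipeline this role describes, here is a minimal PySpark sketch. It is not from the posting; paths and column names are hypothetical, and it assumes a Databricks/Delta runtime.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Acquire: raw landing-zone files (path is a placeholder).
raw = spark.read.json("/mnt/raw/orders/")

# Cleanse and transform.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

# Publish to the Databricks backend as a partitioned Delta table.
(clean.write
      .format("delta")
      .mode("append")
      .partitionBy("order_date")
      .save("/mnt/curated/orders/"))
```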

Posted 2 days ago

Apply

5.0 - 8.0 years

13 - 16 Lacs

Gāndhīnagar

On-site

Source: Glassdoor

Company Name: PIB Techco India Pvt Ltd
Location: Gandhinagar, Gujarat
Job Title: Sr. DevOps Engineer

Requirements (must have): We are seeking a highly skilled DevOps Engineer with 5–8 years of professional hands-on experience, particularly in managing Azure DevOps CI/CD pipelines and automating deployments across cloud-based data solutions. The ideal candidate should be capable of handling end-to-end deployment processes for Azure DevOps projects involving Azure Data Factory (ADF), Databricks, SQL, Python, Azure Data Lake Storage (ADLS) and Power BI repositories.

Key Responsibilities:
- Design, implement, and manage automated deployment pipelines for ADF, Databricks notebooks, SQL scripts, Python-based data processing and Power BI projects.
- Manage build and release pipelines for various environments including Dev, UAT, and Production.
- Enable environment consistency across Dev, UAT, and Production with automated application deployments using Azure CI/CD pipelines, PowerShell, and CLI scripts.
- Proficiency in Python, Bash, or PowerShell.
- Collaborate with DataOps and data engineering teams to enable smooth integration and deployment across Dev, UAT, and Production environments.
- Monitor pipeline health and performance, troubleshoot deployment failures, and ensure version control and rollback mechanisms are in place.
- Support end-to-end project delivery including requirement gathering, pipeline design, development, testing automation, deployment, and post-deployment support.
- Implement robust branching strategies, Git workflows, and automated testing frameworks.
- Maintain version control practices using Azure DevOps Repos.
- Monitor, log, and troubleshoot deployment issues using Azure Monitor, Log Analytics, or cloud-native tools.

Nice to have:
- Familiarity with Azure Data Factory (ADF), Databricks, SQL, Python, Azure Data Lake Storage (ADLS) and Power BI repositories; Docker, Kubernetes, or managed services like AKS/EKS.
- Experience working with Agile methodologies, Test-Driven Development (TDD), and implementing CI/CD pipelines using tools like Azure DevOps Pipelines or AWS CodePipeline.
- Exposure to data modelling tools like Erwin or ER/Studio to support DevOps in metadata and schema management.
- Exposure to leading reporting and visualization tools such as Power BI, particularly in automating report deployment and integration workflows.
- Experience with API integrations and supporting infrastructure-as-code for connecting various systems and services.

Job Types: Full-time, Permanent
Pay: ₹1,300,000.00 - ₹1,600,000.00 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift, Monday to Friday
Work Location: In person
Application Deadline: 01/07/2025
Expected Start Date: 01/07/2025
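To make the deployment-automation part of this role concrete, below is a hedged Python sketch of one release step: pushing a notebook into a Databricks workspace over its REST API, the kind of task an Azure DevOps pipeline stage might script. The host, token, notebook file, and target path are placeholders, and the call assumes the Databricks Workspace API 2.0 import endpoint.

```python
import base64
import os

import requests

host = os.environ["DATABRICKS_HOST"]    # placeholder, e.g. the workspace URL
token = os.environ["DATABRICKS_TOKEN"]  # injected as a secret pipeline variable

# Read the notebook source produced by the build stage (path is a placeholder).
with open("notebooks/ingest_orders.py", "rb") as f:
    payload = {
        "path": "/Shared/prod/ingest_orders",
        "format": "SOURCE",
        "language": "PYTHON",
        "overwrite": True,
        "content": base64.b64encode(f.read()).decode("utf-8"),
    }

resp = requests.post(
    f"{host}/api/2.0/workspace/import",            # Databricks Workspace API 2.0
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print("Deployed notebook to", payload["path"])
```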

Posted 2 days ago

Apply

10.0 years

0 Lacs

India

Remote

Source: LinkedIn

About Lingaro: Lingaro Group is the end-to-end data services partner to global brands and enterprises. We lead our clients through their data journey, from strategy through development to operations and adoption, helping them to realize the full value of their data. Since 2008, Lingaro has been recognized by clients and global research and advisory firms for innovation, technology excellence, and the consistent delivery of highest-quality data services. Our commitment to data excellence has created an environment that attracts the brightest global data talent to our team.

Duties:
- Designing and implementing data processing systems using distributed frameworks like Hadoop, Spark, Snowflake, Airflow, or other similar technologies. This involves writing efficient and scalable code to process, transform, and clean large volumes of structured and unstructured data.
- Building data pipelines to ingest data from various sources such as databases, APIs, or streaming platforms. Integrating and transforming data to ensure its compatibility with the target data model or format.
- Designing and optimizing data storage architectures, including data lakes, data warehouses, or distributed file systems. Implementing techniques like partitioning, compression, or indexing to optimize data storage and retrieval.
- Identifying and resolving bottlenecks, tuning queries, and implementing caching strategies to enhance data retrieval speed and overall system efficiency.
- Designing and implementing data models that support efficient data storage, retrieval, and analysis. Collaborating with data scientists and analysts to understand their requirements and provide them with well-structured and optimized data for analysis and modeling purposes.
- Utilizing frameworks like Hadoop or Spark to perform distributed computing tasks, such as parallel processing, distributed data processing, or machine learning algorithms.
- Implementing security measures to protect sensitive data and ensuring compliance with data privacy regulations. Establishing data governance practices to maintain data integrity, quality, and consistency.
- Identifying and resolving issues related to data processing, storage, or infrastructure. Monitoring system performance, identifying anomalies, and conducting root cause analysis to ensure smooth and uninterrupted data operations.
- Collaborating with cross-functional teams including data scientists, analysts, and business stakeholders to understand their requirements and provide technical solutions. Communicating complex technical concepts to non-technical stakeholders in a clear and concise manner.
- Independence and responsibility for delivering a solution; ability to work under Agile and Scrum development methodologies.
- Staying updated with emerging technologies, tools, and techniques in the field of big data engineering. Exploring and recommending new technologies to enhance data processing, storage, and analysis capabilities.
- Train and mentor junior data engineers, providing guidance and knowledge transfer.

Requirements:
- A bachelor's or master's degree in Computer Science, Information Systems, or a related field is typically required. Additional certifications in cloud are advantageous.
- Minimum of 10+ years of experience in data engineering or a related field.
- Strong technical skills in data engineering, including proficiency in programming languages such as Python, SQL, PySpark.
Familiarity with Azure cloud platform viz. Azure Databricks, Data Factory, Data Lake etc., and experience in implementing data solutions in a cloud environment. Expertise in working with various data tools and technologies, such as ETL frameworks, data pipelines, and data warehousing solutions. In-depth knowledge of data management principles and best practices, including data governance, data quality, and data integration. Excellent problem-solving and analytical skills, with the ability to identify and resolve complex data engineering issues. Knowledge of data security and privacy regulations, and the ability to ensure compliance within data engineering projects. Excellent communication and interpersonal skills, with the ability to effectively collaborate with cross-functional teams, stakeholders, and senior management. Continuous learning mindset, staying updated with the latest advancements and trends in data engineering and related technologies. Consulting exposure, with external customer focus mindset is preferred. Why join us: Stable employment. On the market since 2008, 1300+ talents currently on board in 7 global sites. 100% remote. Flexibility regarding working hours. Full-time position Comprehensive online onboarding program with a “Buddy” from day 1. Cooperation with top-tier engineers and experts. Unlimited access to the Udemy learning platform from day 1. Certificate training programs. Lingarians earn 500+ technology certificates yearly. Upskilling support. Capability development programs, Competency Centers, knowledge sharing sessions, community webinars, 110+ training opportunities yearly. Grow as we grow as a company. 76% of our managers are internal promotions. A diverse, inclusive, and values-driven community. Autonomy to choose the way you work. We trust your ideas. Create our community together. Refer your friends to receive bonuses. Activities to support your well-being and health. Plenty of opportunities to donate to charities and support the environment.

Posted 2 days ago

Apply

5.0 years

0 Lacs

India

On-site

Source: LinkedIn

Azure Data Engineer – Databricks & Data Fabric Experience: 5+ years in Data Engineering, 3+ years on Azure stack Job Overview: We are seeking an experienced Azure Data Engineer with strong hands-on expertise in Azure Databricks and Azure Data Fabric to support data transformation, integration, and analytics initiatives in a scalable cloud environment. The ideal candidate will be responsible for building modern data pipelines, optimizing data flows, and enabling robust data governance and observability using Microsoft’s data platform services. Key Responsibilities: Design, build, and manage ETL/ELT pipelines using Azure Data Factory and Azure Databricks Implement and manage data flows across Azure Data Fabric , ensuring consistency and discoverability Work with structured and unstructured data across on-prem and cloud sources Collaborate with data architects, analysts, and business users to understand requirements and deliver high-quality data solutions Optimize and scale data pipelines using PySpark and Delta Lake Monitor, troubleshoot, and improve data pipeline performance using Azure Monitor and Log Analytics Ensure data security, governance, and compliance practices across all data flows Contribute to architectural decisions and cloud modernization strategy Required Skills & Experience: Strong experience with Azure Databricks , PySpark , and Delta Lake Hands-on with Azure Data Factory (ADF) and Dataflows Solid understanding of Azure Data Fabric concepts and implementation Experience working with Azure Synapse Analytics , ADLS Gen2 , and Azure SQL Proficiency in SQL and Python Knowledge of DevOps practices and CI/CD using Azure DevOps or GitHub Familiarity with data governance , metadata management , and cataloging Good understanding of data modeling , data security , and compliance (GDPR, HIPAA) Nice-to-Have: Experience with Power BI , Azure Purview , or Microsoft Fabric Exposure to Snowflake or other modern cloud data platforms Azure certification: DP-203 – Azure Data Engineer Associate or equivalent
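One concrete example of the Delta Lake pipeline-optimization work mentioned above is an incremental upsert instead of a full reload. The sketch below is illustrative only: paths and the key column are hypothetical, and it assumes a Databricks/Delta Lake runtime.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# New or changed records from the latest landing batch (path is a placeholder).
updates = spark.read.format("parquet").load("/mnt/landing/customers/2025-06-25/")

# Existing curated Delta table.
target = DeltaTable.forPath(spark, "/mnt/curated/customers/")

# Merge: update matched rows, insert new ones, leave everything else untouched.
(target.alias("t")
       .merge(updates.alias("s"), "t.customer_id = s.customer_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```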

Posted 2 days ago

Apply

7.0 years

0 Lacs

India

Remote

Source: LinkedIn

Job Title: Azure Data Engineer Level: Lead Consultant Location: Remote About Lingaro: Lingaro Group is the end-to-end data services partner to global brands and enterprises. We lead our clients through their data journey, from strategy through development to operations and adoption, helping them to realize the full value of their data. Since 2008, Lingaro has been recognized by clients and global research and advisory firms for innovation, technology excellence, and the consistent delivery of highest-quality data services. Our commitment to data excellence has created an environment that attracts the brightest global data talent to our team. ​ Job Responsibilities: Provide leadership and guidance to the data engineering team, including mentoring, coaching, and fostering a collaborative work environment. Set clear goals, assign tasks, and manage resources to ensure successful project delivery. Work closely with developers to support them and improve data engineering processes. Support team members with troubleshooting and resolving complex technical issues and challenges. Provide technical expertise and direction in data engineering, guiding the team in selecting appropriate tools, technologies, and methodologies. Stay updated with the latest advancements in data engineering and ensure the team follows best practices and industry standards. Collaborate with stakeholders to understand project requirements, define scope, and create project plans. Support project managers to ensure that projects are executed effectively, meeting timelines, budgets, and quality standards. Monitor progress, identify risks, and implement mitigation strategies. Act as a trusted advisor for the customer. Oversee the design and architecture of data solutions, collaborating with data architects and other stakeholders. Ensure data solutions are scalable, efficient, and aligned with business requirements. Provide guidance in areas such as data modeling, database design, and data integration. Align coding standards, conduct code reviews to ensure proper code quality level. Identify and introduce quality assurance processes for data pipelines and workflows. Optimize data processing and storage for performance, efficiency and cost savings. Evaluate and implement new technologies to improve data engineering processes on various aspects (CICD, Quality Assurance, Coding standards). Act as main point of contact to other teams/contributors engaged in the project. Maintain technical documentation of the project, control validity and perform regular reviews of it. Ensure compliance with security standards and regulations. Requirements: A bachelor's or master's degree in Computer Science, Information Systems, or a related field is typically required. Additional certifications in cloud are advantageous. Minimum of 7+ years of experience in data engineering or a related field. Strong technical skills in data engineering, including proficiency in programming languages such as Python, SQL, Pyspark. Experience in profiling tool such as Pandas. Familiarity with Azure cloud platform viz. Azure Databricks, Data Factory, Data Lake, Azure SQL etc., and experience in implementing data solutions in a cloud environment. Expertise in working with various data tools and technologies, such as ETL frameworks, data pipelines, and data warehousing solutions. Proven experience in leading and managing a team of data engineers, providing guidance, mentorship, and technical support. 
In-depth knowledge of data management principles and best practices, including data governance, data quality, and data integration. Strong project management skills, with the ability to prioritize tasks, manage timelines, and deliver high-quality results within designated deadlines. Excellent problem-solving and analytical skills, with the ability to identify and resolve complex data engineering issues. Knowledge of data security and privacy regulations, and the ability to ensure compliance within data engineering projects. Excellent communication and interpersonal skills, with the ability to effectively collaborate with cross-functional teams, stakeholders, and senior management. Continuous learning mindset, staying updated with the latest advancements and trends in data engineering and related technologies. Why join us: Stable employment. On the market since 2008, 1300+ talents currently on board in 7 global sites. 100% remote. Flexibility regarding working hours. Full-time position Comprehensive online onboarding program with a “Buddy” from day 1. Cooperation with top-tier engineers and experts. Unlimited access to the Udemy learning platform from day 1. Certificate training programs. Lingarians earn 500+ technology certificates yearly. Upskilling support. Capability development programs, Competency Centers, knowledge sharing sessions, community webinars, 110+ training opportunities yearly. Grow as we grow as a company. 76% of our managers are internal promotions. A diverse, inclusive, and values-driven community. Autonomy to choose the way you work. We trust your ideas. Create our community together. Refer your friends to receive bonuses. Activities to support your well-being and health. Plenty of opportunities to donate to charities and support the environment.

Posted 2 days ago

Apply

4.0 years

15 - 20 Lacs

India

On-site

Source: Glassdoor

Job Summary: We are looking for an experienced Backend Developer to join our dynamic team and contribute to the development of a cutting-edge application that combines AI-driven insights , multi-agent mechanisms , and real-time data analysis . The application leverages LangGraph for a multi-agent system that interacts with AWS Bedrock to provide real-time responses to user queries from a corpus of PDFs, Excel documents, MongoDB, and Databricks tables . The system also generates actionable recommendations and automated reports to ensure compliance with the ESG framework . The ideal candidate will have hands-on experience with AI/ML models , cloud technologies (primarily AWS), and an understanding of multi-tenancy and compliance requirements . You should be passionate about building robust, scalable backend systems that can process and analyze large datasets efficiently. Key Responsibilities: LangGraph Integration : Develop and maintain the multi-agent mechanism using LangGraph for efficient interaction between agents and integration with AWS Bedrock to process and query datasets. AI and Query Handling : Implement AI models and natural language processing (NLP) systems that quickly and accurately respond to user queries within 60 seconds using data from PDFs, Excel documents, MongoDB , and Databricks tables . Data Ingestion & Storage : Work with the backend data pipeline , ensuring seamless data extraction, transformation, and loading (ETL) into MongoDB and Databricks for dynamic analysis. Risk Analysis & Recommendations : Design and implement risk analysis algorithms that evaluate ESG-related data and generate recommendations on actions to improve the score based on an ESG compliance framework. Real-time Data Processing : Ensure that the system processes queries in real-time and can handle large-scale datasets efficiently. Automated Report Generation : Build automated reporting mechanisms that compile analysis and recommendations into comprehensive reports for the users. API Design and Documentation : Design RESTful APIs for the application, ensuring smooth integration between the backend, frontend, and other services (e.g., Databricks, MongoDB). Scalability and Performance : Focus on optimizing backend processes for speed , scalability , and fault tolerance , especially for queries against large datasets. Compliance : Ensure backend code and architecture meet ISO 27001, SOC2, GDPR , and other relevant compliance standards for security and data privacy. Required Skills & Qualifications: Programming Languages : Expertise in Python (for backend development) and familiarity with JavaScript (for working in full-stack environments). LangGraph and Multi-Agent Systems : Experience with LangGraph or related frameworks for building multi-agent mechanisms . AI/ML : Knowledge of machine learning models , NLP , and cloud-based AI services (especially AWS Bedrock ). Databases : Experience with MongoDB for document storage and Databricks for handling large-scale analytics. Cloud Technologies : Strong working knowledge of AWS , including services like Lambda , S3 , IAM , API Gateway , ECS , and Bedrock . API Development : Strong experience with designing, developing, and documenting RESTful APIs (using FastAPI , Flask , or similar). Real-Time Data Processing : Experience in building applications that handle real-time data processing at scale. Risk Analysis & ESG : Familiarity with ESG frameworks , risk management , and recommendations for improving compliance scores. 
Automated Reporting : Experience building automated reports or dashboards based on data analysis. Security & Compliance : Knowledge of ISO 27001 , SOC2 , GDPR , and other compliance regulations in building secure, privacy-focused applications. Version Control : Proficiency with Git for version control and collaborating on codebase. Preferred Skills: AI Services Integration : Experience with AWS Bedrock , Amazon Comprehend , or similar AI-driven services to automate data processing and analysis. Containerization : Familiarity with Docker , Kubernetes , or container orchestration for scalable deployments. DevOps : Experience with CI/CD pipelines , Terraform , and AWS CloudFormation for infrastructure as code. Testing : Strong experience in unit testing and integration testing of backend services, including the use of tools like pytest . Qualifications: Bachelor’s or Master’s degree in Computer Science , Engineering , or a related field. 4+ years of backend development experience with a focus on AI-driven applications and cloud architecture . Experience in multi-agent systems and AI/ML models is highly desirable. Proven track record of building high-performance, scalable applications in a cloud-based environment . Job Types: Full-time, Permanent Pay: ₹1,500,000.00 - ₹2,000,000.00 per year Benefits: Health insurance Paid sick time Paid time off Provident Fund Location Type: In-person Schedule: Day shift Monday to Friday Morning shift Work Location: In person Speak with the employer +91 8467947766 Expected Start Date: 30/06/2025
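As a rough, hedged sketch of the multi-agent pattern this posting describes (not the employer's code), the snippet below wires a single LangGraph node to AWS Bedrock. The state keys, prompt, and model ID are placeholders, and it assumes the langgraph StateGraph API and the boto3 bedrock-runtime Converse API.

```python
from typing import TypedDict

import boto3
from langgraph.graph import StateGraph, END

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")


class QAState(TypedDict):
    question: str
    answer: str


def answer_node(state: QAState) -> QAState:
    # One "agent": send the user question to a Bedrock model and keep the reply.
    resp = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
        messages=[{"role": "user", "content": [{"text": state["question"]}]}],
    )
    state["answer"] = resp["output"]["message"]["content"][0]["text"]
    return state


graph = StateGraph(QAState)
graph.add_node("answer", answer_node)
graph.set_entry_point("answer")
graph.add_edge("answer", END)
app = graph.compile()

print(app.invoke({"question": "Summarize our ESG emissions data gaps."})["answer"])
```

A production version would add further nodes (retrieval over the PDF/Excel corpus, MongoDB and Databricks lookups, report generation) and route between them, which is where the multi-agent design comes in.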

Posted 2 days ago

Apply

0 years

7 - 9 Lacs

Calcutta

On-site

Source: Glassdoor

Line of Service Advisory Industry/Sector Not Applicable Specialism Microsoft Management Level Senior Associate Job Description & Summary At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. Those in software engineering at PwC will focus on developing innovative software solutions to drive digital transformation and enhance business performance. In this field, you will use your knowledge to design, code, and test cutting-edge applications that revolutionise industries and deliver exceptional user experiences. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us . At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Job Description & Summary: We are seeking a Data Engineer to design, develop, and maintain data ingestion processes to a data platform built using Microsoft Technologies, ensuring data quality and integrity. The role involves collaborating with data architects and business analysts to implement solutions using tools like ADF, Azure Databricks, and requires strong SQL skills. Responsibilities: Key responsibilities include developing, testing, and optimizing ETL workflows and maintaining documentation. ETL development experience in Microsoft data track are required. Work with business team to translate the business requirement to technical requirements. Demonstrated expertise in Agile methodologies, including Scrum, Kanban, or SAFe. Mandatory skill sets: · Strong proficiency in Azure Databricks, including Spark and Delta Lake. · Experience with Azure Data Factory, Azure Data Lake Storage, and Azure SQL Database. · Proficiency in data integration and ETL processes and T-SQL. · Experienced working in Python for data engineering · Experienced working in Postgres Database · Experienced working in graph database · Experienced in architecture design and data modelling Good To Have Skill Sets: · Unity Catalog / Purview · Familiarity with Fabric/Snowflake service offerings · Visualization tool – PowerBI Preferred skill sets: Hands on knowledge of python, Pyspark and strong SQL knowledge. ETL and data warehousing is must. 
Relevant certifications (any one) (e.g., Databricks Data Engineer Associate, Microsoft Certified: Azure Data Engineer Associate, Azure Solution Architect) are mandatory. Years of experience required: 5+ years Education qualification: Bachelor's degree in Computer Science, IT, or a related field. Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Engineering Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills Data Engineering Optional Skills Acceptance Test Driven Development (ATDD), Accepting Feedback, Active Listening, Analytical Thinking, Android, API Management, Appian (Platform), Application Development, Application Frameworks, Application Lifecycle Management, Application Software, Business Process Improvement, Business Process Management (BPM), Business Requirements Analysis, C#.NET, C++ Programming Language, Client Management, Code Review, Coding Standards, Communication, Computer Engineering, Computer Science, Continuous Integration/Continuous Delivery (CI/CD), Creativity {+ 46 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Available for Work Visa Sponsorship? Government Clearance Required? Job Posting End Date

Posted 2 days ago

Apply

3.0 years

7 - 9 Lacs

Calcutta

On-site

Source: Glassdoor

Line of Service Advisory Industry/Sector FS X-Sector Specialism Data, Analytics & AI Management Level Associate Job Description & Summary At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage. *Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us . At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. " Responsibilities: Senior Associate Exp : 3 - 6 Years Location: Kolkata Technical Skills: · Strong expertise in Azure Databricks, Azure Data Factory (ADF), PySpark, SQL Server, and Python. · Solid understanding of Azure Functions and their application in data processing workflows. · Understanding of DevOps practices and CI/CD pipelines for data solutions. · Experience with other ETL tools such as Informatica Intelligent Cloud Services (IICS) is a plus. · Strong problem-solving skills and ability to work independently and collaboratively in a fast-paced environment. · Excellent communication skills to effectively convey technical concepts to non-technical stakeholders. Key Responsibilities: · Develop, maintain, and optimize scalable data pipelines using Azure Databricks, Azure Data Factory (ADF), and PySpark. · Collaborate with data architects and business stakeholders to translate requirements into technical solutions. · Implement and manage data integration processes using SQL Server and Python. · Design and deploy Azure Functions to support data processing workflows. · Monitor and troubleshoot data pipeline performance and reliability issues. · Ensure data quality, security, and compliance with industry standards and best practices. · Document technical specifications and maintain clear and concise project documentation. Mandatory skill sets: Azure Databricks, Azure Data Factory (ADF), and PySpark. Preferred skill sets: Azure Databricks, Azure Data Factory (ADF), and PySpark. 
Years of experience required: 3-6 Years Education qualification: B.E.(B.Tech)/M.E/M.Tech Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Engineering, Master of Engineering Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills Microsoft Azure Optional Skills Accepting Feedback, Accepting Feedback, Active Listening, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis, Intellectual Curiosity, Java (Programming Language), Market Development {+ 11 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date

Posted 2 days ago

Apply

3.0 years

7 - 9 Lacs

Calcutta

On-site

Source: Glassdoor

Line of Service Advisory Industry/Sector FS X-Sector Specialism Operations Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us . At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Job Description & Summary – Senior Associate – Azure Data Engineer Responsibilities: Role : Senior Associate Exp : 3 - 6 Years Location: Kolkata Technical Skills: · Strong expertise in Azure Databricks, Azure Data Factory (ADF), PySpark, SQL Server, and Python. · Solid understanding of Azure Functions and their application in data processing workflows. · Understanding of DevOps practices and CI/CD pipelines for data solutions. · Experience with other ETL tools such as Informatica Intelligent Cloud Services (IICS) is a plus. · Strong problem-solving skills and ability to work independently and collaboratively in a fast-paced environment. · Excellent communication skills to effectively convey technical concepts to non-technical stakeholders. Key Responsibilities: · Develop, maintain, and optimize scalable data pipelines using Azure Databricks, Azure Data Factory (ADF), and PySpark. · Collaborate with data architects and business stakeholders to translate requirements into technical solutions. · Implement and manage data integration processes using SQL Server and Python. · Design and deploy Azure Functions to support data processing workflows. · Monitor and troubleshoot data pipeline performance and reliability issues. · Ensure data quality, security, and compliance with industry standards and best practices. · Document technical specifications and maintain clear and concise project documentation. Mandatory skill sets: Azure Databricks, Azure Data Factory (ADF), and PySpark. Preferred skill sets: Azure Databricks, Azure Data Factory (ADF), and PySpark. 
Years of experience required: 3-6 Years Education qualification: B.E.(B.Tech)/M.E/M.Tech Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Engineering, Master of Engineering Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills Microsoft Azure, PySpark Optional Skills Accepting Feedback, Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date

Posted 2 days ago

Apply

4.0 years

0 Lacs

India

Remote

Source: LinkedIn

Role: Data Integration Engineer
Location: India | US Shift: 7:30 PM to 3:30 AM IST

About Lingaro: Lingaro Group is the end-to-end data services partner to global brands and enterprises. We lead our clients through their data journey, from strategy through development to operations and adoption, helping them to realize the full value of their data. Since 2008, Lingaro has been recognized by clients and global research and advisory firms for innovation, technology excellence, and the consistent delivery of highest-quality data services. Our commitment to data excellence has created an environment that attracts the brightest global data talent to our team.

About Data Engineering: Data engineering involves the development of solutions for the collection, transformation, storage and management of data to support data-driven decision making and enable efficient data analysis by end users. It focuses on the technical aspects of data processing, integration, and delivery to ensure that data is accurate, reliable, and accessible in a timely manner. It also focuses on the scalability, cost-effectiveness, security, and supportability of the solution. Data engineering encompasses multiple toolsets and architectural concepts across on-premises and cloud stacks, including but not limited to data warehousing, data lakes, lakehouse, and data mesh, and includes extraction, ingestion, and synchronization of structured and unstructured data across the data ecosystem. It also includes processing organization and orchestration, as well as performance optimization of data processing.

Requirement: A bachelor's or master’s degree in Computer Science, Information Systems, or a related field is typically required. Additional certifications in cloud are advantageous. Minimum of 4 years of experience in data engineering or a related field.

Must have: ETL tools, including Azure Data Factory, Azure Databricks and Data Lake - implementing data ingestion pipelines from multiple data sources. Databricks/Spark development. Very good knowledge of cloud data services, data warehousing, big data technologies, and data lakes, especially Azure and Databricks. SQL - designing, building, and managing SQL Server databases in the Azure cloud. Programming skills for data analysis (especially PySpark, Spark SQL, Python, SQL). Understanding of data visualization tools (e.g., Power BI).

Task: Provide technical expertise and direction in data engineering, guiding the team in selecting appropriate tools, technologies, and methodologies. Stay updated with the latest advancements in data engineering and ensure the team follows best practices and industry standards. Collaborate with stakeholders to understand project requirements, define scope, and create project plans. Support project managers to ensure that projects are executed effectively, meeting timelines, budgets, and quality standards. Monitor progress, identify risks, and implement mitigation strategies. Align coding standards, conduct code reviews to ensure proper code quality level. Identify and introduce quality assurance processes for data pipelines and workflows. Optimize data processing and storage for performance, efficiency and cost savings. Evaluate and implement new technologies to improve data engineering processes on various aspects (CI/CD, quality assurance, coding standards). Maintain technical documentation of the project, control its validity and perform regular reviews of it. Ensure compliance with security standards and regulations.

Why join us: Stable employment.
On the market since 2008, 1300+ talents currently on board in 7 global sites. 100% remote. Flexibility regarding working hours. Full-time position Comprehensive online onboarding program with a “Buddy” from day 1. Cooperation with top-tier engineers and experts. Unlimited access to the Udemy learning platform from day 1. Certificate training programs. Lingarians earn 500+ technology certificates yearly. Upskilling support. Capability development programs, Competency Centers, knowledge sharing sessions, community webinars, 110+ training opportunities yearly. Grow as we grow as a company. 76% of our managers are internal promotions. A diverse, inclusive, and values-driven community. Autonomy to choose the way you work. We trust your ideas. Create our community together. Refer your friends to receive bonuses. Activities to support your well-being and health. Plenty of opportunities to donate to charities and support the environment.

Posted 2 days ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

#Hiring #DataEngineer
Years of Experience: 8+ years
Location: Pune and Bangalore
Position: Lead and Architecture
Mandatory skillset: #AWS, #Databricks, #Spark, #Pyspark, #Glue, #Lambda, #RAG, #Snowflake, #Airflow
Budget: experience x 2.5 LPA

Required Skills:
* Proficiency in multiple programming languages - ideally Python
* Proficiency in at least one cluster computing framework (preferably Spark, alternatively Flink or Storm)
* Proficiency in at least one cloud data lakehouse platform (preferably AWS data lake services or Databricks, alternatively Hadoop), at least one relational data store (Postgres, Oracle or similar) and at least one NoSQL data store (Cassandra, Dynamo, MongoDB or similar)
* Proficiency in at least one scheduling/orchestration tool (preferably Airflow, alternatively AWS Step Functions or similar)
* Proficiency with data structures, data serialization formats (JSON, AVRO, Protobuf, or similar), big-data storage formats (Parquet, Iceberg, or similar), data processing methodologies (batch, micro-batching, and stream), one or more data modelling techniques (Dimensional, Data Vault, Kimball, Inmon, etc.), Agile methodology (develop PI plans and roadmaps), TDD (or BDD) and CI/CD tools (Jenkins, Git)
* Strong organizational, problem-solving and critical thinking skills; strong documentation skills

Preferred skills:
* Proficiency in IaC (preferably Terraform, alternatively AWS CloudFormation)

Interested candidates, please share your resume at talentacquisition.aaf@byldgroup.com
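For illustration only (not from the post): a minimal Airflow DAG showing the scheduling/orchestration pattern listed above, with placeholder task logic standing in for the Glue or Databricks steps; the DAG id and schedule are hypothetical and the `schedule` argument assumes Airflow 2.4+.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw files from the S3 landing zone")  # placeholder task logic


def transform():
    print("run the Spark/Databricks job on the extracted batch")  # placeholder task logic


with DAG(
    dag_id="daily_lakehouse_load",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)

    t_extract >> t_transform  # transform runs only after extract succeeds
```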

Posted 2 days ago

Apply

15.0 years

0 Lacs

Indore

On-site

Source: Glassdoor

Project Role : Data Engineer Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills : Databricks Unified Data Analytics Platform Good to have skills : Business Agility Minimum 3 year(s) of experience is required Educational Qualification : 15 years full time education Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Additionally, you will monitor and optimize data workflows to enhance performance and reliability, ensuring that data is accessible and actionable for stakeholders. Roles & Responsibilities: - Need Databricks resource with Azure cloud experience - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute in providing solutions to work related problems. - Collaborate with data architects and analysts to design scalable data solutions. - Implement best practices for data governance and security throughout the data lifecycle. Professional & Technical Skills: - Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform. - Good To Have Skills: Experience with Business Agility. - Strong understanding of data modeling and database design principles. - Experience with data integration tools and ETL processes. - Familiarity with cloud platforms and services related to data storage and processing. Additional Information: - The candidate should have minimum 3 years of experience in Databricks Unified Data Analytics Platform. - This position is based at our Pune office. - A 15 years full time education is required. 15 years full time education

Posted 2 days ago

Apply

5.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Role: Azure Data Engineer
Experience: 5-10 years
Notice Period: Immediate to 15 days
Location: Hyderabad
We are seeking a highly skilled Data Engineer to join our dynamic team.
Job Description
Mandatory Skills: Databricks, BI and ADP
Proficient in the Azure Data Platform (Storage, ADF, Databricks, DevOps).
Strong SQL skills and data model design (MS SQL, Databricks SQL).
Experience with Azure SQL Database, Azure Cosmos DB, and Azure Blob Storage.
Expertise in designing and implementing ETL processes using SSIS, Python, or PowerShell.
Fabric/Power BI (full lifecycle of models/reports: design, test, deployment, performance optimization/monitoring).
Familiarity with data modeling principles and techniques.
Excellent understanding of data security and compliance regulations.
Proficiency in Azure DevOps for continuous integration and deployment.
Ability to work in a fast-paced, collaborative environment.
Regards,
ValueLabs
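As an illustration of how ADF and Databricks typically fit together in a role like this, here is a hypothetical parameterised Databricks notebook intended to be invoked from an Azure Data Factory Notebook activity. Widget names, the storage account, paths and the target table are assumptions, and the sketch assumes a Databricks notebook where `spark` and `dbutils` are predefined.

```python
# Sketch of a parameterised Databricks notebook called from an ADF pipeline.
# Assumes a Databricks notebook context; widget names, paths and tables are illustrative.
from pyspark.sql import functions as F

dbutils.widgets.text("run_date", "")            # set by ADF at trigger time, e.g. 2024-01-31
dbutils.widgets.text("source_container", "raw")

run_date = dbutils.widgets.get("run_date")
source_container = dbutils.widgets.get("source_container")

path = f"abfss://{source_container}@examplestorageacct.dfs.core.windows.net/sales/{run_date}/"

# Tag each row with its load date so downstream jobs can filter by run.
df = spark.read.parquet(path).withColumn("run_date", F.lit(run_date))

# Append the day's slice into a Delta table used by downstream reporting.
df.write.format("delta").mode("append").saveAsTable("analytics.sales_daily")

# Return a small status payload so the ADF pipeline can log or branch on it.
dbutils.notebook.exit(f"rows_loaded={df.count()}")
```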

Posted 2 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Introduction
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in ground-breaking impact for a wide network of clients. Our culture of evolution and empathy centres on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.
Your Role and Responsibilities
As a Quality Engineer/Tester, you will work with client and IBM stakeholders to identify business controls and ensure their successful implementation. Your primary responsibilities include:
Software Tester at AT&T for BGW IoT - Microservices and Databricks; production support.
Analyze test specifications and convert them into manual/automated test cases.
Identify the initial setup, input data, appropriate steps, and the expected response in the manual test cases.
Conduct sanity testing of the application based on user requirements.
Write test cases based on the user stories.
Write positive, negative and regression test cases in Excel and import them into JIRA.
Document bugs in iTRACK and assign them to the relevant developers; retest upon fix.
Assist end users with test cases to effectively perform UAT testing.
Prepare a test plan in accordance with the business requirements.
Develop and execute manual and automated test cases for functional testing of various modules of the application.
Participate in daily Agile Scrum stand-up meetings, Program Increment (PI) planning and retrospective meetings.
Preferred Education
Master's Degree
Required Technical and Professional Expertise
Hands-on experience with API testing using Postman/SoapUI, and ELT testing with Databricks and MySQL.
Expertise in test management, data management and defect management, with hands-on experience of Agile methodology and tools such as JIRA, ADO and iTrack.
Sound knowledge of the telecom domain and billing systems.
Hands-on experience handling production support, especially reporting systems.
Experience with Kubernetes for validation and analysis of logs/data.
Preferred Technical and Professional Experience
Automation testing using TOSCA.
Experience writing complex SQL queries.
Support, coordination, and responsibility for testing activities across shores.
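To illustrate the ELT testing skill named above (validating loads between MySQL and Databricks), here is a minimal, hypothetical reconciliation check. The JDBC URL, credentials, table names and the availability of a MySQL JDBC driver on the cluster are all assumptions, and the sketch assumes a Databricks notebook where `spark` is predefined.

```python
# Sketch of an ELT reconciliation check: compare row counts between a MySQL source
# table and the Databricks target table it was loaded into. Connection details and
# table names are illustrative assumptions.
source_count = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://mysql-host:3306/billing")
    .option("dbtable", "invoices")
    .option("user", "test_user")
    .option("password", "********")
    .load()
    .count()
)

target_count = spark.table("billing_curated.invoices").count()

# Fail the test run loudly if the load dropped or duplicated rows.
assert source_count == target_count, (
    f"Row count mismatch: source={source_count}, target={target_count}"
)
print(f"Reconciliation passed: {source_count} rows in both source and target")
```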

Posted 2 days ago

Apply

8.0 - 13.0 years

25 - 35 Lacs

Hyderabad

Work from Office

Naukri logo

Passionate engineer who is interested in working at an early-stage startup building innovative cloud-native data protection solutions. Must be comfortable learning and scoping business requirements and coming up with solutions. A great opportunity for engineers who are willing to get out of their comfort zone and solve hugely impactful modern data security concerns.
Desired background and experience:
Must be a problem solver.
Minimum of 8 years of experience building products using Java and/or Python. Prior work in startups is a huge plus.
Comfortable with rapid prototyping and implementation of product in an agile team environment.
Data engineering experience with implementation of data ingestion, ETL, and data pipelines/workflows.
Experience implementing database views and UDFs to control access to data using row and column filters.
Experience with databases and warehouses; working experience in one or more of the following: AWS RDS, AWS data lake, Redshift, Athena, Databricks and Snowflake.
Working experience implementing data solutions using Spark.
Ability to write and tune complex SQL queries.
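As a sketch of the access-control technique mentioned above (views with row and column filters), here is a minimal Spark SQL example run from Python; the schema, table, view and column names are invented for illustration, and the snippet assumes a Databricks or Spark session named `spark`.

```python
# Sketch of access control via a filtered view: expose only selected columns and rows
# of a sensitive table to a restricted audience. Names are illustrative assumptions.
spark.sql("""
    CREATE OR REPLACE VIEW secure.customers_eu_masked AS
    SELECT
        customer_id,
        country,
        -- column filter: mask the email instead of exposing it directly
        concat(substring(email, 1, 2), '***') AS email_masked
    FROM raw.customers
    -- row filter: EU records only
    WHERE region = 'EU'
""")

# Downstream consumers query the view, never the underlying table.
spark.sql("SELECT * FROM secure.customers_eu_masked LIMIT 10").show()
```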

Posted 2 days ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

About Improzo
At Improzo (Improve + Zoe; meaning Life in Greek), we believe in improving life by empowering our customers. Founded by seasoned industry leaders, we are laser-focused on delivering quality-led commercial analytical solutions to our clients. Our dedicated team of experts in commercial data, technology, and operations has been evolving and learning together since our inception. Here, you won't find yourself confined to a cubicle; instead, you'll be navigating open waters, collaborating with brilliant minds to shape the future. You will work with leading life sciences clients, seasoned leaders and carefully chosen peers like you!
People are at the heart of our success, so we have defined our CARE values framework with a lot of effort, and we use it as our guiding light in everything we do. We CARE!
Customer-Centric: Client success is our success. Prioritize customer needs and outcomes in every action.
Adaptive: Agile and innovative, with a growth mindset. Pursue bold and disruptive avenues that push the boundaries of possibilities.
Respect: Deep respect for our clients and colleagues. Foster a culture of collaboration and act with honesty, transparency, and ethical responsibility.
Execution: Laser-focused on quality-led execution; we deliver! Strive for the highest quality in our services, solutions, and customer experiences.
About the Role
As the Practice Lead – Digital & Technology, you'll be a key driver in shaping and significantly expanding our offerings within the dynamic pharma commercial sector. You'll lead complex technology programs, pioneer new service lines, build and mentor a top-tier team, and act as a trusted advisor to leading pharmaceutical clients. Your expertise will guide them through critical initiatives in digital strategy, commercial analytics, and emerging technologies. You'll bring over 10 years of experience delivering large-scale tech programs in life sciences, with a strong emphasis on commercial functions. Your background should also showcase a proven ability to lead business transformations and scale practices into highly profitable business units. The compensation will be a combination of base + bonus + equity.
Key Responsibilities
Practice Strategy & Vision for Pharma Commercial
Define and rigorously execute the strategic roadmap for the Digital & Technology practice, with an unwavering focus on high-value pharma commercial solutions.
Proactively identify and assess emerging trends and disruptive technologies in pharma commercial tech (e.g., AI/ML-driven predictive analytics, omnichannel orchestration, next-gen commercial data platforms, real-world evidence integration) and cloud technologies like AWS (e.g., HealthLake, S3 for secure storage, Glue for ETL) and Azure (e.g., Health Data Services, Data Lake, Synapse Analytics).
Translate these trends into compelling, actionable offerings, working seamlessly with sales teams and senior client stakeholders.
Business Development & Thought Leadership
Drive and actively support sales pursuits by shaping winning proposals, responding to RFPs/RFIs, and developing comprehensive Statements of Work (SOWs) that often include robust commercial data strategies and cloud architecture designs.
Lead strategic client conversations focused on commercial value and co-create innovative solutions for specific pharma commercial challenges.
Represent the firm at key pharmaceutical industry events and author influential thought leadership content on pharma commercial trends and digital strategies, including the strategic use of commercial data warehouses and cloud adoption (AWS, Azure, GCP).
Collaborate closely with sales and marketing teams to develop targeted go-to-market strategies and campaigns.
Practice Management & Operations
Establish and manage the practice's operating model, including governance, agile delivery methodologies, and comprehensive knowledge management, optimized for pharma commercial projects, including data governance for commercial data assets.
Define KPIs and performance metrics; rigorously track utilization, revenue growth, and profitability.
Oversee strategic resource planning, staffing, and capacity management for scalable and efficient delivery of pharma commercial engagements.
Foster a culture of continuous learning, innovation, and cross-functional collaboration, encouraging expertise in pharma commercial domains, including commercial data architecture and analytics.
Team Building & Leadership
Hire, mentor, and lead a high-performing team of consultants, architects, and analysts with deep expertise in life sciences and pharma commercial technologies, including specialists in data warehousing, BI, and cloud solutions.
Create comprehensive career development plans and lead internal capability-building initiatives focused on market trends, new technologies, and commercial best practices relevant to the pharmaceutical industry.
Technology Program Delivery & Commercial Impact
Lead end-to-end delivery of all technical programs and projects for pharma clients, taking full ownership from strategy to execution.
Collaborate with internal practices, client teams, and senior client stakeholders to ensure solutions address commercial pain points and deliver measurable impact (e.g., enhanced sales force effectiveness, optimized patient engagement).
Serve as the primary escalation point, leveraging experience to keep projects on track and ensure exceptional client satisfaction.
Commercial Offering Development
Design and package consulting services around critical commercial functions: CRM optimization (Veeva, Salesforce Health Cloud), Master Data Management (MDM) for commercial data (HCP, HCO, product), advanced commercial data platforms, field force effectiveness, digital engagement strategies, and patient support programs.
Lead the design, architecture, and implementation of robust commercial data warehouses and data lakes, leveraging cloud-native services such as Redshift or Snowflake on AWS, or Synapse Analytics or Databricks on Azure. These will serve as the single source of truth for critical commercial insights (e.g., sales data, patient data, claims data, syndicated data).
Develop robust accelerators, frameworks, and reusable assets tailored for the pharma commercial lifecycle, enabling rapid deployment and demonstrable value.
Client Engagement & Delivery Oversight
Act as a senior advisor to pharmaceutical clients, guiding them through complex digital transformation journeys impacting their commercial models, with a strong focus on leveraging commercial data assets and cloud-based solutions.
Provide strategic oversight on key engagements to ensure delivery excellence, client satisfaction, and achievement of defined commercial outcomes.
Resolve escalations and ensure alignment between client expectations and delivery outcomes, especially in high-stakes commercial projects involving data integration, analytics, and cloud adoption.
Compliance & Regulatory Acumen
Ensure all digital and technology solutions, including commercial data warehouses, intelligent RPA implementations, and cloud deployments on AWS and Azure, adhere to relevant pharmaceutical industry regulations and compliance standards (e.g., FDA, HIPAA, GDPR, GxP, global/local promotional guidelines).
Minimize risk and uphold ethical commercial practices, especially regarding patient privacy and data security in cloud environments.
Others
Strong understanding of the life sciences and bio-pharma industry is highly desirable.
Comfortable with commercial datasets: sales data from IQVIA, Symphony, Komodo, etc., and CRM data from Veeva, OCE, etc.
Excellent communication, presentation, and interpersonal skills.
Excellent problem-solving, analytical, and decision-making skills.
Attention to detail, with a bias for quality and client centricity.
Ability to work independently and as part of a cross-functional team.
Strong leadership, mentoring, and coaching skills.
Benefits
Competitive salary and benefits package, including stock options.
Opportunity to work on cutting-edge tech projects, transforming the life sciences industry.
Collaborative and supportive work environment.
Opportunities for professional development and growth.
Skills: cloud, practice management, digital strategy, mentoring, commercial data platforms, commercial analytics, analytics, omnichannel orchestration, CRM optimization (Veeva, Salesforce Health Cloud), emerging technologies, cloud technologies (AWS, Azure), business transformation, pharma commercial, data, team building, master data management (MDM), life sciences, digital, data warehousing, AI/ML-driven predictive analytics, data governance, project management, technology

Posted 2 days ago

Apply

3.0 - 6.0 years

5 - 8 Lacs

Hyderabad, Bengaluru, Delhi / NCR

Work from Office

Naukri logo

We are seeking an experienced Azure Data Engineer with 3-6 years of experience for a 6-month remote contract. The candidate will be responsible for developing and supporting IT solutions using technologies like Azure Data Factory, Azure Databricks, Azure Synapse, Python, PySpark, Teradata, and Snowflake. The role involves designing ETL pipelines, developing Databricks notebooks, handling CI/CD pipelines via Azure DevOps, and working on data warehouse modeling and integration. Strong skills in SQL, data lake storage, and deployment/monitoring are required. Prior experience with Power BI and the DP-203 certification is a plus. Location: Remote - Bengaluru, Hyderabad, Delhi / NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai

Posted 2 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

About the Company
We are seeking a highly skilled Senior AI/ML Engineer to join our team in a hybrid (offshore) role. The ideal candidate will have a strong background in machine learning engineering, particularly with the Databricks ML stack, Python, and ML Ops. This role focuses on developing, deploying, and maintaining high-performance machine learning models. Experience with Azure Cognitive Services, Azure OpenAI, and AI governance will be an added advantage.
About the Role
The Senior AI/ML Engineer will be responsible for various aspects of machine learning development, ML Ops, model performance, Azure integration, AI governance, and collaboration with cross-functional teams.
Responsibilities
Machine Learning Development: Design, develop, and deploy machine learning models using the Databricks ML stack. Write efficient and scalable code in Python for model training and evaluation. Implement feature engineering, data preprocessing, and model tuning techniques to enhance model performance.
ML Ops: Establish and manage ML Ops pipelines for continuous integration and continuous deployment (CI/CD) of machine learning models. Automate model training, testing, and deployment processes to ensure robust and reliable model performance. Monitor and maintain deployed models to ensure they meet performance and reliability standards.
Model Performance: Conduct rigorous testing and validation of machine learning models to ensure accuracy, efficiency, and scalability. Utilize performance metrics to assess model effectiveness and make data-driven improvements. Collaborate with data scientists and engineers to troubleshoot and resolve model performance issues.
Azure Integration: Integrate machine learning models with Azure services, leveraging tools like Azure Cognitive Services and Azure OpenAI. Utilize Azure resources for model training, deployment, and management. Explore and implement AI solutions using Azure's advanced capabilities.
AI Governance: Ensure compliance with AI governance policies and best practices. Implement ethical AI practices and contribute to the development of governance frameworks. Monitor and address potential biases and ethical concerns in machine learning models.
Collaboration and Coordination: Work closely with cross-functional teams, including data scientists, engineers, and business stakeholders. Coordinate with offshore teams to ensure seamless collaboration and project alignment. Participate in team meetings, providing insights and updates on project progress.
Required Skills
Strong background in machine learning engineering, particularly with the Databricks ML stack, Python, and ML Ops; exposure to RAG, GenAI, and chatbot development.
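As a small illustration of the tracking side of the ML Ops workflow described above, here is a minimal MLflow sketch that trains a model and logs parameters, a metric and the model artifact. The dataset, hyperparameters and run name are invented for illustration; on Databricks the run would land in the workspace's experiment tracking automatically.

```python
# Minimal MLflow tracking sketch: train a model, log params, metrics and the artifact
# so it can be promoted through an ML Ops pipeline. Dataset and values are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

with mlflow.start_run(run_name="rf_baseline"):
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, artifact_path="model")
```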

Posted 2 days ago

Apply

6.0 - 9.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote

Linkedin logo

We are seeking a talented individual to join our Data Engineering team at Marsh McLennan. This role will be based in Mumbai/Pune/Gurgaon. This is a hybrid role that requires working at least three days a week in the office.
We will count on you to:
Solution Architecture: Lead the design and architecture of data engineering solutions that meet complex business requirements, ensuring scalability, reliability, and performance.
Data Pipeline Development: Oversee the development and maintenance of robust data pipelines and architectures to facilitate data ingestion, transformation, and storage from various sources.
Cloud Technologies Expertise: Utilize cloud data engineering tools such as Azure Data Factory, Databricks, or Amazon data engineering tools to implement and optimize data solutions.
Data Integration and Management: Integrate and manage data from diverse sources, ensuring seamless data flow and accessibility for analytics and reporting purposes.
Data Quality Assurance: Establish and enforce data quality standards and validation processes to ensure the accuracy, consistency, and reliability of data across the organization.
Performance Optimization: Monitor, troubleshoot, and optimize data pipelines for performance, scalability, and cost-effectiveness, making adjustments as necessary to improve efficiency.
Collaboration and Leadership: Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver effective solutions. Provide mentorship and guidance to junior team members.
Documentation and Best Practices: Create and maintain comprehensive documentation for data architectures, pipelines, and processes, promoting best practices and knowledge sharing within the team.
Technical Skills: Utilize SQL for data manipulation and retrieval, and apply programming skills in languages such as Python or Scala for data processing tasks.
Continuous Improvement: Stay abreast of industry trends and emerging technologies in data engineering, proactively seeking opportunities to enhance existing processes, tools, and methodologies.
What you need to have:
Bachelor's or master's degree in computer science, information technology, data engineering, or a related field.
6-9 years of experience in data engineering, database, ETL or data management roles, with a focus on solutioning and architecture.
Proven expertise in cloud data engineering tools such as Azure Data Factory, Databricks, or AWS data engineering tools.
Strong proficiency in SQL and Python, and experience with ETL processes and tools.
Familiarity with data warehousing concepts and technologies, as well as big data frameworks.
Proficiency in extracting data from multiple data sources (web, PDF, Excel or any database), with broad working knowledge of methodologies used for analytics.
What makes you stand out?
Degree or certification in data engineering (AWS, Databricks) preferred.
Experience in the healthcare/insurance sector, working with multinational clients.
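To illustrate the data quality assurance responsibility listed above, here is a minimal, hypothetical validation step checking null rates and duplicate keys before a table is published. The table name, key column and thresholds are assumptions, and the sketch assumes a Databricks or Spark session named `spark`.

```python
# Sketch of a data quality gate: check completeness and uniqueness before publishing.
# Table name, key column and thresholds are illustrative assumptions.
from pyspark.sql import functions as F

df = spark.table("staging.policy_transactions")

total_rows = df.count()
null_keys = df.filter(F.col("policy_id").isNull()).count()
duplicate_keys = total_rows - df.dropDuplicates(["policy_id"]).count()

null_rate = null_keys / total_rows if total_rows else 0.0

checks = {
    "null_rate_below_1pct": null_rate < 0.01,
    "no_duplicate_keys": duplicate_keys == 0,
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")
print(f"All checks passed on {total_rows} rows")
```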
Why join our team:
We help you be your best through professional development opportunities, interesting work and supportive leaders.
We foster a vibrant and inclusive culture where you can work with talented colleagues to create new solutions and have impact for colleagues, clients and communities.
Our scale enables us to provide a range of career opportunities, as well as benefits and rewards.
Marsh McLennan (NYSE: MMC) is a global leader in risk, strategy and people, advising clients in 130 countries across four businesses: Marsh, Guy Carpenter, Mercer and Oliver Wyman. With annual revenue of $23 billion and more than 85,000 colleagues, Marsh McLennan helps build the confidence to thrive through the power of perspective. For more information, visit marshmclennan.com, or follow on LinkedIn and X.
Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people and embrace diversity of age, background, caste, disability, ethnic origin, family duties, gender orientation or expression, gender reassignment, marital status, nationality, parental status, personal or social status, political affiliation, race, religion and beliefs, sex/gender, sexual orientation or expression, skin color, or any other characteristic protected by applicable law.
Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one "anchor day" per week on which their full team will be together in person.

Posted 2 days ago

Apply

5.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Job Description
Role Purpose
The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations and developing technical capability within the Production Specialists.
Do
Oversee and support the process by reviewing daily transactions on performance parameters.
Review the performance dashboard and the scores for the team.
Support the team in improving performance parameters by providing technical support and process guidance.
Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions.
Ensure standard processes and procedures are followed to resolve all client queries.
Resolve client queries as per the SLAs defined in the contract.
Develop understanding of the process/product for the team members to facilitate better client interaction and troubleshooting.
Document and analyze call logs to spot the most frequent trends and prevent future problems.
Identify red flags and escalate serious client issues to the team leader in cases of untimely resolution.
Ensure all product information and disclosures are given to clients before and after the call/email requests.
Avoid legal challenges by monitoring compliance with service agreements.
Handle technical escalations through effective diagnosis and troubleshooting of client queries.
Manage and resolve technical roadblocks/escalations as per SLA and quality requirements.
If unable to resolve the issues, escalate them to TA & SES in a timely manner.
Provide product support and resolution to clients by performing a question diagnosis while guiding users through step-by-step solutions.
Troubleshoot all client queries in a user-friendly, courteous and professional manner.
Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business.
Organize ideas and effectively communicate oral messages appropriate to listeners and situations.
Follow up and make scheduled call-backs to customers to record feedback and ensure compliance with contract SLAs.
Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client.
Mentor and guide Production Specialists on improving technical knowledge.
Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists.
Develop and conduct trainings (triages) within products for Production Specialists as per target.
Inform the client about the triages being conducted.
Undertake product trainings to stay current with product features, changes and updates.
Enroll in product-specific and any other trainings per client requirements/recommendations.
Identify and document the most common problems and recommend appropriate resolutions to the team.
Update job knowledge by participating in self-learning opportunities and maintaining personal networks.
Deliver
No. | Performance Parameter | Measure
1 | Process | No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT
2 | Team Management | Productivity, efficiency, absenteeism
3 | Capability Development | Triages completed, technical test performance
Mandatory Skills: DataBricks - Data Engineering
Experience: 5-8 Years
Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention - of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 2 days ago

Apply

7.0 years

0 Lacs

India

On-site

Linkedin logo

We are seeking an experienced Databricks Administrator with 7+ years of expertise in managing and optimizing Databricks environments. The ideal candidate should have hands-on experience with Azure/AWS Databricks, cluster management, security configurations, and performance optimization. This role requires close collaboration with data engineering and analytics teams to ensure smooth operations and scalability.
About the Role
The Databricks Administrator will be responsible for managing and optimizing Databricks environments, ensuring performance, security, and scalability.
Responsibilities
Deploy, configure, and manage Databricks workspaces, clusters, and jobs.
Monitor and optimize Databricks performance, auto-scaling, and cost management.
Implement security best practices, including role-based access control (RBAC) and encryption.
Manage Databricks integration with cloud storage (Azure Data Lake, S3, etc.) and other data services.
Automate infrastructure provisioning and management using Terraform, ARM templates, or CloudFormation.
Troubleshoot Databricks runtime issues, job failures, and performance bottlenecks.
Support CI/CD pipelines for Databricks workloads and notebooks.
Collaborate with data engineering teams to enhance ETL pipelines and data processing workflows.
Ensure compliance with data governance policies and regulatory requirements.
Maintain and upgrade Databricks versions and libraries as needed.
Qualifications
7+ years of experience as a Databricks Administrator or in a similar role.
Strong knowledge of Azure/AWS Databricks and cloud computing platforms.
Hands-on experience with Databricks clusters, notebooks, libraries, and job scheduling.
Expertise in Spark optimization, data caching, and performance tuning.
Proficiency in Python, Scala, or SQL for data processing.
Experience with Terraform, ARM templates, or CloudFormation for infrastructure automation.
Familiarity with Git, DevOps, and CI/CD pipelines.
Strong problem-solving skills and ability to troubleshoot Databricks-related issues.
Excellent communication and stakeholder management skills.
Preferred Skills
Databricks certifications (e.g., Databricks Certified Associate/Professional).
Experience with Delta Lake, Unity Catalog, and MLflow.
Knowledge of Kubernetes, Docker, and containerized workloads.
Experience with big data ecosystems (Hadoop, Apache Airflow, Kafka, etc.).
Shift Timing: 10 PM to 6 AM IST (fixed)
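As an illustration of the cluster management and cost optimization duties listed above, here is a minimal, hypothetical admin script that lists workspace clusters via the Databricks REST API and flags any without an auto-termination timeout. The workspace URL, token handling and the exact response fields are assumptions based on the 2.0 Clusters API, so treat this as a sketch rather than a definitive implementation.

```python
# Sketch of a small admin/cost-management check against the Databricks REST API.
# DATABRICKS_HOST and DATABRICKS_TOKEN are assumed environment variables.
import os
import requests

host = os.environ["DATABRICKS_HOST"]      # e.g. https://adb-1234567890123456.7.azuredatabricks.net
token = os.environ["DATABRICKS_TOKEN"]    # personal access token or service principal token

resp = requests.get(
    f"{host}/api/2.0/clusters/list",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()

# Report each cluster's state and auto-termination setting; flag always-on clusters.
for cluster in resp.json().get("clusters", []):
    name = cluster.get("cluster_name")
    state = cluster.get("state")
    autoterm = cluster.get("autotermination_minutes", 0)
    flag = "" if autoterm else "  <-- no auto-termination configured"
    print(f"{name} [{state}] autotermination_minutes={autoterm}{flag}")
```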

Posted 2 days ago

Apply

5.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Linkedin logo

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. You will be responsible for ensuring that the applications are developed and implemented effectively to support the organization's needs. This role requires expertise in the Databricks Unified Data Analytics Platform and a strong understanding of application development principles.
Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Develop and maintain applications based on business requirements.
- Perform code reviews and ensure adherence to coding standards.
- Collaborate with cross-functional teams to gather requirements and design solutions.
- Troubleshoot and debug application issues.
- Create technical documentation for reference and reporting purposes.
Professional & Technical Skills:
- Must Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of application development principles.
- Experience in designing and building applications to meet business requirements.
- Knowledge of programming languages such as Java, Python, or Scala.
- Familiarity with cloud platforms such as AWS or Azure.
- Experience with database technologies and SQL.
- Good To Have Skills: Experience with data engineering and ETL processes.
- Knowledge of containerization technologies such as Docker or Kubernetes.
Additional Information:
- The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.

Posted 2 days ago

Apply

7.5 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Linkedin logo

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Collibra Data Quality & Observability
Good to have skills: Collibra Data Governance
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing solutions that align with business objectives, and ensuring that applications are functioning optimally. You will also engage in problem-solving activities, providing support and enhancements to existing applications while maintaining a focus on quality and efficiency.
Key Responsibilities:
Configure and implement Collibra Data Quality (CDQ) rules, workflows, dashboards, and data quality scoring metrics.
Collaborate with data stewards, data owners, and business analysts to define data quality KPIs and thresholds.
Develop data profiling and rule-based monitoring using CDQ's native rule engine or integrations (e.g., with Informatica, Talend, or BigQuery).
Build and maintain data quality dashboards and issue management workflows within Collibra.
Integrate CDQ with Collibra Data Intelligence Cloud for end-to-end governance visibility.
Drive root cause analysis and remediation plans for data quality issues.
Support metadata and lineage enrichment to improve data traceability.
Document standards, rule logic, and DQ policies in the Collibra Catalog.
Conduct user training and promote data quality best practices across teams.
Required Skills and Experience:
3+ years of experience in data quality, metadata management, or data governance.
Hands-on experience with the Collibra Data Quality & Observability (CDQ) platform.
Knowledge of Collibra Data Intelligence Cloud, including Catalog, Glossary, and Workflow Designer.
Proficiency in SQL and understanding of data profiling techniques.
Experience integrating CDQ with enterprise data sources (Snowflake, BigQuery, Databricks, etc.).
Familiarity with data governance frameworks and data quality dimensions (accuracy, completeness, consistency, etc.).
Excellent analytical, problem-solving, and communication skills.
Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Collibra Data Quality & Observability.
- This position is based in Mumbai.
- A 15 years full time education is required.
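To illustrate the data profiling skill named above, here is a minimal, hypothetical profiling pass over a table such as one exposed via Databricks, computing the completeness and uniqueness metrics that data quality rules typically score. The table name is an assumption, the sketch uses PySpark rather than CDQ's own rule engine, and it assumes a Spark session named `spark`.

```python
# Minimal data profiling sketch: compute completeness (non-null rate) and distinct
# counts per column. The table name is an illustrative assumption.
from pyspark.sql import functions as F

df = spark.table("crm.hcp_master")
total = df.count()

profile = []
for column in df.columns:
    non_null = df.filter(F.col(column).isNotNull()).count()
    distinct = df.select(column).distinct().count()
    profile.append({
        "column": column,
        "completeness": round(non_null / total, 4) if total else None,
        "distinct_values": distinct,
    })

for row in profile:
    print(row)
```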

Posted 2 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.


Featured Companies