3.0 - 8.0 years
5 - 15 Lacs
Bengaluru
Work from Office
Role : Data Engineer Location : Bangalore Experience : 3+ Yrs Employment Type : Full Time, Permanent Working mode : Regular Job Description : Utilizes software engineering principles to deploy and maintain fully automated data transformation pipelines that combine a large variety of storage and computation technologies to handle a distribution of data types and volumes in support of data architecture design. Key Responsibilities : A Data Engineer designs data products and data pipelines that are resilient to change, modular, flexible, scalable, reusable, and cost-effective. - Design, develop, and maintain data pipelines and ETL processes using Microsoft Azure services (e.g., Azure Data Factory, Azure Synapse, Azure Databricks, Microsoft Fabric). - Utilize Azure data storage accounts for organizing and maintaining data pipeline outputs (e.g., Azure Data Lake Storage Gen2 and Azure Blob Storage). - Collaborate with data scientists, data analysts, data architects, and other stakeholders to understand data requirements and deliver high-quality data solutions. - Optimize data pipelines in the Azure environment for performance, scalability, and reliability. - Ensure data quality and integrity through data validation techniques and frameworks. - Develop and maintain documentation for data processes, configurations, and best practices. - Monitor and troubleshoot data pipeline issues to ensure timely resolution. - Stay current with industry trends and emerging technologies to ensure our data solutions remain cutting-edge. - Manage the CI/CD process for deploying and maintaining data solutions.
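For illustration only - a minimal PySpark sketch of the kind of Azure pipeline this posting describes, reading raw CSVs from ADLS Gen2 and writing a curated Delta table. The storage account, container, and column names are hypothetical placeholders, and the snippet assumes a Databricks runtime with Delta available:

```python
# Minimal Databricks notebook sketch: ingest CSV from ADLS Gen2, transform, write Delta.
# Storage account, container, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

source = "abfss://raw@examplelake.dfs.core.windows.net/orders/2024/"
target = "abfss://curated@examplelake.dfs.core.windows.net/delta/orders"

orders = (
    spark.read.option("header", True).csv(source)
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount") > 0)          # basic validity rule
)

(orders.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")            # partition for downstream date-range queries
    .save(target))
```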
Posted 1 day ago
6.0 - 10.0 years
8 - 12 Lacs
Bengaluru
Work from Office
The Azure Databricks Engineer plays a critical role in establishing and maintaining an efficient data ecosystem within an organization. This position is integral to the development of data solutions leveraging the capabilities of Microsoft Azure Databricks. The engineer will work closely with data scientists and analytics teams to facilitate the transformation of raw data into actionable insights. With increasing reliance on big data technologies and cloud-based solutions, having an expert on board is vital for driving data-driven decision-making processes. The Azure Databricks Engineer will also be responsible for optimizing data workflows, ensuring data quality, and deploying scalable data solutions that align with organizational goals. This role requires not only technical expertise in handling large volumes of data but also the ability to collaborate across various functional teams to enhance operational efficiency. Key Responsibilities : - Design and implement scalable data pipelines using Azure Databricks. - Develop ETL processes to efficiently extract, transform, and load data. - Collaborate with data scientists and analysts to define and refine data requirements. - Optimize Spark jobs for performance and efficiency. - Monitor and troubleshoot production workflows and jobs. - Implement data quality checks and validation processes. - Create and maintain technical documentation related to data architecture. - Conduct code reviews to ensure best practices are followed. - Work on integrating data from various sources including databases, APIs, and third-party services. - Utilize SQL and Python for data manipulation and analysis. - Collaborate with DevOps teams to deploy and maintain data solutions. - Stay updated with the latest trends and updates in Azure Databricks and related technologies. - Facilitate data visualization initiatives for better data-driven insights. - Provide training and support to team members on data tools and practices. - Participate in cross-functional projects to enhance data sharing and access. Qualifications : - Bachelor's degree in Computer Science, Information Technology, or a related field. - Minimum of 6 years of experience in data engineering or a related domain. - Strong expertise in Azure Databricks and data lake concepts. - Proficiency with SQL, Python, and Spark. - Solid understanding of data warehousing concepts. - Experience with ETL tools and frameworks. - Familiarity with cloud platforms such as Azure, AWS, or Google Cloud. - Excellent problem-solving and analytical skills. - Ability to work collaboratively in a diverse team environment. - Experience with data visualization tools such as Power BI or Tableau. - Strong communication skills with the ability to convey technical concepts to non-technical stakeholders. - Knowledge of data governance and data quality best practices. - Hands-on experience with big data technologies and frameworks. - A relevant certification in Azure is a plus. - Ability to adapt to changing technologies and evolving business requirements.
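As a hedged illustration of the "optimize Spark jobs" responsibility, a short sketch of common tuning levers: broadcast joins, adaptive query execution, and key-based repartitioning. Table and column names are hypothetical:

```python
# Sketch: common Spark tuning levers for a join-heavy workload.
# Table and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("spark-tuning-demo").getOrCreate()

# Adaptive Query Execution re-plans joins at runtime and splits skewed partitions.
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.adaptive.skewJoin.enabled", "true")

facts = spark.table("sales_facts")        # large fact table
dims = spark.table("product_dim")         # small dimension table

# Broadcast the small dimension to avoid shuffling the large side.
joined = facts.join(F.broadcast(dims), "product_id")

# Repartition by the aggregation key before a wide operation to balance tasks.
daily = (joined
    .repartition("sale_date")
    .groupBy("sale_date")
    .agg(F.sum("amount").alias("revenue")))

daily.write.format("delta").mode("overwrite").saveAsTable("daily_revenue")
```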
Posted 2 days ago
6.0 - 10.0 years
9 - 13 Lacs
Mumbai
Work from Office
Job Description : The Azure Databricks Engineer plays a critical role in establishing and maintaining an efficient data ecosystem within an organization. This position is integral to the development of data solutions leveraging the capabilities of Microsoft Azure Databricks. The engineer will work closely with data scientists and analytics teams to facilitate the transformation of raw data into actionable insights. With increasing reliance on big data technologies and cloud-based solutions, having an expert on board is vital for driving data-driven decision-making processes. The Azure Databricks Engineer will also be responsible for optimizing data workflows, ensuring data quality, and deploying scalable data solutions that align with organizational goals. This role requires not only technical expertise in handling large volumes of data but also the ability to collaborate across various functional teams to enhance operational efficiency. Key Responsibilities : - Design and implement scalable data pipelines using Azure Databricks. - Develop ETL processes to efficiently extract, transform, and load data. - Collaborate with data scientists and analysts to define and refine data requirements. - Optimize Spark jobs for performance and efficiency. - Monitor and troubleshoot production workflows and jobs. - Implement data quality checks and validation processes. - Create and maintain technical documentation related to data architecture. - Conduct code reviews to ensure best practices are followed. - Work on integrating data from various sources including databases, APIs, and third-party services. - Utilize SQL and Python for data manipulation and analysis. - Collaborate with DevOps teams to deploy and maintain data solutions. - Stay updated with the latest trends and updates in Azure Databricks and related technologies. - Facilitate data visualization initiatives for better data-driven insights. - Provide training and support to team members on data tools and practices. - Participate in cross-functional projects to enhance data sharing and access. Qualifications : - Bachelor's degree in Computer Science, Information Technology, or a related field. - Minimum of 6 years of experience in data engineering or a related domain. - Strong expertise in Azure Databricks and data lake concepts. - Proficiency with SQL, Python, and Spark. - Solid understanding of data warehousing concepts. - Experience with ETL tools and frameworks. - Familiarity with cloud platforms such as Azure, AWS, or Google Cloud. - Excellent problem-solving and analytical skills. - Ability to work collaboratively in a diverse team environment. - Experience with data visualization tools such as Power BI or Tableau. - Strong communication skills with the ability to convey technical concepts to non-technical stakeholders. - Knowledge of data governance and data quality best practices. - Hands-on experience with big data technologies and frameworks. - A relevant certification in Azure is a plus. - Ability to adapt to changing technologies and evolving business requirements.
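A minimal sketch of the "data quality checks and validation" duty: simple rule-based gates that fail the run before a table is published. The staging table and rules are hypothetical:

```python
# Sketch: lightweight data-quality gates before publishing a curated table.
# The staging table and the specific rules are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.table("staging.customers")

total = df.count()
checks = {
    "null_customer_id": df.filter(F.col("customer_id").isNull()).count(),
    "duplicate_customer_id": total - df.select("customer_id").distinct().count(),
    "bad_email": df.filter(~F.col("email").rlike(r"^[^@\s]+@[^@\s]+$")).count(),
}

failed = {name: n for name, n in checks.items() if n > 0}
if failed:
    # Fail the run so the orchestrator can alert and retry.
    raise ValueError(f"Data quality checks failed: {failed}")

df.write.format("delta").mode("overwrite").saveAsTable("curated.customers")
```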
Posted 2 days ago
6.0 - 10.0 years
9 - 13 Lacs
Kolkata
Work from Office
The Azure Databricks Engineer plays a critical role in establishing and maintaining an efficient data ecosystem within an organization. This position is integral to the development of data solutions leveraging the capabilities of Microsoft Azure Databricks. The engineer will work closely with data scientists and analytics teams to facilitate the transformation of raw data into actionable insights. With increasing reliance on big data technologies and cloud-based solutions, having an expert on board is vital for driving data-driven decision-making processes. The Azure Databricks Engineer will also be responsible for optimizing data workflows, ensuring data quality, and deploying scalable data solutions that align with organizational goals. This role requires not only technical expertise in handling large volumes of data but also the ability to collaborate across various functional teams to enhance operational efficiency. Key Responsibilities : - Design and implement scalable data pipelines using Azure Databricks. - Develop ETL processes to efficiently extract, transform, and load data. - Collaborate with data scientists and analysts to define and refine data requirements. - Optimize Spark jobs for performance and efficiency. - Monitor and troubleshoot production workflows and jobs. - Implement data quality checks and validation processes. - Create and maintain technical documentation related to data architecture. - Conduct code reviews to ensure best practices are followed. - Work on integrating data from various sources including databases, APIs, and third-party services. - Utilize SQL and Python for data manipulation and analysis. - Collaborate with DevOps teams to deploy and maintain data solutions. - Stay updated with the latest trends and updates in Azure Databricks and related technologies. - Facilitate data visualization initiatives for better data-driven insights. - Provide training and support to team members on data tools and practices. - Participate in cross-functional projects to enhance data sharing and access. Qualifications : - Bachelor's degree in Computer Science, Information Technology, or a related field. - Minimum of 6 years of experience in data engineering or a related domain. - Strong expertise in Azure Databricks and data lake concepts. - Proficiency with SQL, Python, and Spark. - Solid understanding of data warehousing concepts. - Experience with ETL tools and frameworks. - Familiarity with cloud platforms such as Azure, AWS, or Google Cloud. - Excellent problem-solving and analytical skills. - Ability to work collaboratively in a diverse team environment. - Experience with data visualization tools such as Power BI or Tableau. - Strong communication skills with the ability to convey technical concepts to non-technical stakeholders. - Knowledge of data governance and data quality best practices. - Hands-on experience with big data technologies and frameworks. - A relevant certification in Azure is a plus. - Ability to adapt to changing technologies and evolving business requirements.
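One way the incremental-load side of such a role is often handled is an idempotent Delta Lake MERGE; a small sketch follows with illustrative table names (a generic pattern, not this employer's actual pipeline):

```python
# Sketch: idempotent upsert into a Delta table with MERGE.
# Table names are illustrative.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

updates = spark.table("staging.orders_increment")   # today's changed rows
target = DeltaTable.forName(spark, "curated.orders")

(target.alias("t")
    .merge(updates.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()      # update rows that already exist
    .whenNotMatchedInsertAll()   # insert genuinely new rows
    .execute())
```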
Posted 2 days ago
6.0 - 8.0 years
9 - 13 Lacs
Mumbai
Work from Office
Job Title : Sr. Data Engineer - Ontology & Knowledge Graph Specialist. Department : Platform Engineering. Role Description : This is a remote contract role for a Data Engineer (Ontology, 5+ yrs) at Zorba AI. We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company. Responsibilities : Ontology Development : - Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards. - Collaborate with domain experts to capture and formalize domain knowledge into ontological structures. - Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes. Data Modeling : - Design and implement semantic and syntactic data models that adhere to ontological principles. - Create data models that are scalable, flexible, and adaptable to changing business needs. - Integrate data models with existing data infrastructure and applications. Knowledge Graph Implementation : - Design and build knowledge graphs based on ontologies and data models. - Develop algorithms and tools for knowledge graph population, enrichment, and maintenance. - Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems. Data Quality and Governance : - Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs. - Define and implement data governance processes and standards for ontology development and maintenance. Collaboration and Communication : - Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions. - Communicate complex technical concepts clearly and effectively to diverse audiences. Qualifications : Education : Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Experience : 5+ years of experience in data engineering or a related role. - Proven experience in ontology development using BFO and CCO or similar ontological frameworks. - Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL. - Proficiency in Python, SQL, and other programming languages used for data engineering. - Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus. Desired Skills : - Familiarity with machine learning and natural language processing techniques. - Experience with cloud-based data platforms (e.g., AWS, Azure, GCP). - Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon. - Strong problem-solving and analytical skills. - Excellent communication and interpersonal skills.
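For a feel of the semantic-web stack the posting lists (RDF, SPARQL), a tiny rdflib sketch that builds a toy graph and queries it. The namespace and classes are hypothetical stand-ins, not real BFO/CCO IRIs:

```python
# Sketch: build a toy knowledge-graph fragment with rdflib and query it with SPARQL.
# The ex: namespace and classes are hypothetical stand-ins, not BFO/CCO terms.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.com/ontology/")
g = Graph()
g.bind("ex", EX)

# A class, an instance, and a relationship
g.add((EX.Supplier, RDF.type, RDFS.Class))
g.add((EX.acme, RDF.type, EX.Supplier))
g.add((EX.acme, RDFS.label, Literal("Acme Corp")))
g.add((EX.acme, EX.supplies, EX.widgetLine))

# SPARQL over the in-memory graph
results = g.query("""
    PREFIX ex: <http://example.com/ontology/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?supplier ?label WHERE {
        ?supplier a ex:Supplier ;
                  rdfs:label ?label .
    }
""")
for row in results:
    print(row.supplier, row.label)
```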
Posted 2 days ago
6.0 - 10.0 years
9 - 13 Lacs
Chennai
Work from Office
The Azure Databricks Engineer plays a critical role in establishing and maintaining an efficient data ecosystem within an organization. This position is integral to the development of data solutions leveraging the capabilities of Microsoft Azure Databricks. The engineer will work closely with data scientists and analytics teams to facilitate the transformation of raw data into actionable insights. With increasing reliance on big data technologies and cloud-based solutions, having an expert on board is vital for driving data-driven decision-making processes. The Azure Databricks Engineer will also be responsible for optimizing data workflows, ensuring data quality, and deploying scalable data solutions that align with organizational goals. This role requires not only technical expertise in handling large volumes of data but also the ability to collaborate across various functional teams to enhance operational efficiency. Key Responsibilities : - Design and implement scalable data pipelines using Azure Databricks. - Develop ETL processes to efficiently extract, transform, and load data. - Collaborate with data scientists and analysts to define and refine data requirements. - Optimize Spark jobs for performance and efficiency. - Monitor and troubleshoot production workflows and jobs. - Implement data quality checks and validation processes. - Create and maintain technical documentation related to data architecture. - Conduct code reviews to ensure best practices are followed. - Work on integrating data from various sources including databases, APIs, and third-party services. - Utilize SQL and Python for data manipulation and analysis. - Collaborate with DevOps teams to deploy and maintain data solutions. - Stay updated with the latest trends and updates in Azure Databricks and related technologies. - Facilitate data visualization initiatives for better data-driven insights. - Provide training and support to team members on data tools and practices. - Participate in cross-functional projects to enhance data sharing and access. Qualifications : - Bachelor's degree in Computer Science, Information Technology, or a related field. - Minimum of 6 years of experience in data engineering or a related domain. - Strong expertise in Azure Databricks and data lake concepts. - Proficiency with SQL, Python, and Spark. - Solid understanding of data warehousing concepts. - Experience with ETL tools and frameworks. - Familiarity with cloud platforms such as Azure, AWS, or Google Cloud. - Excellent problem-solving and analytical skills. - Ability to work collaboratively in a diverse team environment. - Experience with data visualization tools such as Power BI or Tableau. - Strong communication skills with the ability to convey technical concepts to non-technical stakeholders. - Knowledge of data governance and data quality best practices. - Hands-on experience with big data technologies and frameworks. - A relevant certification in Azure is a plus. - Ability to adapt to changing technologies and evolving business requirements.
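Sketching the "integrating data from APIs" bullet: paging through a hypothetical REST endpoint with requests and landing the rows in a Delta table. The endpoint, parameters, and table names are assumptions:

```python
# Sketch: pull a paginated REST API into a Spark DataFrame and land it in Delta.
# The endpoint, pagination scheme, and table names are hypothetical.
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("api-ingest").getOrCreate()

rows, page = [], 1
while True:
    resp = requests.get(
        "https://api.example.com/v1/events",
        params={"page": page, "page_size": 500},
        timeout=30,
    )
    resp.raise_for_status()
    batch = resp.json().get("results", [])
    if not batch:
        break
    rows.extend(batch)
    page += 1

if rows:
    df = spark.createDataFrame(rows)      # schema inferred from the dicts
    df.write.format("delta").mode("append").saveAsTable("raw.api_events")
```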
Posted 2 days ago
6.0 - 8.0 years
9 - 13 Lacs
Kolkata
Remote
Role Description : This is a remote contract role for a Data Engineer (Ontology, 5+ yrs) at Zorba AI. We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company. Responsibilities : Ontology Development : - Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards. - Collaborate with domain experts to capture and formalize domain knowledge into ontological structures. - Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes. Data Modeling : - Design and implement semantic and syntactic data models that adhere to ontological principles. - Create data models that are scalable, flexible, and adaptable to changing business needs. - Integrate data models with existing data infrastructure and applications. Knowledge Graph Implementation : - Design and build knowledge graphs based on ontologies and data models. - Develop algorithms and tools for knowledge graph population, enrichment, and maintenance. - Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems. Data Quality and Governance : - Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs. - Define and implement data governance processes and standards for ontology development and maintenance. Collaboration and Communication : - Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions. - Communicate complex technical concepts clearly and effectively to diverse audiences. Qualifications : Education : Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Experience : 5+ years of experience in data engineering or a related role. - Proven experience in ontology development using BFO and CCO or similar ontological frameworks. - Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL. - Proficiency in Python, SQL, and other programming languages used for data engineering. - Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus. Desired Skills : - Familiarity with machine learning and natural language processing techniques. - Experience with cloud-based data platforms (e.g., AWS, Azure, GCP). - Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon. - Strong problem-solving and analytical skills. - Excellent communication and interpersonal skills.
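Since the posting calls out SHACL, a brief pyshacl sketch validating toy instance data against a toy shape (both inline; nothing here is from an actual Zorba AI ontology):

```python
# Sketch: validate instance data against a SHACL shape with pyshacl.
# The shape and data are toy examples, not a real ontology.
from pyshacl import validate
from rdflib import Graph

shapes = Graph().parse(data="""
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.com/ontology/> .
ex:SupplierShape a sh:NodeShape ;
    sh:targetClass ex:Supplier ;
    sh:property [ sh:path ex:name ; sh:minCount 1 ] .
""", format="turtle")

data = Graph().parse(data="""
@prefix ex: <http://example.com/ontology/> .
ex:acme a ex:Supplier .   # missing ex:name, so validation should fail
""", format="turtle")

conforms, _, report_text = validate(data, shacl_graph=shapes)
print("conforms:", conforms)
print(report_text)
```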
Posted 2 days ago
6.0 - 8.0 years
9 - 13 Lacs
Chennai
Work from Office
This is a remote contract role for a Data Engineer (Ontology, 5+ yrs) at Zorba AI. We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company. Responsibilities : Ontology Development : - Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards. - Collaborate with domain experts to capture and formalize domain knowledge into ontological structures. - Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes. Data Modeling : - Design and implement semantic and syntactic data models that adhere to ontological principles. - Create data models that are scalable, flexible, and adaptable to changing business needs. - Integrate data models with existing data infrastructure and applications. Knowledge Graph Implementation : - Design and build knowledge graphs based on ontologies and data models. - Develop algorithms and tools for knowledge graph population, enrichment, and maintenance. - Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems. Data Quality and Governance : - Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs. - Define and implement data governance processes and standards for ontology development and maintenance. Collaboration and Communication : - Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions. - Communicate complex technical concepts clearly and effectively to diverse audiences. Qualifications : Education : Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Experience : 5+ years of experience in data engineering or a related role. - Proven experience in ontology development using BFO and CCO or similar ontological frameworks. - Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL. - Proficiency in Python, SQL, and other programming languages used for data engineering. - Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus. Desired Skills : - Familiarity with machine learning and natural language processing techniques. - Experience with cloud-based data platforms (e.g., AWS, Azure, GCP). - Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon. - Strong problem-solving and analytical skills. - Excellent communication and interpersonal skills.
Posted 3 days ago
7.0 - 12.0 years
12 - 16 Lacs
Pune
Hybrid
Job Summary : We are seeking an experienced and dynamic Analytical Team Lead with expertise in Qlik SaaS, Power BI, Data Lake solutions, and AI/ML technologies. The successful candidate will lead a team of data analysts and engineers, driving business intelligence (BI) initiatives, building scalable analytics platforms, and leveraging advanced AI/ML models to uncover actionable insights. This role requires a strong balance of technical expertise, strategic thinking, and leadership skills to align data strategies with organizational goals. Key Responsibilities : 1. Leadership and Team Management : - Lead, mentor, and manage a team of data analysts, BI developers, and data engineers. - Promote collaboration and knowledge sharing within the team to foster a high-performance culture. - Ensure alignment of analytics objectives with business priorities. 2. BI and Data Visualization : - Design, implement, and manage scalable dashboards using Qlik SaaS and Power BI. - Ensure data visualization best practices to deliver clear, actionable insights. - Collaborate with stakeholders to define KPIs and build intuitive reporting solutions. 3. Data Architecture and Integration : - Oversee the design and optimization of Data Lake solutions for efficient data storage and retrieval. - Ensure seamless integration of data pipelines across multiple systems and platforms. - Partner with data engineering teams to maintain data quality, governance, and security. 4. AI/ML Implementation : - Drive the adoption of AI/ML technologies to build predictive models, automate processes, and enhance decision-making. - Collaborate with data scientists to develop and deploy machine learning models within the analytics ecosystem. 5. Agile Project Management : - Lead analytics projects using agile methodologies, ensuring timely delivery and alignment with business goals. - Act as a bridge between technical teams and business stakeholders, facilitating clear communication and expectation management. 6. Stakeholder Engagement : - Work closely with business leaders to identify analytics needs and opportunities. - Provide thought leadership on data-driven strategies and innovation. Key Skills and Qualifications : Technical Expertise : 1. Proficiency in Qlik SaaS and Power BI for BI development and data visualization. 2. Strong understanding of Data Lake architectures and big data technologies (Azure Data Lake, Google BigQuery). 3. Hands-on experience with AI/ML frameworks and libraries. 4. Knowledge of programming languages such as Python, R, or SQL. Leadership and Communication : 1. Proven experience in leading and mentoring teams. 2. Strong stakeholder management and communication skills to translate complex data concepts into business value. Project Management : 1. Experience with agile methodologies in analytics and software development projects. Education : - Bachelor's degree (12 + 4). Preferred Qualifications : 1. Certification in Qlik, Power BI, or cloud platforms. 2. Experience in deploying analytics solutions in hybrid or cloud environments. 3. Familiarity with DevOps, CI/CD pipelines, and MLOps processes.
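As a rough illustration of the AI/ML side of this role, a compact scikit-learn pipeline for a churn-style prediction behind a dashboard. The parquet extract and column names are invented for the example:

```python
# Sketch: a small predictive-model pipeline of the kind an analytics team
# might deploy behind a BI dashboard. Data file and columns are hypothetical.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.read_parquet("customers.parquet")          # hypothetical extract
X = df[["tenure_months", "monthly_spend", "region"]]
y = df["churned"]

model = Pipeline([
    ("prep", ColumnTransformer(
        [("region", OneHotEncoder(handle_unknown="ignore"), ["region"])],
        remainder="passthrough")),
    ("clf", GradientBoostingClassifier()),
])

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y)
model.fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))
```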
Posted 3 days ago
8.0 - 13.0 years
22 - 37 Lacs
Noida, Pune, Bengaluru
Work from Office
Desired Profile - Collect, analyse, and document all business and functional requirements for the Data Lake infrastructure. Support advancements in Business Analytics to ensure the system meets evolving business needs. Profile new and existing data sources to define and refine the data warehouse model. Collaborate with architects and stakeholders to define data workflows and strategies. Drive process improvements to optimize data handling and performance. Perform deep data analysis to ensure accuracy, consistency, and quality of data. Work with QA resources on test planning to ensure quality and consistency within the data lake and Data Warehouse. Gather data governance requirements and ensure implementation of data governance practices within the Data Lake infrastructure. Collaborate with functional users to gather and define metadata for the Data Lake. Key Skills: Azure Data Factory, Synapse, Power BI, Data Lake, SQL, KQL, Azure Security, data integration, Oracle EBS, cloud computing, data visualization, CI/CD pipelines, communication skills. Please share your CV at parul@mounttalent.com
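A small pandas sketch of the "profile new and existing data sources" duty: null rates, dtypes, and cardinality per column. The file path is a placeholder:

```python
# Sketch: quick profiling of a candidate source before modelling it into
# the warehouse. The file path is a placeholder.
import pandas as pd

df = pd.read_csv("new_source_extract.csv")

profile = pd.DataFrame({
    "dtype": df.dtypes.astype(str),
    "non_null": df.notna().sum(),
    "null_pct": (df.isna().mean() * 100).round(2),
    "distinct": df.nunique(),
})
# Columns with the most missing data first - usual starting point for cleanup.
print(profile.sort_values("null_pct", ascending=False))
```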
Posted 6 days ago
10.0 - 14.0 years
10 - 12 Lacs
Mumbai, Pune, Bengaluru
Work from Office
We are looking for an experienced SAP Functional Consultant with expertise in the agro-chemical industry to support the development and maintenance of SAP FI, CO, SD, and MM based reporting in SAP BW, data lake, and visualisation tools. Required Candidate profile: Minimum 10+ years of experience in SAP. Any graduate or postgraduate degree. Note : Immediate joiner / 6 months contract
Posted 1 week ago
6.0 - 8.0 years
6 - 9 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
How You Will Fulfill Your Potential Translate business and technical requirements into well-engineered business applications based on object-oriented design and an event-driven model. Interface with internal clients/end users and other development teams. Design, build and maintain high-performance, high-availability, high-capacity platforms for clearing house data. Understand market and client behavior, regulations, and front-to-back business and Operations functions. Work closely with Operations and Engineering teams to define behavior, functionality and expected outcomes for the products under development. Participate in the full SDLC for software development to be written in Java, React, DB2, Kafka, microservices, MongoDB, and a data lake with an event-driven/batch architecture. Responsible for production support of global users. Work closely and collaboratively with colleagues across regions. Skills & Experience We Are Looking For Bachelor's/Master's degree in computer science or a related technical field involving programming or systems engineering. 6-8 years of relevant software experience. Proficiency in building software in one or more of the following: Java, React, DB2, Kafka, microservices, MongoDB, data lake, Kubernetes, microservice architecture. Good at algorithms, data structures and software design. Systematic problem-solving approach, coupled with hands-on experience of debugging and optimizing code, as well as automation. Experience with UNIX operating system internals. Experience with object-oriented programming and Java design patterns. Should be able to work independently and guide/coach junior team members. Strong interpersonal skills and able to contribute to discussions on design and strategy. Experience with distributed systems design, maintenance, and troubleshooting. Hands-on experience with debugging and optimizing code. Strong communication skills, drive, and ownership. Experience applying DevOps principles to novel problems and systems. Experience with any scripting language (e.g., Python), Test Driven Development and Agile methodologies. Derivative products clearing functional knowledge (not mandatory).
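The production stack here is Java-centric, but for illustration a Python consumer shows the event-driven Kafka flow the posting describes; broker, topic, and message fields are hypothetical:

```python
# Sketch: a Python consumer illustrating an event-driven clearing flow.
# Broker addresses, topic name, and message fields are hypothetical;
# requires `pip install kafka-python`.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "clearing.trades",
    bootstrap_servers=["broker1:9092"],
    group_id="trade-audit",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for msg in consumer:
    trade = msg.value
    # Downstream handling would enrich, validate, and persist the trade.
    print(msg.topic, msg.partition, msg.offset, trade.get("trade_id"))
```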
Posted 1 week ago
7.0 - 12.0 years
12 - 16 Lacs
Pune
Work from Office
Job Summary : We are seeking an experienced and dynamic Analytical Team Lead with expertise in Qlik SaaS, Power BI, Data Lake solutions, and AI/ML technologies. The successful candidate will lead a team of data analysts and engineers, driving business intelligence (BI) initiatives, building scalable analytics platforms, and leveraging advanced AI/ML models to uncover actionable insights. This role requires a strong balance of technical expertise, strategic thinking, and leadership skills to align data strategies with organizational goals. Key Responsibilities : 1. Leadership and Team Management : - Lead, mentor, and manage a team of data analysts, BI developers, and data engineers. - Promote collaboration and knowledge sharing within the team to foster a high-performance culture. - Ensure alignment of analytics objectives with business priorities. 2. BI and Data Visualization : - Design, implement, and manage scalable dashboards using Qlik SaaS and Power BI. - Ensure data visualization best practices to deliver clear, actionable insights. - Collaborate with stakeholders to define KPIs and build intuitive reporting solutions. 3. Data Architecture and Integration : - Oversee the design and optimization of Data Lake solutions for efficient data storage and retrieval. - Ensure seamless integration of data pipelines across multiple systems and platforms. - Partner with data engineering teams to maintain data quality, governance, and security. 4. AI/ML Implementation : - Drive the adoption of AI/ML technologies to build predictive models, automate processes, and enhance decision-making. - Collaborate with data scientists to develop and deploy machine learning models within the analytics ecosystem. 5. Agile Project Management : - Lead analytics projects using agile methodologies, ensuring timely delivery and alignment with business goals. - Act as a bridge between technical teams and business stakeholders, facilitating clear communication and expectation management. 6. Stakeholder Engagement : - Work closely with business leaders to identify analytics needs and opportunities. - Provide thought leadership on data-driven strategies and innovation. Key Skills and Qualifications : Technical Expertise : 1. Proficiency in Qlik SaaS and Power BI for BI development and data visualization. 2. Strong understanding of Data Lake architectures and big data technologies (Azure Data Lake, Google BigQuery). 3. Hands-on experience with AI/ML frameworks and libraries. 4. Knowledge of programming languages such as Python, R, or SQL. Leadership and Communication : 1. Proven experience in leading and mentoring teams. 2. Strong stakeholder management and communication skills to translate complex data concepts into business value. Project Management : 1. Experience with agile methodologies in analytics and software development projects. Education : - Bachelor's degree (12 + 4). Preferred Qualifications : 1. Certification in Qlik, Power BI, or cloud platforms. 2. Experience in deploying analytics solutions in hybrid or cloud environments. 3. Familiarity with DevOps, CI/CD pipelines, and MLOps processes.
Posted 2 weeks ago
10.0 - 15.0 years
30 - 40 Lacs
Noida, Gurugram
Work from Office
We're hiring for a Snowflake Data Architect with a leading IT services firm for Noida & Gurgaon. Job Summary: We are seeking a Snowflake Data Architect to design, implement, and optimize scalable data solutions using Databricks and the Azure ecosystem. The ideal candidate will have deep expertise in big data architecture, data engineering, and cloud technologies, enabling them to create robust, high-performance data pipelines and analytics solutions. Key Responsibilities: Design and develop scalable, secure, and high-performance data architectures using Snowflake, Databricks, Delta Lake, and Apache Spark. Architect ETL/ELT data pipelines to process structured and unstructured data efficiently. Implement data governance, security, and compliance frameworks across cloud-based data platforms. Optimize Spark jobs for performance, cost, and reliability. Collaborate with data engineers, analysts, and business teams to understand requirements and design appropriate solutions. Develop data lakehouse architectures leveraging Databricks and ADLS. Implement machine learning and AI workflows using Databricks ML and integration with ML frameworks. Define and enforce best practices for data modeling, metadata management, and data quality. Monitor and troubleshoot Databricks clusters, job failures, and performance bottlenecks. Stay updated with the latest Databricks features, Apache Spark advancements, and cloud innovations. Required Qualifications: 10+ years of experience in data architecture, data engineering, or big data platforms. Hands-on experience with Snowflake is mandatory; experience with Databricks (including Delta Lake, Unity Catalog, DBSQL) is a strong plus. This is an individual-contributor role requiring expertise in Apache Spark for large-scale data processing. Proficiency in Python, Scala, or SQL for data transformations. Experience with Azure and its data services (e.g., Azure Data Factory, Azure Synapse, Azure SQL Server). Knowledge of data lakehouse architectures, data warehousing, and ETL processes. Strong understanding of data security, IAM, and compliance best practices. Experience with CI/CD pipelines and Infrastructure as Code (Terraform, ARM templates, CloudFormation). Familiarity with MLflow, Feature Store, and MLOps concepts is a plus. Strong interpersonal and communication skills. If interested, please share your profile at harjeet@beanhr.com
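A hedged sketch of routine Snowflake loading with the official Python connector: COPY INTO from a named stage, then a row count. Account, credentials, stage, and table names are placeholders:

```python
# Sketch: load staged Parquet files into Snowflake and verify the row count.
# Account, credentials, stage, and table names are placeholders.
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="LOAD_WH", database="ANALYTICS", schema="RAW",
)
cur = conn.cursor()
try:
    cur.execute("""
        COPY INTO raw.orders
        FROM @raw_stage/orders/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
    cur.execute("SELECT COUNT(*) FROM raw.orders")
    print("rows loaded:", cur.fetchone()[0])
finally:
    cur.close()
    conn.close()
```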
Posted 3 weeks ago
8.0 - 13.0 years
5 - 8 Lacs
Hyderabad
Hybrid
Immediate Openings: Snowflake (Pan India, Contract) Experience: 8+ Years Skill: Snowflake Notice Period: Immediate Employment Type: Contract Work Mode: WFO/Hybrid Job Description: Snowflake Data Warehouse Lead (India - Lead, 8 to 10 yrs exp): Lead the technical design and architecture of Snowflake platforms, ensuring alignment with customer requirements, industry best practices, and project objectives. Conduct code reviews, ensure adherence to coding standards and best practices, and drive continuous improvement in code quality and performance. Provide technical support, troubleshoot problems, and provide timely resolution of Incidents, Service Requests, and Minor Enhancements as required for Snowflake platforms and related services. Data lake and storage management: adding, updating, or deleting datasets in Snowflake; monitoring storage usage and handling capacity planning. Strong communication and presentation skills.
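For the storage-monitoring and capacity-planning duty, a sketch querying the standard SNOWFLAKE.ACCOUNT_USAGE.STORAGE_USAGE view (requires the usual grants; connection details are placeholders):

```python
# Sketch: daily account-level storage trend for capacity planning.
# Connection details are placeholders; ACCOUNT_USAGE access requires grants.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="ops_user", password="***",
    warehouse="OPS_WH",
)
cur = conn.cursor()
cur.execute("""
    SELECT usage_date,
           ROUND(storage_bytes / POWER(1024, 4), 2) AS storage_tb,
           ROUND(failsafe_bytes / POWER(1024, 4), 2) AS failsafe_tb
    FROM snowflake.account_usage.storage_usage
    ORDER BY usage_date DESC
    LIMIT 30
""")
for usage_date, storage_tb, failsafe_tb in cur.fetchall():
    print(usage_date, storage_tb, failsafe_tb)
conn.close()
```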
Posted 3 weeks ago
5.0 - 8.0 years
5 - 8 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Senior data management/integration engineer. End-to-end data management and engineering experience, 10+ years. Very strong Python programming and PySpark development experience in a large-scale data ecosystem. Solid foundation in SQL data management. Familiarity with data lake/delta lake architectures. Familiarity with requirements analysis and experience interacting and engaging with business teams. Retail industry experience (Fortune 1000) around supply chain and operating with large data sets is a must-have.
Posted 3 weeks ago
8.0 - 14.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Introduction A Data and AI Technology Sales Engineer role (what we internally call a Brand Technical Specialist) within IBM's zStack brand means accelerating enterprises' success by improving their ability to understand their data. It means providing solutions that enable people across organizations, in multiple roles, the ability to turn data into actionable insights without having to wait for IT. And it means solutioning and selling multi-award-winning software deployed on the IBM z/LinuxONE platform, and world-class design practices that enable business analysts to ask new questions, the answers to which are literally shaping the future and changing the world. Excellent onboarding and an industry-leading learning culture will set you up for positive impact and success, whilst ongoing development will advance your career through an upward trajectory. Our sales environment is collaborative and experiential. Part of a team, you'll be surrounded by bright minds and keen co-creators - always willing to help and be helped - as you apply passion to work that will compel our clients to invest in IBM's products and services. Your role and responsibilities Applying excellent communication and empathy, you'll act as a trusted strategic advisor to some of the world's most transformational enterprises and culturally influential brands, as they rely on your expertise and our technology to solve some of their hardest problems. With your focus on the front end of the solution lifecycle, you'll be a master at listening to stakeholders, grasping their business challenges and requirements, and forming more detailed definitions of new architectural structures that will make up their best-fit, value-adding solutions. We're committed to success. In this role, your achievements will drive your career, team, and clients to thrive. A typical day may involve: Understanding client needs and aligning them with IBM Z solutions. Creating effective end-to-end architecture using IBM Z. Ensuring architectural viability and conducting assessments. Identifying and resolving project risks and conflicts. Your primary responsibilities will include: - Client Strategy Design: Creating client strategies for Data & AI infrastructure around the IBM z and LinuxONE platform. - Solution Definition: Defining IBM Data & AI solutions that constitute functionalities such as Data Integration (ETL), Data Store (DB2, Oracle, MySQL) and Data Science (Watson Studio, Watson ML), leveraging the strengths of the IBM z and LinuxONE platform. - Providing proofs of concept and simplifying complex topics to meet clients' business requirements in the area of data platform modernization and analytics. - Credibility Building: Establishing credibility and trust to facilitate architecture and solution benefits to drive revenue and technical business objectives. Required education Bachelor's Degree Required technical and professional expertise - Minimum 8-14 years of experience in Data and AI technologies, which should include infrastructure for analytics and advanced analytics solutions such as data lake, data warehouse, business analytics, AI, GenAI, and data fabric. - Experiential selling including co-creation and hands-on technical sales methods such as demos, custom demos, proofs of concept, Minimum Viable Products (MVPs), or other technical proofs. - Build deep brand (Data & AI) expertise to assist partners to deliver PoX (custom demo, PoC, MVP, etc.) to progress opportunities.
- Identifying partners with skills, expertise, and experience. - Exceptional interpersonal and communication skills, and an ability to collaborate effectively with ecosystem partners, clients, and sales professionals. - An understanding of governance, risk, and controls is a bonus. - Experience in the AI landscape and technologies at work across Banking/Finance. - Know-how, technical capability, and working experience with similar Data & AI products such as Cloudera, Teradata, Oracle, Informatica, SAS, etc. Preferred technical and professional experience - Knowledge of IBM Z and how it fits in digital transformation (training on IBM's Z product will be provided)
Posted 3 weeks ago
3.0 - 8.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Employment Type : Full Time, Permanent Working mode : Regular Job Description : Utilizes software engineering principles to deploy and maintain fully automated data transformation pipelines that combine a large variety of storage and computation technologies to handle a distribution of data types and volumes in support of data architecture design. Key Responsibilities : A Data Engineer designs data products and data pipelines that are resilient to change, modular, flexible, scalable, reusable, and cost-effective. - Design, develop, and maintain data pipelines and ETL processes using Microsoft Azure services (e.g., Azure Data Factory, Azure Synapse, Azure Databricks, Microsoft Fabric). - Utilize Azure data storage accounts for organizing and maintaining data pipeline outputs (e.g., Azure Data Lake Storage Gen2 and Azure Blob Storage). - Collaborate with data scientists, data analysts, data architects, and other stakeholders to understand data requirements and deliver high-quality data solutions. - Optimize data pipelines in the Azure environment for performance, scalability, and reliability. - Ensure data quality and integrity through data validation techniques and frameworks. - Develop and maintain documentation for data processes, configurations, and best practices. - Monitor and troubleshoot data pipeline issues to ensure timely resolution. - Stay current with industry trends and emerging technologies to ensure our data solutions remain cutting-edge. - Manage the CI/CD process for deploying and maintaining data solutions.
Posted 3 weeks ago
5.0 - 8.0 years
5 - 7 Lacs
Bengaluru
Work from Office
Role & responsibilities Should coordinate with team members and Paris counterparts, and work independently. Responsible and accountable to deliver functional specifications, wireframe docs, RCA, source-to-target mapping, test strategy docs, and any other BA artifacts as demanded by the project delivery. Understand the business requirements and discuss them with business users. Should be able to write mapping documents from user stories. Follow project documentation standards. Should have very good hands-on SQL knowledge. Analyze production data and derive KPIs for business users. Well versed with Jira for project work. Preferred candidate profile 5+ years of experience in Java / data-based projects (data warehouse or data lake), preferably in the banking domain. Hands-on business analysis skills with experience writing functional specs. Able to perform gap / root-cause analysis. Should be able to convert business use cases to source-to-target mapping sheets and perform functional validation. Should be able to work independently. Should be able to debug production failures and provide root-cause solutions. Having knowledge of SQL / RDBMS concepts. Good analytical/troubleshooting skills to cater to the business requirements. Understanding of the Agile process would be an added advantage. Effective team player with the ability to work autonomously and in a team in a cross-cultural environment. Effective verbal and written communication to work closely with all the stakeholders.
Posted 4 weeks ago
4.0 - 9.0 years
6 - 10 Lacs
Hyderabad
Work from Office
- Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions - Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, CosmosDB, EventHub/IoTHub. - Experience in migrating on-premises data warehouses to data platforms on the Azure cloud. - Designing and implementing data engineering, ingestion, and transformation functions using Azure Synapse or Azure SQL Data Warehouse; Spark on Azure is available in HDInsight and Databricks. - Good customer communication skills. - Good analytical skills.
Posted 1 month ago
7.0 - 10.0 years
2 - 6 Lacs
Pune
Work from Office
Responsibilities : - Design, develop, and deploy data pipelines using Databricks, including data ingestion, transformation, and loading (ETL) processes. - Develop and maintain high-quality, scalable, and maintainable Databricks notebooks using Python. - Work with Delta Lake and other advanced features. - Leverage Unity Catalog for data governance, access control, and data discovery. - Develop and optimize data pipelines for performance and cost-effectiveness. - Integrate with various data sources, including but not limited to databases and cloud storage (Azure Blob Storage, ADLS, Synapse), and APIs. - Experience working with Parquet files for data storage and processing. - Experience with data integration from Azure Data Factory, Azure Data Lake, and other relevant Azure services. - Perform data quality checks and validation to ensure data accuracy and integrity. - Troubleshoot and resolve data pipeline issues effectively. - Collaborate with data analysts, business analysts, and business stakeholders to understand their data needs and translate them into technical solutions. - Participate in code reviews and contribute to best practices within the team.
Posted 1 month ago
6.0 - 11.0 years
15 - 30 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Warm Greetings from SP Staffing Services Private Limited!! We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested. Relevant Experience: 6 - 15 Yrs Location: Pan India Job Description: Primarily looking for a data engineer with expertise in processing data pipelines using Databricks Spark SQL on Hadoop distributions like AWS EMR, Databricks, Cloudera, etc. Should be very proficient in doing large-scale data operations using Databricks and overall very comfortable using Python. Familiarity with AWS compute, storage, and IAM concepts. Experience in working with S3 Data Lake as the storage tier. Any ETL background (Talend, AWS Glue, etc.) is a plus but not required. Cloud warehouse experience (Snowflake, etc.) is a huge plus. Carefully evaluates alternative risks and solutions before taking action. Optimizes the use of all available resources. Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit. Skills: Hands-on experience on Databricks Spark SQL and the AWS Cloud platform, especially S3, EMR, Databricks, Cloudera, etc. Experience in shell scripting. Exceptionally strong analytical and problem-solving skills. Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses. Strong experience with relational databases and data access methods, especially SQL. Excellent collaboration and cross-functional leadership skills. Excellent communication skills, both written and verbal. Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment. Ability to leverage data assets to respond to complex questions that require timely answers. Working knowledge of migrating relational and dimensional databases on the AWS Cloud platform. Interested candidates can share a resume to sankarspstaffings@gmail.com with the below inline details. Over All Exp : Relevant Exp : Current CTC : Expected CTC : Notice Period :
Posted 1 month ago
6.0 - 11.0 years
15 - 30 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Warm Greetings from SP Staffing Services Private Limited!! We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested. Relevant Experience: 6 - 15 Yrs Location: Pan India Job Description: Candidate must be proficient in Databricks. Understands where to obtain information needed to make appropriate decisions. Demonstrates the ability to break down a problem into manageable pieces and implement effective, timely solutions. Identifies the problem versus the symptoms. Manages problems that require the involvement of others to solve. Reaches sound decisions quickly. Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit. Roles & Responsibilities: Provides innovative and cost-effective solutions using Databricks. Optimizes the use of all available resources. Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit. Learns and adapts quickly to new technologies as per the business need. Develops a team of Operations Excellence, building tools and capabilities that the development teams leverage to maintain high levels of performance, scalability, security, and availability. Skills: The candidate must have 7-10 yrs of experience in Databricks Delta Lake. Hands-on experience on Azure. Experience in Python scripting. Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses. Strong experience with relational databases and data access methods, especially SQL. Knowledge of Azure architecture and design. Interested candidates can share a resume to sankarspstaffings@gmail.com with the below inline details. Over All Exp : Relevant Exp : Current CTC : Expected CTC : Notice Period :
Posted 1 month ago
5.0 - 10.0 years
8 - 14 Lacs
Hyderabad
Work from Office
Job Title : Azure Synapse Developer Position Type : Permanent Experience : 5+ Years Location : Hyderabad (Work From Office / Hybrid) Shift Timings : 2 PM to 11 PM Mode of Interview : 3 rounds (Virtual/In-person) Notice Period : Immediate to 15 days Job Description : We are looking for an experienced Azure Synapse Developer to join our growing team. The ideal candidate should have a strong background in Azure Synapse Analytics, SSRS, and Azure Data Factory (ADF), with a solid understanding of data modeling, data movement, and integration. As an Azure Synapse Developer, you will work closely with cross-functional teams to design, implement, and manage data pipelines, ensuring the smooth flow of data across platforms. The candidate must have a deep understanding of SQL and ETL processes, and ideally, some exposure to Power BI for reporting and dashboard creation. Key Responsibilities : - Develop and maintain Azure Synapse Analytics solutions, ensuring scalability, security, and performance. - Design and implement data models for efficient storage and retrieval of data in Azure Synapse. - Utilize Azure Data Factory (ADF) for ETL processes, orchestrating data movement, and integrating data from various sources. - Leverage SSIS/SSRS/SSAS to build, deploy, and maintain data integration and reporting solutions. - Write and optimize SQL queries for data manipulation, extraction, and reporting. - Collaborate with business analysts and other stakeholders to understand reporting needs and create actionable insights. - Perform performance tuning on SQL queries, pipelines, and Synapse workloads to ensure high performance. - Provide support for troubleshooting and resolving data integration and performance issues. - Assist in setting up automated data processes and create reusable templates for data integration. - Stay updated on Azure Synapse features and tools, recommending improvements to the data platform as appropriate. Required Skills & Qualifications : - 5+ years of experience as a Data Engineer or Azure Synapse Developer. - Strong proficiency in Azure Synapse Analytics (Data Warehouse, Data Lake, and Analytics). - Solid understanding and experience in data modeling for large-scale data architectures. - Expertise in SQL for writing complex queries, optimizing performance, and managing large datasets. - Hands-on experience with Azure Data Factory (ADF) for data integration, ETL processes, and pipeline creation. - SSRS (SQL Server Reporting Services) and SSIS (SQL Server Integration Services) expertise. - Power BI knowledge (basic to intermediate) for reporting and data visualization. - Familiarity with SSAS (SQL Server Analysis Services) and OLAP concepts is a plus. - Experience in troubleshooting and optimizing complex data processing tasks. - Strong communication and collaboration skills to work effectively in a team-oriented environment. - Ability to quickly adapt to new tools and technologies in the Azure ecosystem.
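As an illustration of day-to-day Synapse SQL work, a sketch querying a dedicated SQL pool from Python via pyodbc. Server, database, credentials, and the fact table are placeholders, and the ODBC driver name may differ by environment:

```python
# Sketch: query a Synapse dedicated SQL pool via pyodbc.
# Server, database, credentials, and table are placeholders; the installed
# ODBC driver name may differ by environment.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myworkspace.sql.azuresynapse.net;"
    "DATABASE=sales_dw;UID=loader;PWD=***"
)
cur = conn.cursor()
cur.execute("""
    SELECT TOP 10 order_date, SUM(amount) AS revenue
    FROM fact_orders
    GROUP BY order_date
    ORDER BY revenue DESC
""")
for row in cur.fetchall():
    print(row.order_date, row.revenue)
conn.close()
```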
Posted 1 month ago
10 - 12 years
13 - 20 Lacs
Kolkata
Work from Office
Key Responsibilities : - Understand the factories, manufacturing process, data availability, and avenues for improvement - Brainstorm, together with engineering, manufacturing, and quality, problems that can be solved using the acquired data in the data lake platform. - Define what data is required to create a solution and work with connectivity engineers and users to collect the data - Create and maintain optimal data pipeline architecture. - Assemble large, complex data sets that meet functional / non-functional business requirements. - Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery for greater scalability - Work on data preparation and data deep dives; help engineering, process, and quality teams understand the process/machine behavior more closely using available data - Deploy and monitor the solution - Work with data and analytics experts to strive for greater functionality in our data systems. - Work together with data architects and data modeling teams. Skills / Competencies : - Good knowledge of the business vertical with prior experience in solving different use cases in the manufacturing or similar industry - Ability to bring cross-industry learning to benefit the use cases aimed at improving manufacturing processes Problem Scoping/Definition Skills : - Experience in problem scoping, solving, and quantification - Strong analytic skills related to working with unstructured datasets. - Build processes supporting data transformation, data structures, metadata, dependency, and workload management. - Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores - Ability to foresee and identify all the right data required to solve the problem Data Wrangling Skills : - Strong skill in data mining and data wrangling techniques for creating the required analytical datasets - Experience building and optimizing 'big data' data pipelines, architectures, and data sets - Adaptive mindset to improvise on the data challenges and employ techniques to drive desired outcomes Programming Skills : - Experience with big data tools: Spark, Delta, CDC, NiFi, Kafka, etc. - Experience with relational SQL and NoSQL databases and query languages, including Oracle, Hive, and Spark SQL. - Experience with object-oriented languages: Scala, Java, C++, etc. Visualization Skills : - Know-how of visualization tools such as Power BI or Tableau - Good storytelling skills to present the data in a simple and meaningful manner Data Engineering Skills : - Strong skill in data analysis techniques to generate findings and insights by means of exploratory data analysis - Good understanding of how to transform and connect data of various types and forms - Great numerical and analytical skills - Identify opportunities for data acquisition - Explore ways to enhance data quality and reliability - Build algorithms and prototypes - Reformulate existing frameworks to optimize their functioning. - Good understanding of optimization techniques to make the system performant for requirements.
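Reflecting the Spark/Kafka skills listed above, a sketch of a structured-streaming pipeline landing machine events from Kafka into Delta. Broker, topic, schema, and paths are hypothetical, and the cluster needs the spark-sql-kafka package:

```python
# Sketch: structured streaming from Kafka into Delta for machine events.
# Broker, topic, schema, and paths are hypothetical; the cluster must have
# the spark-sql-kafka connector package available.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("machine-events-stream").getOrCreate()

events = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "factory.machine_events")
    .load()
    # Kafka values arrive as bytes; decode and parse the JSON payload.
    .select(F.col("value").cast("string").alias("json"))
    .select(F.from_json("json", "machine_id STRING, temp DOUBLE, ts TIMESTAMP").alias("e"))
    .select("e.*"))

(events.writeStream
    .format("delta")
    .option("checkpointLocation", "/chk/machine_events")  # exactly-once bookkeeping
    .outputMode("append")
    .start("/delta/machine_events"))
```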
Posted 1 month ago