5.0 - 10.0 years
4 - 6 Lacs
bengaluru
Work from Office
The Solution Architect - Data Engineer will design, implement, and manage data solutions for the insurance business, leveraging expertise in Cognos, DB2, Azure Databricks, ETL processes, and SQL. The role involves working with cross-functional teams to design scalable data architectures and enable advanced analytics and reporting, supporting the company's finance, underwriting, claims, and customer service operations.
Key Responsibilities:
- Data Architecture & Design: Design and implement robust, scalable data architectures and solutions in the insurance domain using Azure Databricks, DB2, and other data platforms.
- Data Integration & ETL Processes: Lead the development and optimization of ETL pipelines to extract, transform, and load data from multiple sources, ensuring data integrity and performance.
- Cognos Reporting: Oversee the design and maintenance of Cognos reporting systems, developing custom reports and dashboards to support business users in finance, claims, underwriting, and operations.
- Data Engineering: Design, build, and maintain data models, data pipelines, and databases to enable business intelligence and advanced analytics across the organization.
- Cloud Infrastructure: Develop and manage data solutions on Azure, including Databricks for data processing, ensuring seamless integration with existing systems (e.g., DB2, legacy platforms).
- SQL Development: Write and optimize complex SQL queries for data extraction, manipulation, and reporting, with a focus on performance and scalability.
- Data Governance & Quality: Ensure data quality, consistency, and governance across all data solutions, implementing best practices and adhering to industry standards (e.g., GDPR, insurance regulations).
- Collaboration: Work closely with business stakeholders, data scientists, and analysts to understand business needs and translate them into technical solutions that drive actionable insights.
- Solution Architecture: Provide architectural leadership in designing data platforms, ensuring that solutions meet business requirements, are cost-effective, and can scale for future growth.
- Performance Optimization: Continuously monitor and tune the performance of databases, ETL processes, and reporting tools to meet service level agreements (SLAs).
- Documentation: Create and maintain comprehensive technical documentation, including architecture diagrams, ETL process flows, and data dictionaries.
Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- Proven experience as a Solution Architect or Data Engineer in the insurance industry, with a strong focus on data solutions.
- Hands-on experience with Cognos (for reporting and dashboarding) and DB2 (for database management).
- Proficiency in Azure Databricks for data processing, machine learning, and real-time analytics.
- Extensive experience in ETL development, data integration, and data transformation processes.
- Strong knowledge of Python and SQL (advanced query writing, optimization, and troubleshooting).
- Experience with cloud platforms (Azure preferred) and hybrid data environments (on-premises and cloud).
- Familiarity with data governance and regulatory requirements in the insurance industry (e.g., Solvency II, IFRS 17).
- Strong problem-solving skills, with the ability to troubleshoot and resolve complex technical issues related to data architecture and performance.
- Excellent verbal and written communication skills, with the ability to work effectively with both technical and non-technical stakeholders.
Preferred Qualifications:
- Experience with other cloud-based data platforms (e.g., Azure Data Lake, Azure Synapse, AWS Redshift).
- Knowledge of machine learning workflows, leveraging Databricks for model training and deployment.
- Familiarity with insurance-specific data models and their use in finance, claims, and underwriting operations.
- Certifications in Azure Databricks, Microsoft Azure, DB2, or related technologies.
- Knowledge of additional reporting tools (e.g., Power BI, Tableau) is a plus.
Key Competencies:
- Technical Leadership: Ability to guide and mentor development teams in implementing best practices for data architecture and engineering.
- Analytical Skills: Strong analytical and problem-solving skills, with a focus on optimizing data systems for performance and scalability.
- Collaborative Mindset: Ability to work effectively in a cross-functional team, communicating complex technical solutions in simple terms to business stakeholders.
- Attention to Detail: Meticulous attention to detail, ensuring high-quality data output and system performance.
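The SQL development work described above, writing aggregate reporting queries and indexing for performance, can be sketched in miniature with SQLite (table and column names here are hypothetical, purely for illustration; production work would target DB2 or Databricks SQL):

```python
import sqlite3

# Toy claims table; schema and sample data are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE claims (id INTEGER PRIMARY KEY, region TEXT, status TEXT, paid_amount REAL)"
)
conn.executemany(
    "INSERT INTO claims (region, status, paid_amount) VALUES (?, ?, ?)",
    [("south", "settled", 1200.0), ("south", "open", 0.0), ("north", "settled", 800.0)],
)
# An index on the filter column keeps this report fast as claim volume grows.
conn.execute("CREATE INDEX idx_claims_status ON claims (status)")

rows = conn.execute(
    """
    SELECT region, COUNT(*) AS n_claims, SUM(paid_amount) AS total_paid
    FROM claims
    WHERE status = 'settled'
    GROUP BY region
    ORDER BY total_paid DESC
    """
).fetchall()
print(rows)  # [('south', 1, 1200.0), ('north', 1, 800.0)]
```

The same filter-aggregate-index pattern carries over to warehouse-scale SQL, where the engine and indexing mechanism differ but the query shape does not.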
Posted 23 hours ago
5.0 - 10.0 years
15 - 19 Lacs
bengaluru
Work from Office
Overview
The Gen AI Engineer will be part of a team that designs, builds, and operates the AI services of Siemens Healthineers (SHS). The ideal candidate will have experience with AI services and will develop and maintain artificial intelligence systems and applications that help businesses and organizations solve complex problems. The role also requires expertise in machine learning, deep learning, natural language processing, computer vision, and other AI technologies.
Tasks and Responsibilities:
The Generative AI Engineer is responsible for designing, architecting, and developing the AI product/service. The main responsibilities include:
- Designing, developing, and deploying Azure-based AI solutions, including machine learning models, cognitive services, and data analytics solutions.
- Collaborating with cross-functional teams, such as data scientists, business analysts, and developers, to design and implement AI solutions that meet business requirements.
- Building and training machine learning models using Azure Machine Learning, and tuning models for optimal performance.
- Developing and deploying custom AI models using Azure Cognitive Services, such as speech recognition, language understanding, and computer vision.
- Creating data pipelines to collect, process, and prepare data for analysis and modeling using Azure data services, such as Azure Data Factory and Azure Databricks.
- Implementing data analytics solutions using Azure Synapse Analytics or other Azure data services.
- Deploying and managing Azure services and resources using Azure DevOps or other deployment tools.
- Monitoring and troubleshooting deployed solutions to ensure optimal performance and reliability.
- Ensuring compliance with security and regulatory requirements related to AI solutions.
- Staying up to date with the latest Azure AI technologies and industry developments, and sharing knowledge and best practices with the team.
Qualifications:
- Overall 5+ years of combined experience in IT, including 3 recent years as an AI engineer.
- Bachelor's or master's degree in computer science, information technology, or a related field.
- Experience in designing, developing, and delivering successful AI services.
- Experience with cloud computing technologies, such as Azure.
- Relevant industry certifications, such as Microsoft Certified: Azure AI Engineer or Azure Solutions Architect, are a plus.
- Excellent written and verbal communication skills to collaborate with cross-functional teams and communicate technical information to non-technical stakeholders.
Technical skills:
- Programming languages: Proficiency in languages such as Python and R.
- Azure AI services: Experience with Azure Machine Learning, Azure Cognitive Services, and Azure Databricks.
- Data handling and processing: Proficiency in techniques such as data cleaning, data normalization, and feature extraction. Knowledge of SQL, NoSQL, and big data technologies such as Hadoop and Spark is also beneficial.
- Cloud platform: A good understanding of cloud computing concepts and experience working with Azure services such as containers, Kubernetes, Web Apps, Azure Front Door, CDN, and Web Application Firewalls.
- DevOps and CI/CD: Familiarity with DevOps practices and CI/CD pipelines, including tools such as Azure DevOps and Git.
- Security and compliance: Awareness of security and compliance considerations when building and deploying AI models in the cloud, including Azure security services, compliance frameworks such as HIPAA and GDPR, and best practices for securing data and applications in the cloud.
- Machine learning algorithms and frameworks: Knowledge of frameworks such as TensorFlow, Keras, PyTorch, and scikit-learn is a plus.
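The data normalization skill listed above can be illustrated with a few lines of plain Python (a simplified sketch, independent of any Azure service; in practice this would run inside a Databricks or Azure ML pipeline):

```python
def min_max_normalize(values):
    """Scale a list of numbers into [0, 1]; a constant column maps to all zeros."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # Avoid division by zero for constant features.
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

ages = [20, 35, 50]  # hypothetical feature column
print(min_max_normalize(ages))  # [0.0, 0.5, 1.0]
```

Scaling features to a common range like this is a routine preparation step before model training, since many algorithms are sensitive to feature magnitude.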
Posted 23 hours ago
3.0 - 5.0 years
13 - 17 Lacs
hyderabad
Work from Office
Key Responsibilities:
- Provide expert-level support for Azure Machine Learning, OpenAI, Cognitive Services, Synapse, and Data Factory.
- Troubleshoot real-time data integration pipelines, model deployments, inference performance, and security configurations.
- Guide customers in implementing Responsible AI, security governance, and model versioning best practices.
- Collaborate with engineering teams to resolve service issues and release platform updates.
- Proactively identify patterns in customer challenges and build reusable diagnostics or automations.
- Work with cross-functional teams (Product, Engineering, Customer Success) to improve support processes and tooling.
Skillset:
- Strong foundation in Azure ML, the OpenAI API, and Cognitive Services (Text, Vision, Language).
- Experience with Synapse Analytics, Azure Data Factory, and Azure Databricks.
- Data engineering proficiency: SQL, PySpark, Power BI, data ingestion and transformation.
- Hands-on with Cosmos DB, Azure Data Lake Storage Gen2, and secure data lake architecture.
- Machine learning pipeline troubleshooting and model monitoring techniques.
- Knowledge of Python, TensorFlow, PyTorch, and security practices in AI deployment.
- Proven experience with cloud deployment and development tools (e.g., PowerShell, CLI, REST API, Visual Studio, .NET, SDK, JSON, ARM/Bicep templates).
Mandatory Skills: MS Azure MLOps. Experience: 3-5 years.
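The model monitoring techniques mentioned above often start with a simple input-drift check comparing live feature statistics against a training-time baseline. A minimal sketch (the threshold and data are invented for illustration; real MLOps setups would use a proper drift metric and alerting service):

```python
def mean_shift_alert(baseline, current, threshold=0.2):
    """Flag drift when the relative shift in a feature's mean exceeds threshold."""
    base_mean = sum(baseline) / len(baseline)
    cur_mean = sum(current) / len(current)
    if base_mean == 0:
        return cur_mean != 0
    return abs(cur_mean - base_mean) / abs(base_mean) > threshold

# Hypothetical feature values: training baseline vs. two production windows.
print(mean_shift_alert([10, 12, 11], [10, 11, 12]))  # False: means are close
print(mean_shift_alert([10, 12, 11], [20, 22, 21]))  # True: mean roughly doubled
```

When such a check fires, the usual next steps are inspecting the upstream ingestion pipeline and deciding whether the model needs retraining.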
Posted 1 day ago
5.0 - 10.0 years
20 - 25 Lacs
noida, hyderabad, chennai
Work from Office
ADF, Databricks, Python, PySpark, SQL, Azure DevOps, Azure Synapse
Posted 1 day ago
7.0 - 12.0 years
15 - 25 Lacs
bengaluru
Hybrid
Aziro is hiring Azure Data Engineers!
Job Title: Senior Azure Data Engineer
Experience Required: 7+ years
Location: Bangalore (Hybrid)
Notice Period: Immediate / quick joiners (15 to 20 days)
Job Description: We are seeking a Senior Azure Data Engineer with hands-on expertise in Azure Databricks, Azure Data Factory (ADF), Python/PySpark, and SQL. The ideal candidate should have strong experience designing and building scalable data pipelines in the Azure cloud.
Primary Responsibilities:
- Design and develop scalable ETL/ELT pipelines using ADF and Databricks (PySpark).
- Write efficient, production-grade PySpark and SQL code for data transformations.
- Build and maintain data models, ingestion frameworks, and processing pipelines.
- Collaborate with cross-functional teams to gather requirements and implement solutions.
- Ensure performance tuning, error handling, and monitoring of data workflows.
Mandatory Skills:
- Strong hands-on experience with Azure Databricks and Azure Data Factory (ADF).
- Proficiency in Python and PySpark.
- Excellent SQL skills for data extraction, transformation, and analysis.
- Good understanding of data lake architecture and best practices.
Good to Have / Secondary Skills:
- Experience with CI/CD pipelines and GitHub-based version control.
- Exposure to DevOps practices in data engineering.
- Knowledge of data quality frameworks and orchestration tools is a plus.
Soft Skills:
- Strong analytical and problem-solving skills.
- Effective communication and stakeholder collaboration.
- Agile mindset and ability to adapt to evolving data needs.
Posted 1 day ago
5.0 - 7.0 years
6 - 10 Lacs
bengaluru
Work from Office
Job Title: Sr. Data Engineer - Ontology & Knowledge Graph Specialist
Department: Platform Engineering
Summary: We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.
Responsibilities:
Ontology Development:
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards.
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures.
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes.
Data Modeling:
- Design and implement semantic and syntactic data models that adhere to ontological principles.
- Create data models that are scalable, flexible, and adaptable to changing business needs.
- Integrate data models with existing data infrastructure and applications.
Knowledge Graph Implementation:
- Design and build knowledge graphs based on ontologies and data models.
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance.
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems.
Data Quality and Governance:
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs.
- Define and implement data governance processes and standards for ontology development and maintenance.
Collaboration and Communication:
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions.
- Communicate complex technical concepts clearly and effectively to diverse audiences.
Qualifications:
Education:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
Experience:
- 5+ years of experience in data engineering or a related role.
- Proven experience in ontology development using BFO and CCO or similar ontological frameworks.
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL.
- Proficiency in Python, SQL, and other programming languages used for data engineering.
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus.
Desired Skills:
- Familiarity with machine learning and natural language processing techniques.
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
- Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
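The RDF/SPARQL work described above centers on subject-predicate-object triples and pattern matching over them. A toy in-memory triple store captures the idea (a simplified stand-in for real RDF tooling; the entity names are invented):

```python
# Toy triple store; None in a pattern acts as a wildcard, loosely
# mimicking a SPARQL basic graph pattern like (?s ?p ?o).
triples = {
    ("PolicyHolder", "subClassOf", "Agent"),
    ("alice", "type", "PolicyHolder"),
    ("alice", "holdsPolicy", "policy42"),
    ("policy42", "type", "MotorPolicy"),
}

def match(s=None, p=None, o=None):
    """Return all triples matching the given pattern, sorted for stable output."""
    return sorted(
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    )

# Everything asserted about alice:
print(match(s="alice"))
# [('alice', 'holdsPolicy', 'policy42'), ('alice', 'type', 'PolicyHolder')]
```

Real triple stores add typed literals, named graphs, inference over class hierarchies (e.g., deriving that alice is also an Agent), and an actual SPARQL engine, but the pattern-matching core is the same.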
Posted 1 day ago
6.0 - 10.0 years
5 - 9 Lacs
chennai
Work from Office
The Azure Databricks Engineer plays a critical role in establishing and maintaining an efficient data ecosystem within an organization. This position is integral to the development of data solutions leveraging the capabilities of Microsoft Azure Databricks. The engineer will work closely with data scientists and analytics teams to facilitate the transformation of raw data into actionable insights. With increasing reliance on big data technologies and cloud-based solutions, having an expert on board is vital for driving data-driven decision-making processes. The Azure Databricks Engineer will also be responsible for optimizing data workflows, ensuring data quality, and deploying scalable data solutions that align with organizational goals. This role requires not only technical expertise in handling large volumes of data but also the ability to collaborate across various functional teams to enhance operational efficiency.
- Design and implement scalable data pipelines using Azure Databricks.
- Develop ETL processes to efficiently extract, transform, and load data.
- Collaborate with data scientists and analysts to define and refine data requirements.
- Optimize Spark jobs for performance and efficiency.
- Monitor and troubleshoot production workflows and jobs.
- Implement data quality checks and validation processes.
- Create and maintain technical documentation related to data architecture.
- Conduct code reviews to ensure best practices are followed.
- Work on integrating data from various sources, including databases, APIs, and third-party services.
- Utilize SQL and Python for data manipulation and analysis.
- Collaborate with DevOps teams to deploy and maintain data solutions.
- Stay updated with the latest trends and updates in Azure Databricks and related technologies.
- Facilitate data visualization initiatives for better data-driven insights.
- Provide training and support to team members on data tools and practices.
- Participate in cross-functional projects to enhance data sharing and access.
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 6 years of experience in data engineering or a related domain.
- Strong expertise in Azure Databricks and data lake concepts.
- Proficiency with SQL, Python, and Spark.
- Solid understanding of data warehousing concepts.
- Experience with ETL tools and frameworks.
- Familiarity with cloud platforms such as Azure, AWS, or Google Cloud.
- Excellent problem-solving and analytical skills.
- Ability to work collaboratively in a diverse team environment.
- Experience with data visualization tools such as Power BI or Tableau.
- Strong communication skills, with the ability to convey technical concepts to non-technical stakeholders.
- Knowledge of data governance and data quality best practices.
- Hands-on experience with big data technologies and frameworks.
- A relevant certification in Azure is a plus.
- Ability to adapt to changing technologies and evolving business requirements.
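The data quality checks and validation processes mentioned above are often prototyped as simple row-level rules before adopting a heavier framework. A minimal sketch (the column names and completeness rule are illustrative, not from the posting):

```python
def validate_rows(rows, required=("id", "amount")):
    """Split rows into valid and invalid based on a simple completeness rule.

    Returns (valid_rows, invalid_rows) where each invalid entry carries
    the list of columns it was missing.
    """
    valid, invalid = [], []
    for row in rows:
        missing = [c for c in required if row.get(c) is None]
        if missing:
            invalid.append((row, missing))
        else:
            valid.append(row)
    return valid, invalid

# Hypothetical ingested records, one of which fails the rule.
rows = [{"id": 1, "amount": 9.5}, {"id": 2, "amount": None}]
valid, invalid = validate_rows(rows)
print(len(valid), len(invalid))  # 1 1
```

In a Databricks pipeline the same rule would typically be expressed as a DataFrame filter, with failing rows routed to a quarantine table rather than silently dropped.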
Posted 1 day ago
6.0 - 10.0 years
9 - 13 Lacs
hyderabad
Work from Office
The Azure Databricks Engineer plays a critical role in establishing and maintaining an efficient data ecosystem within an organization. This position is integral to the development of data solutions leveraging the capabilities of Microsoft Azure Databricks. The engineer will work closely with data scientists and analytics teams to facilitate the transformation of raw data into actionable insights. With increasing reliance on big data technologies and cloud-based solutions, having an expert on board is vital for driving data-driven decision-making processes. The Azure Databricks Engineer will also be responsible for optimizing data workflows, ensuring data quality, and deploying scalable data solutions that align with organizational goals. This role requires not only technical expertise in handling large volumes of data but also the ability to collaborate across various functional teams to enhance operational efficiency.
- Design and implement scalable data pipelines using Azure Databricks.
- Develop ETL processes to efficiently extract, transform, and load data.
- Collaborate with data scientists and analysts to define and refine data requirements.
- Optimize Spark jobs for performance and efficiency.
- Monitor and troubleshoot production workflows and jobs.
- Implement data quality checks and validation processes.
- Create and maintain technical documentation related to data architecture.
- Conduct code reviews to ensure best practices are followed.
- Work on integrating data from various sources, including databases, APIs, and third-party services.
- Utilize SQL and Python for data manipulation and analysis.
- Collaborate with DevOps teams to deploy and maintain data solutions.
- Stay updated with the latest trends and updates in Azure Databricks and related technologies.
- Facilitate data visualization initiatives for better data-driven insights.
- Provide training and support to team members on data tools and practices.
- Participate in cross-functional projects to enhance data sharing and access.
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 6 years of experience in data engineering or a related domain.
- Strong expertise in Azure Databricks and data lake concepts.
- Proficiency with SQL, Python, and Spark.
- Solid understanding of data warehousing concepts.
- Experience with ETL tools and frameworks.
- Familiarity with cloud platforms such as Azure, AWS, or Google Cloud.
- Excellent problem-solving and analytical skills.
- Ability to work collaboratively in a diverse team environment.
- Experience with data visualization tools such as Power BI or Tableau.
- Strong communication skills, with the ability to convey technical concepts to non-technical stakeholders.
- Knowledge of data governance and data quality best practices.
- Hands-on experience with big data technologies and frameworks.
- A relevant certification in Azure is a plus.
- Ability to adapt to changing technologies and evolving business requirements.
Posted 1 day ago
7.0 - 12.0 years
17 - 30 Lacs
bengaluru, delhi / ncr, mumbai (all areas)
Work from Office
Job Title: Azure Data Architect
Experience: More than 7 years
Location: Pan India
Employment Type: Full-Time
Technology: SQL, ADF, ADLS, Synapse, PySpark, Databricks, data modelling
Key Responsibilities:
- Requirement gathering and analysis.
- Design of data architecture and data models to ingest data.
- Experience with different databases such as Synapse, SQL DB, and Snowflake.
- Design and implement data pipelines using Azure Data Factory, Databricks, and Synapse.
- Create and manage Azure SQL Data Warehouses and Azure Cosmos DB databases.
- Extract, transform, and load (ETL) data from various sources into Azure Data Lake Storage.
- Implement data security and governance measures.
- Monitor and optimize data pipelines for performance and efficiency.
- Troubleshoot and resolve data engineering issues.
- Hands-on experience with Azure Functions and other components such as real-time streaming.
- Oversee Azure billing processes, conducting analyses to ensure cost-effectiveness and efficiency in data operations.
- Provide optimized solutions for any problem related to data engineering.
- Ability to work with a variety of sources: relational databases, APIs, file systems, real-time streams, CDC, etc.
- Strong knowledge of Databricks and Delta tables.
Posted 1 day ago
8.0 - 10.0 years
3 - 7 Lacs
kolkata, pune
Work from Office
Job Title: ADF Engineer
Work Location: Pune (priority) and Kolkata
Skills Required: Microsoft Azure, Databricks, Azure Data Factory
Experience Range in Required Skills: 8+ years (5-6 years relevant to Azure services will be considered)
Job Description: Only 5-6 years of relevant experience will be considered. Should have PySpark, SQL, and Azure services (ADF, Databricks, Synapse).
- Designing and implementing data ingestion pipelines from multiple sources using Azure Databricks.
- Developing scalable and reusable frameworks for ingesting data sets.
- Integrating the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is always maintained.
- Working with event-based / streaming technologies to ingest and process data.
- Working with other members of the project team to support delivery of additional project components (API interfaces, search).
- Evaluating the performance and applicability of multiple tools against customer requirements.
- Knowledge of deployment frameworks such as CI/CD and the GitHub check-in process.
- Able to perform data analytics, data analysis, and data profiling.
- Good communication skills.
Posted 1 day ago
5.0 - 10.0 years
20 - 35 Lacs
hyderabad, pune, bengaluru
Hybrid
EPAM has a presence across 40+ countries globally, with 55,000+ professionals and numerous delivery centers. Key locations are North America, Eastern Europe, Central Europe, Western Europe, APAC, and the Middle East, with development centers in India (Hyderabad, Pune, and Bangalore).
Location: Gurgaon/Pune/Hyderabad/Bengaluru/Chennai
Work Mode: Hybrid (2-3 days in office per week)
Job Description:
- 5-14 years of experience in Big Data and related technologies.
- Expert-level understanding of distributed computing principles.
- Expert-level knowledge and experience in Apache Spark.
- Hands-on programming with Python.
- Experience with building stream-processing systems using Spark Streaming.
- Good understanding of SQL queries, joins, stored procedures, and relational schemas.
- Experience with NoSQL databases such as HBase, Cassandra, and MongoDB.
- Knowledge of ETL techniques and frameworks.
- Performance tuning of Spark jobs.
- Experience with native Azure cloud data services.
- Ability to lead a team efficiently.
- Experience with designing and implementing Big Data solutions.
- Practitioner of the Agile methodology.
We offer:
- Opportunity to work on technical challenges that may have impact across geographies.
- Vast opportunities for self-development: online university, global knowledge-sharing opportunities, and learning through external certifications.
- Opportunity to share your ideas on international platforms.
- Sponsored tech talks and hackathons.
- Possibility to relocate to any EPAM office for short- and long-term projects.
- Focused individual development.
- Benefits package: health and medical benefits, retirement benefits, paid time off, flexible benefits.
- Forums to explore passions beyond work (CSR, photography, painting, sports, etc.).
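The stream-processing experience asked for above, in the Spark Streaming style, reduces to grouping events into time windows and aggregating within each. A toy tumbling-window count in plain Python shows the core idea (the window size and event format are invented for illustration; Spark Structured Streaming does the same thing distributed and fault-tolerant):

```python
from collections import Counter

def tumbling_window_counts(events, window=10):
    """Count events per fixed, non-overlapping time window.

    events: iterable of (timestamp_seconds, key) pairs.
    Returns {window_start: Counter(key -> count)}.
    """
    windows = {}
    for ts, key in events:
        # Floor the timestamp to the start of its window.
        start = (ts // window) * window
        windows.setdefault(start, Counter())[key] += 1
    return windows

events = [(1, "click"), (4, "view"), (12, "click"), (19, "click")]
print(tumbling_window_counts(events))
# {0: Counter({'click': 1, 'view': 1}), 10: Counter({'click': 2})}
```

What a real streaming engine adds on top is handling late and out-of-order events (watermarks), checkpointing state, and emitting window results incrementally instead of after the stream ends.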
Posted 1 day ago
4.0 - 9.0 years
10 - 20 Lacs
hyderabad/secunderabad, pune, gurugram
Work from Office
About the Company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations with our experience in retail, high-technology, and manufacturing. With five global delivery centers and 1900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm.
Role: Microsoft (Azure) Fabric Lead/Developer
Experience: 5+ years
Required skills: Azure, Microsoft Fabric, Data Factory, Azure Databricks, Azure DevOps, Azure Data Lake Storage, SQL, and Synapse data warehouse.
Job Location: Hyderabad, Pune, Gurgaon.
Role & responsibilities:
- Define technical architecture for Microsoft Fabric data implementations and migrations.
- Be the subject matter expert for migration from Azure and other clouds to Microsoft Fabric.
- Help design, develop, and solve issues with the data pipelines that power business applications with Microsoft Fabric technologies (OneLake, Data Factory, Synapse Analytics, etc.).
- Establish and champion engineering best practices: code evolution, metadata management, cloud DevOps, etc.
- Mentor and lead the data engineers in the team, leading the technical aspects of building and maintaining a scalable, reliable, and secure data platform.
- Maintain up-to-date knowledge of data engineering technology advancements and best practices for new technology offerings.
- Work with the practice CoE to define new offerings and services using Fabric.
Qualifications:
- At least 5+ years of experience in Azure data engineering project implementations.
- At least one project implementation using Microsoft Fabric.
- Knowledge and expertise in Microsoft Fabric technologies: OneLake, Data Factory, Synapse Analytics.
- Strong hands-on experience with Python, PySpark, and SQL.
- Conceptual understanding of data warehousing and data modeling processes.
Posted 1 day ago
4.0 - 9.0 years
10 - 20 Lacs
hyderabad, pune, gurugram
Work from Office
About the Company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations with our experience in retail, high-technology, and manufacturing. With five global delivery centers and 1900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm.
Role: Microsoft (Azure) Fabric Lead/Developer
Experience: 5+ years
Required skills: Azure, Microsoft Fabric, Data Factory, Azure Databricks, Azure DevOps, Azure Data Lake Storage, SQL, and Synapse data warehouse.
Job Location: Hyderabad, Pune, Gurgaon.
Role & responsibilities:
- Define technical architecture for Microsoft Fabric data implementations and migrations.
- Be the subject matter expert for migration from Azure and other clouds to Microsoft Fabric.
- Help design, develop, and solve issues with the data pipelines that power business applications with Microsoft Fabric technologies (OneLake, Data Factory, Synapse Analytics, etc.).
- Establish and champion engineering best practices: code evolution, metadata management, cloud DevOps, etc.
- Mentor and lead the data engineers in the team, leading the technical aspects of building and maintaining a scalable, reliable, and secure data platform.
- Maintain up-to-date knowledge of data engineering technology advancements and best practices for new technology offerings.
- Work with the practice CoE to define new offerings and services using Fabric.
Qualifications:
- At least 5+ years of experience in Azure data engineering project implementations.
- At least one project implementation using Microsoft Fabric.
- Knowledge and expertise in Microsoft Fabric technologies: OneLake, Data Factory, Synapse Analytics.
- Strong hands-on experience with Python, PySpark, and SQL.
- Conceptual understanding of data warehousing and data modeling processes.
Posted 1 day ago
10.0 - 20.0 years
27 - 40 Lacs
hyderabad
Work from Office
Role & Responsibilities:
- 10-18 years of relevant experience in architecting, designing, developing, and delivering data solutions, both on-premises and predominantly with Azure Cloud Data & AI services.
- Batch, real-time, and hybrid solutions with high velocity and large volumes. Experience with a traditional big data framework like Hadoop will be beneficial.
- Experience in other cloud-based data solutions like AWS and GCP.
- Architectural advisory for big data solutions with the Azure and Microsoft technical stack.
- Pre-sales and account mining to find new opportunities and propose the right solutions for a given context.
- Data warehousing concepts: dimensional modelling, tabular modelling, Star and Snowflake models, MDX/DAX, etc.
- Strong technical knowledge, including hands-on experience with most of: SQL, SQL Warehouse, Azure Data Factory, Azure Storage accounts, Data Lake, Databricks, Azure Functions, Synapse, Stream Analytics, and Power BI or another visualization tool.
- Working with NoSQL databases like Cosmos DB.
- Working with various file formats and storage types.
- ETL- and ELT-based data orchestration for batch and real-time data.
- Strong programming skills: experience and expertise in one of the following: Java, Python, Scala, C#/.NET.
- Driving decisions collaboratively, resolving conflicts, and ensuring follow-through, with exceptional verbal and written communication skills.
- Experience working on real-time, end-to-end projects using Agile/Waterfall methodology and associated tools.
- Understand the client scenario, derive or understand business requirements, and propose the right data solution architecture using both on-premises and cloud-based services.
- Create scalable and efficient data architecture that respects data integrity, quality, security, and reuse, among other aspects, laying the foundation for present and future scalable solutions.
- Understand and communicate not only data and technical terms but also functional and business terms with stakeholders.
Focus on the true business value delivery with efficient architecture and technical solutions. Have an opinion and advise the clients in the right path aligned with their business, data and technical strategies while aligning with market trends in the data & AI solutions Establish Data Architecture with modern data driven principles like Flexibility at scale, parallel and distributed processing, democratized data access that enables them to be more productive. Thought leadership to provide Point of views, ideate and deliver Webinars, be the custodian of the best practices. Ensure that solution exhibits high levels of performance, security, scalability, maintainability, appropriate reusability and reliability upon deployment Maintain and upgrade technical skills, keeping up to date with market trends. Educate and guide both customers and fellow colleagues Expertise or knowledge in Visualization / Reporting like Qlik/Power BI/Tableau
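Star and Snowflake schemas come up repeatedly in these requirements. As a minimal illustration of the star pattern (all table and column names below are hypothetical, and plain dicts stand in for warehouse tables), a fact table is keyed to denormalized dimension tables and aggregated via a star join:

```python
# Minimal star-schema sketch: one fact table, denormalized dimensions.
# All names are illustrative; a real model lives in a warehouse, not dicts.

dim_customer = {
    1: {"name": "Asha", "segment": "Retail"},
    2: {"name": "Ravi", "segment": "Corporate"},
}
fact_sales = [
    {"customer_id": 1, "product_id": 10, "amount": 1200.0},
    {"customer_id": 2, "product_id": 11, "amount": 800.0},
    {"customer_id": 1, "product_id": 11, "amount": 500.0},
]

def revenue_by_segment(facts, customers):
    """Aggregate fact rows by a dimension attribute (the classic star join)."""
    totals = {}
    for row in facts:
        segment = customers[row["customer_id"]]["segment"]
        totals[segment] = totals.get(segment, 0.0) + row["amount"]
    return totals

print(revenue_by_segment(fact_sales, dim_customer))  # -> {'Retail': 1700.0, 'Corporate': 800.0}
```

A snowflake model would further normalize the dimensions (e.g., splitting segment into its own table), trading join cost for reduced redundancy.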
Posted 1 day ago
10.0 - 14.0 years
40 - 45 Lacs
hyderabad
Work from Office
Skills: Cloudera, Big Data, Hadoop, Spark, Kafka, Hive, CDH Clusters
- Design and implement Cloudera-based data platforms, including cluster sizing, configuration, and optimization.
- Install, configure, and administer Cloudera Manager and CDP clusters, managing all aspects of the cluster lifecycle.
- Monitor and troubleshoot platform performance, identifying and resolving issues promptly.
- Review and maintain the data ingestion and processing pipelines on the Cloudera platform.
- Collaborate with data engineers and data scientists to design and optimize data models, ensuring efficient data storage and retrieval.
- Implement and enforce security measures for the Cloudera platform, including authentication, authorization, and encryption.
- Manage platform user access and permissions, ensuring compliance with data privacy regulations and internal policies.
- Experience in creating technology roadmaps for the Cloudera platform.
- Stay up to date with the latest Cloudera and big data technologies, and recommend and implement relevant updates and enhancements to the platform.
- Experience planning, testing, and executing upgrades involving Cloudera components while ensuring platform stability and security.
- Document platform configurations, processes, and procedures, and provide training and support to other team members as needed.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience as a Cloudera platform engineer or in a similar role, with a strong understanding of Cloudera Manager and CDH clusters.
- Expertise in designing, implementing, and maintaining scalable, high-performance data platforms using Cloudera technologies such as Hadoop, Spark, Hive, and Kafka.
- Strong knowledge of big data concepts and technologies, data modeling, and data warehousing principles.
- Familiarity with data security and compliance requirements, and experience implementing security measures for Cloudera platforms.
- Proficiency in Linux system administration and scripting languages (e.g., Shell, Python).
- Strong troubleshooting and problem-solving skills, with the ability to diagnose and resolve platform issues quickly.
- Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Experience with Azure Data Factory/Azure Databricks/Azure Synapse is a plus.
Timings: 10 am to 7.30 pm; 2 days WFO and 3 days WFH.
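Cluster sizing, mentioned in the responsibilities above, is largely capacity arithmetic. A rough back-of-the-envelope sketch (the replication factor, headroom, and per-node figures below are illustrative assumptions, not Cloudera guidance):

```python
import math

def nodes_needed(raw_tb, replication=3, per_node_usable_tb=20.0, headroom=0.25):
    """Estimate data-node count: raw data x HDFS replication, plus free-space headroom.

    All defaults are assumptions for illustration: HDFS commonly replicates 3x,
    and some headroom is kept free for compaction and growth.
    """
    required_tb = raw_tb * replication * (1 + headroom)
    return math.ceil(required_tb / per_node_usable_tb)

# e.g. 100 TB raw, 3x replication, 25% headroom, 20 TB usable per node
print(nodes_needed(100))  # -> 19
```

Real sizing also weighs CPU, memory, and I/O profiles per workload, which this storage-only estimate ignores.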
Posted 1 day ago
5.0 - 7.0 years
10 - 14 Lacs
hyderabad
Work from Office
Summary: We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.
Responsibilities:
Ontology Development:
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards.
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures.
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes.
Data Modeling:
- Design and implement semantic and syntactic data models that adhere to ontological principles.
- Create data models that are scalable, flexible, and adaptable to changing business needs.
- Integrate data models with existing data infrastructure and applications.
Knowledge Graph Implementation:
- Design and build knowledge graphs based on ontologies and data models.
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance.
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems.
Data Quality and Governance:
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs.
- Define and implement data governance processes and standards for ontology development and maintenance.
Collaboration and Communication:
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions.
- Communicate complex technical concepts clearly and effectively to diverse audiences.
Qualifications:
Education:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
Experience:
- 5+ years of experience in data engineering or a related role.
- Proven experience in ontology development using BFO and CCO or similar ontological frameworks.
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL.
- Proficiency in Python, SQL, and other programming languages used for data engineering.
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus.
Desired Skills:
- Familiarity with machine learning and natural language processing techniques.
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
- Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
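The RDF and SPARQL work described above reduces to storing subject-predicate-object triples and matching patterns over them. A toy stdlib-only sketch of that idea (real work would use a triple store and SPARQL; every identifier below is invented for illustration):

```python
# Toy triple store: subject-predicate-object tuples plus a pattern matcher.
# None acts as a wildcard, loosely analogous to a SPARQL variable.
triples = {
    ("PolicyA", "rdf:type", "InsuranceProduct"),
    ("PolicyA", "coversRisk", "MotorAccident"),
    ("PolicyB", "rdf:type", "InsuranceProduct"),
    ("PolicyB", "coversRisk", "Hospitalization"),
}

def match(s=None, p=None, o=None):
    """Return triples matching the pattern, sorted for deterministic output."""
    return sorted(
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    )

# "SELECT ?s WHERE { ?s rdf:type InsuranceProduct }" in spirit:
products = [t[0] for t in match(p="rdf:type", o="InsuranceProduct")]
print(products)  # -> ['PolicyA', 'PolicyB']
```

An ontology constrains which predicates and classes are valid; the triples are the knowledge-graph instances that conform to it.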
Posted 1 day ago
6.0 - 11.0 years
11 - 21 Lacs
chennai
Work from Office
Azure ADF, Databricks with PySpark/Python
Data Engineer with a PySpark and Databricks skill set; 4+ years of data engineering experience, with a minimum of 3+ years of hands-on experience in PySpark for data transformations.
- Sound knowledge of Azure Databricks and Azure Data Lake.
- Sound knowledge of Databricks, Logic Apps, Delta Lake, Azure SQL, Azure Blob Storage, Azure Functions, Azure Synapse, and Azure Purview.
- Extensive knowledge of big data concepts such as Hive and the Spark framework.
- Should be able to write complex SQL queries.
- Sound understanding of DWH concepts.
- Strong hands-on experience with Python or Scala.
- Hands-on experience with Azure Data Factory.
- Should be able to coordinate independently with business stakeholders and understand the business requirements.
- Knowledge of DevOps and Agile methodology-based projects; implement the requirements using ADF/Databricks.
- Knowledge of version control tools such as Git/Bitbucket.
- Should have a basic understanding of Batch Account configuration and the various control and monitoring options.
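Since writing complex SQL over warehouse-style data is a core requirement here, a small self-contained example using the stdlib sqlite3 module (the table and columns are invented for illustration; on Databricks the same query shape would run as Spark SQL):

```python
import sqlite3

# In-memory database standing in for a warehouse table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE claims (claim_id INTEGER, region TEXT, amount REAL);
    INSERT INTO claims VALUES
        (1, 'South', 1000.0),
        (2, 'South', 3000.0),
        (3, 'North', 2000.0),
        (4, 'North', 500.0);
""")

# Aggregate-then-filter pattern: regions whose total claim amount exceeds a threshold.
rows = conn.execute("""
    SELECT region, SUM(amount) AS total, COUNT(*) AS n
    FROM claims
    GROUP BY region
    HAVING SUM(amount) > 2500
    ORDER BY total DESC
""").fetchall()
print(rows)  # -> [('South', 4000.0, 2)]
```

The GROUP BY / HAVING distinction (filter on aggregates, not rows) is exactly the kind of thing "complex SQL" interviews probe.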
Posted 1 day ago
6.0 - 11.0 years
11 - 21 Lacs
pune
Work from Office
Azure ADF, Databricks with PySpark/Python
Data Engineer with a PySpark and Databricks skill set; 4+ years of data engineering experience, with a minimum of 3+ years of hands-on experience in PySpark for data transformations.
- Sound knowledge of Azure Databricks and Azure Data Lake.
- Sound knowledge of Databricks, Logic Apps, Delta Lake, Azure SQL, Azure Blob Storage, Azure Functions, Azure Synapse, and Azure Purview.
- Extensive knowledge of big data concepts such as Hive and the Spark framework.
- Should be able to write complex SQL queries.
- Sound understanding of DWH concepts.
- Strong hands-on experience with Python or Scala.
- Hands-on experience with Azure Data Factory.
- Should be able to coordinate independently with business stakeholders and understand the business requirements.
- Knowledge of DevOps and Agile methodology-based projects; implement the requirements using ADF/Databricks.
- Knowledge of version control tools such as Git/Bitbucket.
- Should have a basic understanding of Batch Account configuration and the various control and monitoring options.
Posted 1 day ago
6.0 - 11.0 years
11 - 21 Lacs
bengaluru
Work from Office
Azure ADF, Databricks with PySpark/Python
Data Engineer with a PySpark and Databricks skill set; 4+ years of data engineering experience, with a minimum of 3+ years of hands-on experience in PySpark for data transformations.
- Sound knowledge of Azure Databricks and Azure Data Lake.
- Sound knowledge of Databricks, Logic Apps, Delta Lake, Azure SQL, Azure Blob Storage, Azure Functions, Azure Synapse, and Azure Purview.
- Extensive knowledge of big data concepts such as Hive and the Spark framework.
- Should be able to write complex SQL queries.
- Sound understanding of DWH concepts.
- Strong hands-on experience with Python or Scala.
- Hands-on experience with Azure Data Factory.
- Should be able to coordinate independently with business stakeholders and understand the business requirements.
- Knowledge of DevOps and Agile methodology-based projects; implement the requirements using ADF/Databricks.
- Knowledge of version control tools such as Git/Bitbucket.
- Should have a basic understanding of Batch Account configuration and the various control and monitoring options.
Posted 1 day ago
6.0 - 9.0 years
9 - 13 Lacs
hyderabad
Work from Office
About the job:
Role: Microsoft Fabric Data Engineer
Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end implementation in Microsoft Fabric.
Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing the relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.
Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable, efficient data pipelines using Data Factory in Fabric (Data Pipeline, Dataflow Gen2, etc.), PySpark notebooks, Spark SQL, and Python, covering data ingestion, transformation, and loading.
- Experience ingesting data from SAP systems such as SAP ECC/S4HANA/SAP BW is a plus.
- Nice to have: ability to develop dashboards or reports using tools like Power BI.
Coding Fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.
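The ingestion, transformation, and loading steps above can be sketched as three small composable stages. Everything here is a stdlib illustration of the ETL pattern, not Fabric or Data Factory code; the CSV fields and quality rule are invented:

```python
import csv
import io

def extract(raw_csv):
    """Ingest: parse raw CSV into rows (a pipeline copy activity in spirit)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows):
    """Transform: cast types and drop rows failing a basic quality rule."""
    clean = []
    for r in rows:
        try:
            clean.append({"id": int(r["id"]), "amount": float(r["amount"])})
        except ValueError:
            continue  # a real pipeline would route these to a quarantine table
    return clean

def load(rows, target):
    """Load: append to the target store (a list standing in for a warehouse table)."""
    target.extend(rows)
    return len(rows)

raw = "id,amount\n1,100.5\n2,not-a-number\n3,40\n"
table = []
loaded = load(transform(extract(raw)), table)
print(loaded, table)  # -> 2 [{'id': 1, 'amount': 100.5}, {'id': 3, 'amount': 40.0}]
```

In ELT the order of the last two stages flips: raw rows land in the warehouse first and the transform runs there, typically as SQL.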
Posted 1 day ago
7.0 - 12.0 years
10 - 20 Lacs
gurugram
Hybrid
• Lead the data engineering team by driving the design, build, test, and launch of new data pipelines and models on the production data platform
Posted 1 day ago
5.0 - 8.0 years
6 - 10 Lacs
telangana
Work from Office
Key Responsibilities:
Team Leadership:
- Lead and mentor a team of Azure Data Engineers, providing technical guidance and support.
- Foster a collaborative and innovative team environment.
- Conduct regular performance reviews and set development goals for team members.
- Organize training sessions to enhance team skills and technical capabilities.
Azure Data Platform:
- Design, implement, and optimize scalable data solutions using Azure data services such as Azure Databricks, Azure Data Factory, Azure SQL Database, and Azure Synapse Analytics.
- Ensure data engineering best practices and data governance are followed.
- Stay up to date with Azure data technologies and recommend improvements to enhance data processing capabilities.
Data Architecture:
- Collaborate with data architects to design efficient and scalable data architectures.
- Define data modeling standards and ensure data integrity, security, and governance compliance.
Project Management:
- Work with project managers to define project scope, goals, and deliverables.
- Develop project timelines, allocate resources, and track progress.
- Identify and mitigate risks to ensure successful project delivery.
Collaboration & Communication:
- Collaborate with cross-functional teams including data scientists, analysts, and business stakeholders to deliver data-driven solutions.
- Communicate effectively with stakeholders to understand requirements and provide updates.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Team Lead or Manager in data engineering.
- Extensive experience with Azure data services and cloud technologies.
- Expertise in Azure Databricks, PySpark, and SQL.
- Strong understanding of data engineering best practices, data modeling, and ETL processes.
- Experience with agile development methodologies.
- Certifications in Azure data services (preferred).
Preferred Skills:
- Experience with big data technologies and data warehousing solutions.
- Familiarity with industry standards and compliance requirements.
- Ability to lead and mentor a team.
Posted 1 day ago
8.0 - 13.0 years
8 - 13 Lacs
telangana
Work from Office
Key Responsibilities:
Team Leadership:
- Lead and mentor a team of Azure Data Engineers, providing technical guidance and support.
- Foster a collaborative and innovative team environment.
- Conduct regular performance reviews and set development goals for team members.
- Organize training sessions to enhance team skills and technical capabilities.
Azure Data Platform:
- Design, implement, and optimize scalable data solutions using Azure data services such as Azure Databricks, Azure Data Factory, Azure SQL Database, and Azure Synapse Analytics.
- Ensure data engineering best practices and data governance are followed.
- Stay up to date with Azure data technologies and recommend improvements to enhance data processing capabilities.
Data Architecture:
- Collaborate with data architects to design efficient and scalable data architectures.
- Define data modeling standards and ensure data integrity, security, and governance compliance.
Project Management:
- Work with project managers to define project scope, goals, and deliverables.
- Develop project timelines, allocate resources, and track progress.
- Identify and mitigate risks to ensure successful project delivery.
Collaboration & Communication:
- Collaborate with cross-functional teams including data scientists, analysts, and business stakeholders to deliver data-driven solutions.
- Communicate effectively with stakeholders to understand requirements and provide updates.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Team Lead or Manager in data engineering.
- Extensive experience with Azure data services and cloud technologies.
- Expertise in Azure Databricks, PySpark, and SQL.
- Strong understanding of data engineering best practices, data modeling, and ETL processes.
- Experience with agile development methodologies.
- Certifications in Azure data services (preferred).
Preferred Skills:
- Experience with big data technologies and data warehousing solutions.
- Familiarity with industry standards and compliance requirements.
- Ability to lead and mentor a team.
Posted 1 day ago
6.0 - 10.0 years
9 - 13 Lacs
bengaluru
Work from Office
The Azure Databricks Engineer plays a critical role in establishing and maintaining an efficient data ecosystem within an organization. This position is integral to the development of data solutions leveraging the capabilities of Azure Databricks. The engineer will work closely with data scientists and analytics teams to facilitate the transformation of raw data into actionable insights. With the increasing reliance on big data technologies and cloud-based solutions, having an expert on board is vital for driving data-driven decision-making. The Azure Databricks Engineer will also be responsible for optimizing data workflows, ensuring data quality, and deploying scalable data solutions that align with organizational goals. This role requires not only technical expertise in handling large volumes of data but also the ability to collaborate across functional teams to enhance operational efficiency.
- Design and implement scalable data pipelines using Azure Databricks.
- Develop ETL processes to efficiently extract, transform, and load data.
- Collaborate with data scientists and analysts to define and refine data requirements.
- Optimize Spark jobs for performance and efficiency.
- Monitor and troubleshoot production workflows and jobs.
- Implement data quality checks and validation processes.
- Create and maintain technical documentation related to data architecture.
- Conduct code reviews to ensure best practices are followed.
- Integrate data from various sources, including databases, APIs, and third-party services.
- Utilize SQL and Python for data manipulation and analysis.
- Collaborate with DevOps teams to deploy and maintain data solutions.
- Stay updated on the latest trends in Azure Databricks and related technologies.
- Facilitate data visualization initiatives for better data-driven insights.
- Provide training and support to team members on data tools and practices.
- Participate in cross-functional projects to enhance data sharing and access.
Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 6 years of experience in data engineering or a related domain.
- Strong expertise in Azure Databricks and data lake concepts.
- Proficiency with SQL, Python, and Spark.
- Solid understanding of data warehousing concepts.
- Experience with ETL tools and frameworks.
- Familiarity with cloud platforms such as Azure, AWS, or Google Cloud.
- Excellent problem-solving and analytical skills.
- Ability to work collaboratively in a diverse team environment.
- Experience with data visualization tools such as Power BI or Tableau.
- Strong communication skills with the ability to convey technical concepts to non-technical stakeholders.
- Knowledge of data governance and data quality best practices.
- Hands-on experience with big data technologies and frameworks.
- A relevant certification in Azure is a plus.
- Ability to adapt to changing technologies and evolving business requirements.
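One responsibility in the posting above, implementing data quality checks and validation, is commonly expressed as a set of rules run against each batch, with failures reported per rule. A minimal stdlib sketch of that pattern (rule names, fields, and thresholds are invented for illustration):

```python
def check_batch(rows):
    """Run simple data-quality rules over a batch.

    Returns a dict mapping rule name -> indexes of offending rows;
    an empty dict means the batch passed all checks.
    """
    failures = {"missing_id": [], "negative_amount": []}
    for i, row in enumerate(rows):
        if row.get("id") is None:
            failures["missing_id"].append(i)
        if (row.get("amount") or 0) < 0:
            failures["negative_amount"].append(i)
    return {rule: idxs for rule, idxs in failures.items() if idxs}

batch = [
    {"id": 1, "amount": 250.0},
    {"id": None, "amount": 90.0},
    {"id": 3, "amount": -5.0},
]
print(check_batch(batch))  # -> {'missing_id': [1], 'negative_amount': [2]}
```

In a Databricks pipeline the same rule-per-column idea is usually applied with DataFrame filters or declarative expectations rather than a Python loop.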
Posted 1 day ago
3.0 - 5.0 years
13 - 15 Lacs
hyderabad, gurugram, chennai
Work from Office
Classic pipelines, PowerShell, YAML, ARM templates, Terraform/Bicep, CI/CD. Most important: experience with data lake and analytics technologies in Azure (e.g., Azure Data Lake Storage, Azure Data Factory, Azure Databricks), along with a data background in Azure and PowerShell. Location: Chennai, Hyderabad, Kolkata, Pune, Ahmedabad, Remote
Posted 1 day ago