10.0 - 20.0 years
50 - 70 Lacs
Hyderabad
Work from Office
Role & responsibilities: For this Engineering Director role, the most critical technical experiences are scalable system design, modern data architecture, and team enablement. Experience with languages like Java is key for backend systems, while Python remains important for orchestration and analytics workloads. From a tooling standpoint, familiarity with Kubernetes, Terraform, and observability stacks (e.g. Datadog, Grafana) is essential for operational excellence. On the data side, platforms such as Snowflake, Databricks, or other lakehouses underpin most modern pipelines, and a Director should be comfortable shaping and evolving data architecture decisions around them, informed by recommendations from Architects. Additionally, privacy and security are becoming first-class concerns, so experience with data access controls and compliance policies (GDPR, CCPA) is a strong differentiator. Finally, the ability to mentor engineers and guide technical and ad tech business strategy across teams, including cross-functional stakeholders in data science, customer success, and measurement, is an important characteristic for driving long-term success.
Posted 1 hour ago
6.0 - 11.0 years
15 - 27 Lacs
Bengaluru
Hybrid
Primary Responsibilities: Develop visual reports, dashboards and KPI scorecards using Power BI Desktop. Build Analysis Services reporting models. Connect to data sources, import data and transform data for Business Intelligence. Implement row-level security on data and understand application security layer models in Power BI. Integrate Power BI reports into other applications using embedded analytics such as the Power BI service (SaaS) or API automation. Use advanced-level calculations on the data set. Design and develop Azure-based, data-centric applications to manage large healthcare data workloads. Design, build, test and deploy streaming pipelines for data processing in real time and at scale. Create ETL packages. Make use of Azure cloud services for ingestion and data processing. Own feature development using Microsoft Azure native services such as App Service, Azure Functions, Azure Storage, Service Bus queues, Event Hubs, Event Grid, Application Gateway, Azure SQL, Azure Databricks, etc. Identify opportunities to fine-tune and optimize applications running on Microsoft Azure, covering cost reduction, adoption of cloud best practices, and data and application security, including scalability and high availability. Mentor the team on infrastructure, networking, data migration, monitoring and troubleshooting aspects of Microsoft Azure. Focus on automation using Infrastructure as Code (IaC), Jenkins, Azure DevOps, Terraform, etc. Communicate effectively with other engineers and QA. Establish, refine and integrate development and test environment tools and software as needed. Identify production and non-production application issues. This is a Senior Cloud Data Engineer position requiring about 7+ years of hands-on technical experience in data processing, reporting and cloud technologies, along with working knowledge of executing projects using Agile methodologies. Required Skills: 1. Be able to envision the overall solution for defined functional and non-functional requirements, and be able to define technologies, patterns and frameworks to materialize it. 2. Design and develop the framework of the system and be able to explain choices made; also write and review design documents explaining the overall architecture, framework and high-level design of the application. 3. Create, understand and validate the design and estimated effort for a given module/task, and be able to justify it. 4. Be able to define in-scope and out-of-scope items and assumptions taken while creating effort estimates. 5. Be able to identify and integrate well over all integration points in the context of a project as well as other applications in the environment. 6. Understand the business requirements and develop data models. Technical Skills: 1. Strong proficiency as a Cloud Data Engineer utilizing Power BI and Azure Databricks to support as well as design, develop and deploy requested updates to new and existing cloud-based services. 2. Experience with developing, implementing, monitoring and troubleshooting applications in the Azure public cloud. 3. Proficiency in data modeling and reporting. 4. Design and implement database schemas. 5. Design and development of well-documented source code. 6. Development of both unit testing and system testing scripts that will be incorporated into the QA process. 7. Automating all deployment steps with Infrastructure as Code (IaC) and Jenkins Pipeline as Code (JPaC) concepts. 8. Define guidelines and benchmarks for NFR considerations during project implementation. 9. Do required POCs to make sure that suggested design/technologies meet the requirements. Required Experience: 5+ to 10+ years of professional experience developing SQL, Power BI, SSIS and Azure Databricks solutions. 5+ to 10+ years of professional experience utilizing SQL Server for data storage in large-scale .NET solutions. Strong technical writing skills. Strong knowledge of build/deployment/unit testing tools. Highly motivated team player and a self-starter. Excellent verbal, phone, and written communication skills. Knowledge of cloud-based architecture and concepts. Required Qualifications: Graduate or Post Graduate in Computer Science/Engineering/Science/Mathematics or a related field with around 10 years of experience in executing data reporting solutions. Cloud certification, preferably Azure.
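As an illustration of the kind of Databricks work this posting describes (transforming data landed in Azure storage and publishing curated tables for Power BI), here is a minimal PySpark sketch; the storage account, paths, column names, and table names are hypothetical.

```python
# Illustrative only: paths, container names, columns, and table names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Read raw healthcare claims landed in ADLS Gen2 (hypothetical path).
raw = spark.read.format("parquet").load(
    "abfss://raw@examplelake.dfs.core.windows.net/claims/"
)

# Basic cleansing and an aggregate suitable for a Power BI KPI scorecard.
curated = (
    raw.dropDuplicates(["claim_id"])
       .withColumn("claim_month", F.date_trunc("month", F.col("claim_date")))
       .groupBy("claim_month", "provider_id")
       .agg(F.sum("claim_amount").alias("total_claim_amount"),
            F.count("claim_id").alias("claim_count"))
)

# Persist as a Delta table that Power BI can query through the Databricks connector.
curated.write.format("delta").mode("overwrite").saveAsTable("analytics.claims_monthly_kpi")
```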
Posted 1 hour ago
8.0 - 13.0 years
15 - 25 Lacs
Kolkata
Work from Office
Design and implement data architecture solutions that align with business requirements. Develop and maintain data models, data dictionaries, and data flow diagrams.
Posted 2 hours ago
6.0 - 11.0 years
11 - 21 Lacs
Udaipur, Jaipur, Bengaluru
Work from Office
Data Architect Kadel Labs is a leading IT services company delivering top-quality technology solutions since 2017, focused on enhancing business operations and productivity through tailored, scalable, and future-ready solutions. With deep domain expertise and a commitment to innovation, we help businesses stay ahead of technological trends. As a CMMI Level 3 and ISO 27001:2022 certified company, we ensure best-in-class process maturity and information security, enabling organizations to achieve their digital transformation goals with confidence and efficiency. Role: Data Architect Experience: 6-10 Yrs Location: Udaipur , Jaipur, Bangalore Domain: Telecom Job Description: We are seeking an experienced Telecom Data Architect to join our team. In this role, you will be responsible for designing comprehensive data architecture and technical solutions specifically for telecommunications industry challenges, leveraging TM forum frameworks and modern data platforms. You will work closely with customers, and technology partners to deliver data solutions that address complex telecommunications business requirements including customer experience management, network optimization, revenue assurance, and digital transformation initiatives. Key Responsibilities: Design and articulate enterprise-scale telecom data architectures incorporating TMforum standards and frameworks, including SID (Shared Information/Data Model), TAM (Telecom Application Map), and eTOM (enhanced Telecom Operations Map) Develop comprehensive data models aligned with TMforum guidelines for telecommunications domains such as Customer, Product, Service, Resource, and Partner management Create data architectures that support telecom-specific use cases including customer journey analytics, network performance optimization, fraud detection, and revenue assurance Design solutions leveraging Microsoft Azure and Databricks for telecom data processing and analytics Conduct technical discovery sessions with telecom clients to understand their OSS/BSS architecture, network analytics needs, customer experience requirements, and digital transformation objectives Design and deliver proof of concepts (POCs) and technical demonstrations showcasing modern data platforms solving real-world telecommunications challenges Create comprehensive architectural diagrams and implementation roadmaps for telecom data ecosystems spanning cloud, on-premises, and hybrid environments Evaluate and recommend appropriate big data technologies, cloud platforms, and processing frameworks based on telecom-specific requirements and regulatory compliance needs. Design data governance frameworks compliant with telecom industry standards and regulatory requirements (GDPR, data localization, etc.) 
Stay current with the latest advancements in data technologies including cloud services, data processing frameworks, and AI/ML capabilities Contribute to the development of best practices, reference architectures, and reusable solution components for accelerating proposal development Required Skills: 6-10 years of experience in data architecture, data engineering, or solution architecture roles with at least 5 years in telecommunications industry Deep knowledge of TMforum frameworks including SID (Shared Information/Data Model), eTOM, TAM, and their practical implementation in telecom data architectures Demonstrated ability to estimate project efforts, resource requirements, and implementation timelines for complex telecom data initiatives Hands-on experience building data models and platforms aligned with TMforum standards and telecommunications business processes Strong understanding of telecom OSS/BSS systems, network management, customer experience management, and revenue management domains Hands-on experience with data platforms including Databricks, and Microsoft Azure in telecommunications contexts Experience with modern data processing frameworks such as Apache Kafka, Spark and Airflow for real-time telecom data streaming Proficiency in Azure cloud platform and its respective data services with an understanding of telecom-specific deployment requirements Knowledge of system monitoring and observability tools for telecommunications data infrastructure Experience implementing automated testing frameworks for telecom data platforms and pipelines Familiarity with telecom data integration patterns, ETL/ELT processes, and data governance practices specific to telecommunications Experience designing and implementing data lakes, data warehouses, and machine learning pipelines for telecom use cases Proficiency in programming languages commonly used in data processing (Python, Scala, SQL) with telecom domain applications Understanding of telecommunications regulatory requirements and data privacy compliance (GDPR, local data protection laws) Excellent communication and presentation skills with ability to explain complex technical concepts to telecom stakeholders Strong problem-solving skills and ability to think creatively to address telecommunications industry challenges Good to have TMforum certifications or telecommunications industry certifications Relevant data platform certifications such as Databricks, Azure Data Engineer are a plus Willingness to travel as required Educational Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. Visit us: https://kadellabs.com/ https://in.linkedin.com/company/kadel-labs https://www.glassdoor.co.in/Overview/Working-at-Kadel-Labs-EI_IE4991279.11,21.htm
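As one illustration of the real-time telecom streaming stack named above (Apache Kafka with Spark on Databricks), here is a minimal Structured Streaming sketch; the broker address, topic name, event schema, and paths are hypothetical, and the Spark Kafka connector is assumed to be available on the cluster.

```python
# Illustrative sketch; broker addresses, topic names, schema, and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.getOrCreate()

event_schema = StructType([
    StructField("cell_id", StringType()),
    StructField("kpi_name", StringType()),
    StructField("kpi_value", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Consume network performance events from Kafka (requires the Spark Kafka connector).
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")
         .option("subscribe", "network-kpis")
         .load()
         .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
         .select("e.*")
)

# Land the parsed stream in a Delta table for downstream network analytics.
query = (
    events.writeStream.format("delta")
          .option("checkpointLocation", "/mnt/checkpoints/network_kpis")
          .outputMode("append")
          .toTable("telecom.network_kpis")
)
```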
Posted 3 hours ago
8.0 - 12.0 years
5 - 10 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Teradata to Snowflake and Databricks migration on Azure Cloud; data migration projects, including complex migrations to Databricks; strong expertise in ETL pipeline design and optimization, particularly for cloud environments and large-scale data migration.
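For flavor, a minimal sketch of one step in such a migration: extracting a Teradata table over JDBC and landing it as a Delta table on Databricks. The connection URL, credentials, and table names are hypothetical, and the Teradata JDBC driver is assumed to be installed on the cluster.

```python
# Illustrative migration sketch; connection details and table names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Extract a source table from Teradata over JDBC.
source = (
    spark.read.format("jdbc")
         .option("url", "jdbc:teradata://teradata-host/DATABASE=sales")
         .option("dbtable", "sales.orders")
         .option("user", "etl_user")
         .option("password", "***")
         .load()
)

# Land it in the Azure Databricks lakehouse; loading into Snowflake would be a separate step.
source.write.format("delta").mode("overwrite").saveAsTable("bronze.orders_from_teradata")
```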
Posted 3 hours ago
12.0 - 20.0 years
0 - 0 Lacs
Noida, Hyderabad, Bengaluru
Hybrid
We at EMIDS are hiring for a Sr. Data Architect role. Please find the details below and share your interest at aarati.pardhi@emids.com. Job Description: We are looking for a highly experienced Senior Data Architect with a strong background in big data technologies, cloud platforms, advanced analytics, and AI. The ideal candidate will lead the end-to-end design and architecture of scalable, high-performance data platforms using PySpark, Databricks, and major cloud platforms (Azure/AWS/GCP). A strong understanding of AI/ML pipeline integration and enterprise data strategy is essential. Key Responsibilities: Should have experience in building Data and AI products. Lead the data architecture design across modern data platforms using PySpark, Databricks, and cloud-native technologies. Define data models, data flows, and architecture blueprints aligned with business and analytical requirements. Architect and optimize big data pipelines and AI/ML workflows, ensuring performance, scalability, and reliability. Collaborate with business stakeholders, data scientists, and engineers to enable advanced analytics and predictive modeling capabilities. Design and implement data lakehouses, ingestion frameworks, and transformation layers. Provide technical leadership and mentoring to data engineers and developers. Drive adoption of data governance, security, and metadata management practices. Evaluate emerging technologies and recommend tools to support enterprise data strategies.
Posted 3 hours ago
2.0 - 5.0 years
5 - 8 Lacs
Chennai
Hybrid
Role & responsibilities: We are seeking a skilled SQL Developer with strong experience in Databricks and Power BI to join our data engineering and analytics team. The ideal candidate will have a solid foundation in SQL development, hands-on experience with Databricks for data processing, and proficiency in creating insightful dashboards using Power BI. Key Responsibilities: Design, develop, and optimize SQL queries, stored procedures, and data pipelines. Develop and maintain scalable data workflows using Azure Databricks. Integrate, transform, and consolidate data from various sources into data warehouses or lakes. Create, manage, and publish interactive dashboards and reports using Power BI. Work closely with data engineers, analysts, and business stakeholders to understand requirements and translate them into data solutions. Ensure data quality, integrity, and security in all deliverables. Troubleshoot performance issues and recommend solutions to optimize data processing and reporting performance. Required Skills: Strong proficiency in SQL, including query optimization and data modeling. Hands-on experience with Databricks (preferably Azure Databricks). Proficiency in Power BI dashboard creation, DAX, Power Query, and data visualization. Familiarity with ETL/ELT processes and tools. Experience with cloud platforms (preferably Azure). Understanding of data warehousing concepts and architecture.
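As a small illustration of the SQL-on-Databricks work described above, here is a sketch of a reporting transformation run through Spark SQL; the database, table, and column names are hypothetical, and CREATE OR REPLACE TABLE assumes a Delta-backed catalog such as Databricks provides.

```python
# Minimal sketch; database, table, and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# A typical SQL transformation step: consolidate cleansed orders into a reporting
# table that a Power BI dataset can refresh from.
spark.sql("""
    CREATE OR REPLACE TABLE reporting.daily_sales AS
    SELECT order_date,
           region,
           SUM(amount)     AS total_sales,
           COUNT(order_id) AS order_count
    FROM   silver.orders
    WHERE  order_status = 'COMPLETED'
    GROUP  BY order_date, region
""")
```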
Posted 4 hours ago
6.0 - 9.0 years
22 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
We are looking for "Sr. Azure DevOps Engineer" with Minimum 6 years experience Contact- Atchaya (95001 64554) Required Candidate profile Exp in DevOps’s role, Data bricks, Terraform, Ansible, API Troubleshooting Azure platform issues. Snowflake provisioning and configuration skills
Posted 5 hours ago
5.0 - 8.0 years
7 - 17 Lacs
Chennai
Work from Office
Key Responsibilities: Architect and implement end-to-end data solutions using Azure services (Data Factory, Databricks, Data Lake, Synapse, Cosmos DB, etc.). Design robust and scalable data models, including relational, dimensional, and NoSQL schemas. Develop and optimize ETL/ELT pipelines and data lakes using Azure Data Factory, Databricks, and open formats such as Delta and Iceberg. Integrate data governance, quality, and security best practices into all architecture designs. Support analytics and machine learning initiatives through structured data pipelines and platforms. Perform data manipulation and analysis using Pandas, NumPy, and related Python libraries. Develop and maintain high-performance REST APIs using FastAPI or Flask. Ensure data integrity, quality, and availability across various sources. Integrate data workflows with application components to support real-time or scheduled processes. Collaborate with data engineers, analysts, data scientists, and business stakeholders to align solutions with business needs. Drive CI/CD integration with Databricks using Azure DevOps and tools like DBT. Monitor system performance, troubleshoot issues, and optimize data infrastructure for efficiency and reliability. Communicate technical concepts effectively to non-technical stakeholders. Required Skills & Experience: Extensive hands-on experience with Azure services: Data Factory, Databricks, Data Lake, Azure SQL, Cosmos DB, Synapse. Expertise in data modeling and design (relational, dimensional, NoSQL). Proven experience with ETL/ELT processes, data lakes, and modern lakehouse architectures. Solid experience with SQL Server or any major RDBMS; ability to write complex queries and stored procedures. 3+ years of experience with Azure Data Factory, Azure Databricks, and PySpark. Strong programming skills in Python, with a solid understanding of Pandas and NumPy. Proven experience in building REST APIs. Good knowledge of data formats (JSON, Parquet, Avro) and API communication patterns. Strong knowledge of data governance, security, and compliance frameworks. Experience with CI/CD, Azure DevOps, and infrastructure as code (Terraform or ARM templates). Familiarity with BI and analytics tools such as Power BI or Tableau. Strong problem-solving skills and attention to performance, scalability, and security. Excellent communication skills, both written and verbal. Preferred Qualifications: Experience in regulated industries (finance, healthcare, etc.). Familiarity with data cataloging, metadata management, and machine learning integration. Leadership experience guiding teams and presenting architectural strategies to leadership.
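Since the role combines Pandas-based analysis with REST APIs built on FastAPI, here is a minimal hedged sketch of what such an endpoint could look like; the dataset and route are invented for illustration, and in practice the data would come from a governed source such as Azure SQL or a Delta table.

```python
# Minimal sketch; the data source and endpoint shape are hypothetical.
import pandas as pd
from fastapi import FastAPI

app = FastAPI()

# Stand-in data; a real service would load from Azure SQL, Delta, or similar.
_sales = pd.DataFrame(
    {"region": ["north", "south", "north"], "amount": [120.0, 80.0, 40.0]}
)

@app.get("/sales/summary")
def sales_summary():
    """Return total sales per region as JSON for downstream consumers."""
    summary = _sales.groupby("region", as_index=False)["amount"].sum()
    return summary.to_dict(orient="records")
```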
Posted 6 hours ago
7.0 - 12.0 years
20 - 27 Lacs
Bengaluru
Work from Office
TECHNICAL SKILLS AND EXPERIENCE Most important: 7+ years of professional experience as a data engineer, with at least 4 utilizing cloud technologies. Proven experience building ETL or ELT data pipelines with Databricks in either Azure or AWS using PySpark. Strong experience with the Microsoft Azure data stack (Databricks, Data Lake Gen2, ADF, etc.). Strong SQL skills and proficiency in Python, adhering to standards such as PEP 8. Proven experience with unit testing and applying appropriate testing methodologies using libraries such as Pytest, Great Expectations, or similar. Demonstrable experience with CI/CD, including release and test automation tools and processes such as Azure DevOps, Terraform, PowerShell and Bash scripting, or similar. Strong understanding of data modeling, data warehousing, and OLAP concepts. Excellent technical documentation skills.
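Since the posting calls out unit testing PySpark transformations with Pytest, here is a minimal sketch of how such a test might look; the transformation, column names, and values are hypothetical examples rather than anything from this role.

```python
# Minimal pytest sketch; the transformation under test is a hypothetical example.
import pytest
from pyspark.sql import SparkSession, functions as F


def add_total_column(df):
    """Transformation under test: total = quantity * unit_price."""
    return df.withColumn("total", F.col("quantity") * F.col("unit_price"))


@pytest.fixture(scope="module")
def spark():
    # Local Spark session so the test runs without a cluster.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()


def test_add_total_column(spark):
    df = spark.createDataFrame([(2, 5.0), (3, 1.5)], ["quantity", "unit_price"])
    result = {row["total"] for row in add_total_column(df).collect()}
    assert result == {10.0, 4.5}
```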
Posted 6 hours ago
8.0 - 10.0 years
11 - 15 Lacs
Bengaluru
Work from Office
Databricks Architect. Should have a minimum of 10+ years of experience. Must-have skills: Databricks, Delta Lake, PySpark or Scala Spark, Unity Catalog. Good-to-have skills: Azure and/or AWS Cloud. Hands-on exposure in: strong experience using Databricks as a lakehouse solution; establishing the Databricks Lakehouse architecture; ingesting and transforming batch and streaming data on the Databricks Lakehouse Platform; orchestrating diverse workloads for the full lifecycle, including Delta Live Tables, PySpark, etc. Mandatory Skills: Databricks - Data Engineering. Experience: 8-10 years.
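As a brief illustration of streaming ingestion on the Databricks Lakehouse Platform, here is a minimal Auto Loader sketch; the landing path, schema location, checkpoint location, and table name are hypothetical.

```python
# Illustrative sketch of streaming ingestion with Databricks Auto Loader;
# paths, formats, and table names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

bronze_stream = (
    spark.readStream.format("cloudFiles")               # Databricks Auto Loader
         .option("cloudFiles.format", "json")
         .option("cloudFiles.schemaLocation", "/mnt/schemas/events")
         .load("/mnt/landing/events/")
)

(
    bronze_stream.writeStream
        .option("checkpointLocation", "/mnt/checkpoints/events_bronze")
        .toTable("lakehouse.bronze_events")              # governed via Unity Catalog in practice
)
```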
Posted 7 hours ago
10.0 - 14.0 years
30 - 37 Lacs
Noida
Hybrid
Required Qualifications: Undergraduate degree or equivalent experience 5+ years of work experience on Big Data skills 5+ years of experience managing the team 5+ years of work experience on people management skills 3+ years of work experience on Azure Cloud skills Experience or knowledge Azure Cloud, Databricks, Terraform, CI/CD, Spark, Scala, Java, Hbase, Hive, Sqoop, GitHub, Jenkins, Elastic Search, Grafana, UNIX, SQL, OpenShift, Kubernetes and Oozie etc. Solid technical knowledge and work experience on Big Data skills and Azure Cloud skills Primary Responsibilities: Designing and developing large-scale data processing systems. Use the expertise in big data technologies to ensure that the systems are efficient, scalable, and secure Ensuring that the developed systems are running smoothly. Monitor system performance, diagnose and troubleshoot issues, and make necessary changes to optimize system performance Processing, cleaning, and integrating large data sets from various sources to ensure that the data is accurate, complete, and consistent Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders. Collaborate with these teams to ensure that the systems they develop meet the organizations requirements and can support its goals Collaborating closely with senior stakeholders to understand business requirements and effectively translate them into technical requirements for the development team Planning and documenting comprehensive technical specifications for features or system design, ensuring a clear roadmap for development and implementation Designing, building, and configuring applications to meet business process and application requirements, leveraging your technical expertise and problem-solving skills Directing the development team in all aspects of the software development life cycle, including design, development, coding, testing, and debugging, to deliver high-quality solutions Writing testable, scalable, and efficient code, leading by example, and setting coding standards for the team Conducting code reviews and providing constructive feedback to ensure code quality and adherence to best practices Mentoring and guiding junior team members, fostering their professional growth, and encouraging the adoption of industry best practices Ensuring that software quality standards are met by enforcing code standards, conducting rigorous testing, and implementing continuous improvement processes Staying updated with the latest technologies and industry trends, continuously enhancing technical skills, and driving innovation within the development team Set and communicate team priorities that support the broader organization's goals. Align strategy, processes, and decision-making across teams Set clear expectations with individuals based on their level and role and aligned to the broader organization's goals. Meet regularly with individuals to discuss performance and development and provide feedback and coaching Develop the long-term technical vision and roadmap within, and often beyond, the scope of your teams. Evolve the roadmap to meet anticipated future requirements and infrastructure needs. 
Identify, navigate, and overcome technical and organizational barriers that may stand in the way of delivery Constantly improve the processes and practices around development and delivery Always think customer first, including striving to outperform their expectations Effectively work with Product Managers, Program Managers and other stakeholders to ensure the customer is benefiting from the work Foster and facilitate Agile methodologies globally and work in an agile environment using SCRUM or Kanban Work with Program Managers/leads to consume product backlog and generate technical design Leading by example on design and development of platform features Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Posted 8 hours ago
0.0 - 5.0 years
0 Lacs
Pune
Remote
The candidate must be proficient in Python and its libraries and frameworks; good with data modeling, PySpark, MySQL concepts, Power BI, and AWS/Azure concepts; experienced in optimizing large transactional databases; and familiar with data visualization tools, Databricks, and FastAPI.
Posted 23 hours ago
7.0 - 12.0 years
16 - 31 Lacs
Pune, Delhi / NCR, Mumbai (All Areas)
Hybrid
Job Title: Lead Data Engineer Job Summary The Lead Data Engineer will provide technical expertise in analysis, design, development, rollout and maintenance of data integration initiatives. This role will contribute to implementation methodologies and best practices, as well as work on project teams to analyse, design, develop and deploy business intelligence / data integration solutions to support a variety of customer needs. This position oversees a team of Data Integration Consultants at various levels, ensuring their success on projects, goals, trainings and initiatives though mentoring and coaching. Provides technical expertise in needs identification, data modelling, data movement and transformation mapping (source to target), automation and testing strategies, translating business needs into technical solutions with adherence to established data guidelines and approaches from a business unit or project perspective whilst leveraging best fit technologies (e.g., cloud, Hadoop, NoSQL, etc.) and approaches to address business and environmental challenges Works with stakeholders to identify and define self-service analytic solutions, dashboards, actionable enterprise business intelligence reports and business intelligence best practices. Responsible for repeatable, lean and maintainable enterprise BI design across organizations. Effectively partners with client team. Leadership not only in the conventional sense, but also within a team we expect people to be leaders. Candidate should elicit leadership qualities such as Innovation, Critical thinking, optimism/positivity, Communication, Time Management, Collaboration, Problem-solving, Acting Independently, Knowledge sharing and Approachable. Responsibilities: Design, develop, test, and deploy data integration processes (batch or real-time) using tools such as Microsoft SSIS, Azure Data Factory, Databricks, Matillion, Airflow, Sqoop, etc. Create functional & technical documentation e.g. ETL architecture documentation, unit testing plans and results, data integration specifications, data testing plans, etc. Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow, conceptual, logical, and physical data models based on those needs. Perform data analysis to validate data models and to confirm ability to meet business needs. May serve as project or DI lead, overseeing multiple consultants from various competencies Stays current with emerging and changing technologies to best recommend and implement beneficial technologies and approaches for Data Integration Ensures proper execution/creation of methodology, training, templates, resource plans and engagement review processes Coach team members to ensure understanding on projects and tasks, providing effective feedback (critical and positive) and promoting growth opportunities when appropriate. Coordinate and consult with the project manager, client business staff, client technical staff and project developers in data architecture best practices and anything else that is data related at the project or business unit levels Architect, design, develop and set direction for enterprise self-service analytic solutions, business intelligence reports, visualisations and best practice standards. Toolsets include but not limited to: SQL Server Analysis and Reporting Services, Microsoft Power BI, Tableau and Qlik. 
Work with report team to identify, design and implement a reporting user experience that is consistent and intuitive across environments, across report methods, defines security and meets usability and scalability best practices. Required Qualifications: 10 Years industry implementation experience with data integration tools such as AWS services Redshift, Athena, Lambda, Glue, S3, ETL, etc. 5-8 years of management experience required 5-8 years consulting experience preferred Minimum of 5 years of data architecture, data modelling or similar experience Bachelor’s degree or equivalent experience, Master’s Degree Preferred Strong data warehousing, OLTP systems, data integration and SDLC Strong experience in orchestration & working experience cloud native / 3rd party ETL data load orchestration Understanding and experience with major Data Architecture philosophies (Dimensional, ODS, Data Vault, etc.) Understanding of on premises and cloud infrastructure architectures (e.g. Azure, AWS, GCP) Strong experience in Agile Process (Scrum cadences, Roles, deliverables) & working experience in either Azure DevOps, JIRA or Similar with Experience in CI/CD using one or more code management platforms Strong databricks experience required to create notebooks in pyspark Experience using major data modelling tools (examples: ERwin, ER/Studio, PowerDesigner, etc.) Experience with major database platforms (e.g. SQL Server, Oracle, Azure Data Lake, Hadoop, Azure Synapse/SQL Data Warehouse, Snowflake, Redshift etc.) Strong experience in orchestration & working experience in either Data Factory or HDInsight or Data Pipeline or Cloud composer or Similar Understanding and experience with major Data Architecture philosophies (Dimensional, ODS, Data Vault, etc.) Understanding of modern data warehouse capabilities and technologies such as real-time, cloud, Big Data. Understanding of on premises and cloud infrastructure architectures (e.g. Azure, AWS, GCP) Strong experience in Agile Process (Scrum cadences, Roles, deliverables) & working experience in either Azure DevOps, JIRA or Similar with Experience in CI/CD using one or more code management platforms 3-5 years’ development experience in decision support / business intelligence environments utilizing tools such as SQL Server Analysis and Reporting Services, Microsoft’s Power BI, Tableau, looker etc. Preferred Skills & Experience: Knowledge and working experience with Data Integration processes, such as Data Warehousing, EAI, etc. Experience in providing estimates for the Data Integration projects including testing, documentation, and implementation Ability to analyse business requirements as they relate to the data movement and transformation processes, research, evaluation and recommendation of alternative solutions. Ability to provide technical direction to other team members including contractors and employees. Ability to contribute to conceptual data modelling sessions to accurately define business processes, independently of data structures and then combines the two together. Proven experience leading team members, directly or indirectly, in completing high-quality major deliverables with superior results Demonstrated ability to serve as a trusted advisor that builds influence with client management beyond simply EDM. Can create documentation and presentations such that the they “stand on their own” Can advise sales on evaluation of Data Integration efforts for new or existing client work. Can contribute to internal/external Data Integration proof of concepts. 
Demonstrates ability to create new and innovative solutions to problems that have previously not been encountered. Ability to work independently on projects as well as collaborate effectively across teams Must excel in a fast-paced, agile environment where critical thinking and strong problem solving skills are required for success Strong team building, interpersonal, analytical, problem identification and resolution skills Experience working with multi-level business communities Can effectively utilise SQL and/or available BI tool to validate/elaborate business rules. Demonstrates an understanding of EDM architectures and applies this knowledge in collaborating with the team to design effective solutions to business problems/issues. Effectively influences and, at times, oversees business and data analysis activities to ensure sufficient understanding and quality of data. Demonstrates a complete understanding of and utilises DSC methodology documents to efficiently complete assigned roles and associated tasks. Deals effectively with all team members and builds strong working relationships/rapport with them. Understands and leverages a multi-layer semantic model to ensure scalability, durability, and supportability of the analytic solution. Understands modern data warehouse concepts (real-time, cloud, Big Data) and how to enable such capabilities from a reporting and analytic stand-point. Demonstrated ability to serve as a trusted advisor that builds influence with client management beyond simply EDM.
Posted 2 days ago
4.0 - 9.0 years
10 - 20 Lacs
Kolkata, Pune, Bengaluru
Work from Office
About Client: Hiring for one of the most prestigious multinational corporations! Job Description. Job Title: Azure Data Engineer. Qualification: Any Graduate or above. Relevant Experience: 4 to 10 yrs. Must Have Skills: Azure, ADB, PySpark. Roles and Responsibilities: Strong experience in implementation and management of a lakehouse using Databricks and the Azure tech stack (ADLS Gen2, ADF, Azure SQL). Strong hands-on expertise with SQL, Python, Apache Spark and Delta Lake. Proficiency in data integration techniques, ETL processes and data pipeline architectures. Demonstrable experience using Git and building CI/CD pipelines for code management. Develop and maintain technical documentation for the platform. Ensure the platform is developed with software engineering, data analytics and data security practices in mind. Develop and optimize data processing and data storage systems, ensuring high performance, reliability, and security. Experience working in Agile methodology and well-versed in using ADO Boards for sprint deliveries. Excellent communication skills and able to communicate technical and business concepts clearly, both verbally and in writing. Ability to work in a team environment and collaborate effectively with all levels by sharing ideas and knowledge. Location: Kolkata, Pune, Mumbai, Bangalore, BBSR. Notice period: Immediate / 90 days. Shift Timing: General Shift. Mode of Interview: Virtual. Mode of Work: WFO. Thanks & Regards, Bhavana B, Black and White Business Solutions Pvt. Ltd., Bangalore, Karnataka, India. Direct Number: 8067432454, bhavana.b@blackwhite.in | www.blackwhite.in
Posted 2 days ago
12.0 - 16.0 years
1 - 1 Lacs
Hyderabad
Remote
We're Hiring: Azure Data Factory (ADF) Developer (Hyderabad). Location: Onsite at Canopy One Office, Hyderabad / Remote. Type: Full-time/Part-time/Contract | Offshore role | Must be available to work in Eastern Time Zone (EST). We're looking for an experienced ADF Developer to join our offshore team supporting a major client. This role focuses on building robust data pipelines using Azure Data Factory (ADF) and working closely with client stakeholders on transformation logic and data movement. Key Responsibilities: Design, build, and manage ADF data pipelines. Implement transformations and aggregations based on the mappings provided. Work with data from the bronze (staging) area, pre-loaded via Boomi. Collaborate with client-side data managers (based in EST) to deliver clean, reliable datasets. Requirements: Proven hands-on experience with Azure Data Factory. Strong understanding of ETL workflows and data transformation. Familiarity with data staging/bronze layer concepts. Willingness to work Eastern Time Zone (EST) hours. Preferred Qualifications: Knowledge of Kimball data warehousing (a huge advantage!). Experience working in an offshore coordination model. Exposure to Boomi is a plus.
Posted 2 days ago
5.0 - 7.0 years
18 - 25 Lacs
Hyderabad
Work from Office
Databricks, Azure, BigQuery (need to be good with SQL), Python; familiarity with data science concepts or implementations.
Posted 2 days ago
3.0 - 8.0 years
6 - 14 Lacs
Ahmedabad
Work from Office
Role & responsibilities: Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack. Provide forward-thinking solutions in the data engineering and analytics space. Collaborate with DW/BI leads to understand new ETL pipeline development requirements. Triage issues to find gaps in existing pipelines and fix them. Work with the business to understand reporting-layer needs and develop data models to fulfill them. Help junior team members resolve issues and technical challenges. Drive technical discussions with the client architect and team members. Orchestrate the data pipelines in a scheduler via Airflow. Preferred candidate profile: Bachelor's and/or master's degree in computer science or equivalent experience. Deep understanding of star and snowflake dimensional modelling. Strong knowledge of data management principles. Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture. Should have hands-on experience in SQL, Python and Spark (PySpark). Candidate must have experience in the AWS/Azure stack. Desirable to have ETL with batch and streaming (Kinesis). Experience in building ETL / data warehouse transformation processes. Experience with Apache Kafka for use with streaming / event-based data. Experience with other open-source big data products such as Hadoop (incl. Hive, Pig, Impala). Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j). Experience working with structured and unstructured data, including imaging and geospatial data. Experience working in a DevOps environment with tools such as Terraform, CircleCI, Git. Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning and troubleshooting. Databricks Certified Data Engineer Associate/Professional certification (desirable). Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects. Should have experience working in Agile methodology. Strong verbal and written communication skills. Strong analytical and problem-solving skills with high attention to detail.
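Since the posting mentions orchestrating data pipelines via Airflow, here is a minimal sketch of an Airflow DAG that triggers a Databricks notebook run; it assumes Airflow 2.x with the Databricks provider installed, and the connection ID, cluster spec, and notebook path are hypothetical.

```python
# Minimal Airflow 2.x sketch; notebook path, cluster spec, and connection ID are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

with DAG(
    dag_id="daily_lakehouse_refresh",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",   # run nightly at 02:00
    catchup=False,
) as dag:

    transform = DatabricksSubmitRunOperator(
        task_id="run_transform_notebook",
        databricks_conn_id="databricks_default",
        new_cluster={
            "spark_version": "13.3.x-scala2.12",
            "node_type_id": "Standard_DS3_v2",
            "num_workers": 2,
        },
        notebook_task={"notebook_path": "/Repos/etl/transform_sales"},
    )
```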
Posted 2 days ago
5.0 - 10.0 years
9 - 19 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Key Responsibilities: Work on client projects to deliver AWS, PySpark, and Databricks based data engineering and analytics solutions. Build and operate very large data warehouses or data lakes. Perform ETL optimization, design, coding, and tuning of big data processes using Apache Spark. Build data pipelines and applications to stream and process datasets at low latencies. Show efficiency in handling data: tracking data lineage, ensuring data quality, and improving discoverability of data. Technical Experience: Minimum of 5 years of experience in Databricks engineering solutions on AWS Cloud platforms using PySpark, Databricks SQL, and data pipelines using Delta Lake. Minimum of 5 years of experience in ETL, Big Data/Hadoop, and data warehouse architecture and delivery. Email at: maya@mounttalent.com
Posted 2 days ago
7.0 - 10.0 years
19 - 27 Lacs
Hyderabad
Hybrid
We are seeking a highly skilled and experienced Data Engineer to join our team. The ideal candidate will have a strong background in programming, data management, and cloud infrastructure, with a focus on designing and implementing efficient data solutions. This role requires a minimum of 5+ years of experience and a deep understanding of Azure services, infrastructure, and ETL/ELT solutions. Key Responsibilities: Azure Infrastructure Management: Own and maintain all aspects of Azure infrastructure, recommending modifications to enhance reliability, availability, and scalability. Security Management: Manage security aspects of Azure infrastructure, including network, firewall, private endpoints, encryption, PIM, and permissions management using Azure RBAC and Databricks roles. Technical Troubleshooting: Diagnose and troubleshoot technical issues in a timely manner, identifying root causes and providing effective solutions. Infrastructure as Code: Create and maintain Azure Infrastructure as Code using Terraform and GitHub Actions. CI/CD Pipelines: Configure and maintain CI/CD pipelines using GitHub Actions for various Azure services such as ADF, Databricks, Storage, and Key Vault. Programming Expertise: Utilize your expertise in programming languages such as Python to develop and maintain data engineering solutions. Generative AI and Language Models: Knowledge of Language Models (LLMs) and Generative AI is a plus, enabling the integration of advanced AI capabilities into data workflows. Real-Time Data Streaming: Use Kafka for real-time data streaming and integration, ensuring efficient data flow and processing. Data Management: Proficiency in Snowflake for data wrangling and management, optimizing data structures for analysis. DBT Utilization: Build and maintain data marts and views using DBT, ensuring data is structured for optimal analysis. ETL/ELT Solutions: Design ETL/ELT solutions using tools like Azure Data Factory and Azure Databricks, leveraging methodologies to acquire data from various structured or semi-structured source systems. Communication: Strong communication skills to explain technical issues and solutions clearly to the Engineering Lead and key stakeholders (as required). Qualifications: Minimum of 5+ years of experience in designing ETL/ELT solutions using tools like Azure Data Factory and Azure Databricks, or Snowflake. Expertise in programming languages such as Python. Experience with Kafka for real-time data streaming and integration. Proficiency in Snowflake for data wrangling and management. Proven ability to use DBT to build and maintain data marts and views. Experience in creating and maintaining Azure Infrastructure as Code using Terraform and GitHub Actions. Ability to configure, set up, and maintain GitHub for various code repositories. Experience in creating and configuring CI/CD pipelines using GitHub Actions for various Azure services. In-depth understanding of managing security aspects of Azure infrastructure. Strong problem-solving skills and ability to diagnose and troubleshoot technical issues. Excellent communication skills for explaining technical issues and solutions.
Posted 2 days ago
5.0 - 8.0 years
15 - 25 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
We are looking for Azure Administrator for Bangalore / Hyderabad / Chennai / Gurgaon Experience : 5 to 8 Years Location : Bangalore / Hyderabad / Chennai / Gurgaon NP : Immediate to 15 Days only Please send updated resume below mail ID : sumanta.majumdar@infinite.com JD : We are seeking a skilled and proactive Azure Databricks Administrator to manage, monitor, and support our Databricks environment on Microsoft Azure. The ideal candidate will be responsible for system integrations, access control, user support, and CI/CD pipeline administration, ensuring a secure, efficient, and scalable data platform. Key Responsibilities: - System Integration & Monitoring: - Build, monitor, and support integrations between Databricks and enterprise systems such as LogRhythm, ServiceNow, and AppDynamics. - Ensure seamless data flow and alerting mechanisms across integrated platforms. - Security & Access Management: - Administer user and group access to the Databricks environment. - Implement and enforce security policies and role-based access controls (RBAC). - User Support & Enablement: - Provide initial system support and act as a point of contact (POC) for Databricks users. - Assist users with onboarding, workspace setup, and troubleshooting. - Vendor Coordination: - Engage with Databricks vendor support for issue resolution and platform optimization. - Platform Monitoring & Maintenance: - Monitor Databricks usage, performance, and cost. - Ensure the platform is up-to-date with the latest patches and features. - Database & CI/CD Administration: - Manage Databricks database configurations and performance tuning. - Administer and maintain CI/CD pipelines for Databricks notebooks and jobs. Required Skills & Qualifications: - Proven experience administering Azure Databricks in a production environment. - Strong understanding of Azure services, data engineering workflows, and DevOps practices. - Experience with integration tools and platforms like LogRhythm, ServiceNow, and AppDynamics. - Proficiency in CI/CD tools (e.g., Azure DevOps, GitHub Actions). - Familiarity with Databricks REST APIs, Terraform, or ARM templates is a plus. - Excellent problem-solving, communication, and documentation skills.
Posted 3 days ago
5.0 - 8.0 years
9 - 19 Lacs
Hyderabad
Work from Office
We are looking for a skilled Data Engineer with strong expertise in Python, PySpark, SQL, AWS and Databricks to join our data engineering team. The ideal candidate will be responsible for building scalable data pipelines, transforming large datasets, and enabling data-driven decision-making across the organization. Role & responsibilities: Data Pipeline Development: Design, build, and maintain scalable data pipelines for ingesting, processing, and transforming large datasets from diverse sources into usable formats. Performance Optimization: Optimize data processing and storage systems for cost efficiency and high performance, including managing compute resources and cluster configurations. Automation and Workflow Management: Automate data workflows using tools like Airflow, Databricks APIs, and other orchestration technologies to streamline data ingestion, processing, and reporting tasks. Data Quality and Validation: Implement data quality checks, validation rules, and transformation logic to ensure the accuracy, consistency, and reliability of data. Cloud Platform Management: Manage and optimize cloud infrastructure (AWS, Databricks) for data storage, processing, and compute resources, ensuring seamless data operations. Preferred candidate profile: Strong proficiency in Python for scripting and data manipulation. Hands-on experience with PySpark for distributed data processing. Proficient in writing complex SQL queries for large-scale data extraction and transformation. Solid understanding and experience with the AWS cloud ecosystem (especially S3, Glue, EMR, Lambda). Knowledge of data warehousing, data lakes, and ETL/ELT processes. Familiarity with version control tools like Git and workflow orchestration tools (e.g., Airflow) is a plus. Location: Hyderabad (Work From Office). Notice: Immediate or 15 days.
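As a small illustration of the data quality checks described above, here is a hedged PySpark sketch that validates a dataset before promoting it from a bronze to a silver layer on S3; the bucket, paths, columns, and rules are hypothetical.

```python
# Minimal data-quality sketch in PySpark; bucket, paths, and rules are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

orders = spark.read.parquet("s3://example-bucket/bronze/orders/")

# Rule 1: the primary key must not be null. Rule 2: amounts must be non-negative.
null_keys = orders.filter(F.col("order_id").isNull()).count()
bad_amounts = orders.filter(F.col("amount") < 0).count()

if null_keys or bad_amounts:
    raise ValueError(
        f"Data quality failure: {null_keys} null keys, {bad_amounts} negative amounts"
    )

# Only validated data is promoted to the silver layer.
orders.write.mode("overwrite").parquet("s3://example-bucket/silver/orders/")
```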
Posted 3 days ago
8.0 - 13.0 years
25 - 30 Lacs
Chennai
Work from Office
Join us in bringing joy to customer experience. Five9 is a leading provider of cloud contact center software, bringing the power of cloud innovation to customers worldwide. Living our values every day results in our team-first culture and enables us to innovate, grow, and thrive while enjoying the journey together. We celebrate diversity and foster an inclusive environment, empowering our employees to be their authentic selves. The Data Engineer will help design and implement a Google Cloud Platform (GCP) Data Lake, build scalable data pipelines, and ensure seamless access to data for business intelligence and data science tools. They will support a wide range of projects while collaborating closely with management teams and business leaders. The ideal candidate will have a strong understanding of data engineering principles, data warehousing concepts, and the ability to document technical knowledge into clear processes and procedures. This position is based out of one of the offices of our affiliate Acqueon Technologies in India, and will adopt the hybrid work arrangements of that location. You will be a member of the Acqueon team with responsibilities supporting Five9 products, collaborating with global teammates based primarily in the United States. Responsibilities: Design, implement, and maintain a scalable Data Lake on GCP to centralize structured and unstructured data from various sources (databases, APIs, cloud storage). Utilize GCP services including BigQuery, Dataflow, Pub/Sub, and Cloud Storage to optimize and manage data workflows, ensuring scalability, performance, and security. Collaborate closely with data analytics and data science teams to understand data needs, ensuring data is properly prepared for consumption by various systems (e.g. DOMO, Looker, Databricks). Implement best practices for data quality, consistency, and governance across all data pipelines and systems, ensuring compliance with internal and external standards. Continuously monitor, test, and optimize data workflows to improve performance, cost efficiency, and reliability. Maintain comprehensive technical documentation of data pipelines, systems, and architecture for knowledge sharing and future development. Requirements: Bachelor's degree in Computer Science, Data Engineering, Data Science, or a related quantitative field (e.g. Mathematics, Statistics, Engineering). 3+ years of experience using GCP Data Lake and Storage Services. Certifications in GCP are preferred (e.g. Professional Cloud Developer, Professional Cloud Database Engineer). Advanced proficiency with SQL, with experience in writing complex queries, optimizing for performance, and using SQL in large-scale data processing workflows. Proficiency in programming languages such as Python, Java, or Scala, with practical experience building data pipelines, automating data workflows, and integrating APIs for data ingestion. Five9 embraces diversity and is committed to building a team that represents a variety of backgrounds, perspectives, and skills. The more inclusive we are, the better we are. Five9 is an equal opportunity employer. View our privacy policy, including our privacy notice to California residents, here: https://www.five9.com/pt-pt/legal. Note: Five9 will never request that an applicant send money as a prerequisite for commencing employment with Five9.
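As a brief illustration of querying a curated layer of a GCP data lake, here is a minimal sketch using the BigQuery Python client; the project, dataset, and table names are hypothetical.

```python
# Minimal sketch with the BigQuery Python client; project, dataset, and table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

# Aggregate interaction events stored in the data lake's curated BigQuery dataset.
query = """
    SELECT DATE(event_time) AS event_date, COUNT(*) AS interactions
    FROM `example-project.curated.contact_center_events`
    GROUP BY event_date
    ORDER BY event_date
"""

for row in client.query(query).result():
    print(row.event_date, row.interactions)
```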
Posted 3 days ago
8.0 - 13.0 years
25 - 30 Lacs
Chennai
Work from Office
Join us in bringing joy to customer experience. Five9 is a leading provider of cloud contact center software, bringing the power of cloud innovation to customers worldwide. Living our values every day results in our team-first culture and enables us to innovate, grow, and thrive while enjoying the journey together. We celebrate diversity and foster an inclusive environment, empowering our employees to be their authentic selves. The Data Engineer will help design and implement a Google Cloud Platform (GCP) Data Lake, build scalable data pipelines, and ensure seamless access to data for business intelligence and data science tools. They will support a wide range of projects while collaborating closely with management teams and business leaders. The ideal candidate will have a strong understanding of data engineering principles, data warehousing concepts, and the ability to document technical knowledge into clear processes and procedures. This position is based out of one of the offices of our affiliate Acqueon Technologies in India, and will adopt the hybrid work arrangements of that location. You will be a member of the Acqueon team with responsibilities supporting Five9 products, collaborating with global teammates based primarily in the United States. Responsibilities: Design, implement, and maintain a scalable Data Lake on GCP to centralize structured and unstructured data from various sources (databases, APIs, cloud storage). Utilize GCP services including BigQuery, Dataflow, Pub/Sub, and Cloud Storage to optimize and manage data workflows, ensuring scalability, performance, and security. Collaborate closely with data analytics and data science teams to understand data needs, ensuring data is properly prepared for consumption by various systems (e.g. DOMO, Looker, Databricks). Implement best practices for data quality, consistency, and governance across all data pipelines and systems, ensuring compliance with internal and external standards. Continuously monitor, test, and optimize data workflows to improve performance, cost efficiency, and reliability. Maintain comprehensive technical documentation of data pipelines, systems, and architecture for knowledge sharing and future development. Requirements: Bachelor's degree in Computer Science, Data Engineering, Data Science, or a related quantitative field (e.g. Mathematics, Statistics, Engineering). 4+ years of experience using GCP Data Lake and Storage Services. Certifications in GCP are preferred (e.g. Professional Cloud Developer, Professional Cloud Database Engineer). Advanced proficiency with SQL, with experience in writing complex queries, optimizing for performance, and using SQL in large-scale data processing workflows. Proficiency in programming languages such as Python, Java, or Scala, with practical experience building data pipelines, automating data workflows, and integrating APIs for data ingestion. Five9 embraces diversity and is committed to building a team that represents a variety of backgrounds, perspectives, and skills. The more inclusive we are, the better we are. Five9 is an equal opportunity employer. View our privacy policy, including our privacy notice to California residents, here: https://www.five9.com/pt-pt/legal. Note: Five9 will never request that an applicant send money as a prerequisite for commencing employment with Five9.
Posted 3 days ago
The Databricks job market in India is flourishing, with high demand for professionals skilled in Databricks technology. Companies across various industries are leveraging Databricks to manage and analyze their data effectively. Job seekers with expertise in Databricks can explore a multitude of exciting career opportunities in India.
Here are the top 5 major cities actively hiring for Databricks roles in India: - Bangalore - Pune - Hyderabad - Chennai - Mumbai
The average salary range for Databricks professionals in India varies based on experience level. Entry-level positions can expect a salary ranging from INR 4-6 lakhs per annum, while experienced professionals can earn up to INR 15-20 lakhs per annum.
A typical career progression in Databricks may include roles such as Junior Developer, Senior Developer, and Tech Lead, eventually progressing to roles like Data Engineer, Data Architect, or Data Scientist.
In addition to expertise in Databricks, professionals in this field are often expected to have skills in: - Apache Spark - Python - SQL - Data warehousing - Data visualization tools
As you embark on your journey to explore Databricks jobs in India, remember to equip yourself with the necessary skills and knowledge to stand out in the competitive job market. Prepare diligently, showcase your expertise confidently, and seize the exciting opportunities that await you in the realm of Databricks. Good luck!