
1414 Databricks Jobs - Page 19

JobPe aggregates listings so they are easy to find in one place, but you apply directly on the original job portal.

9.0 - 14.0 years

8 - 12 Lacs

Hyderabad

Work from Office

Responsibilities:
- Develop and maintain a metadata-driven, generic ETL framework for automating ETL code.
- Design, build, and optimize ETL/ELT pipelines using Databricks (PySpark/SQL) on AWS.
- Ingest data from a variety of structured and unstructured sources (APIs, RDBMS, flat files, streaming).
- Develop and maintain robust data pipelines for batch and streaming data using Delta Lake and Spark Structured Streaming.
- Implement data quality checks, validations, and logging mechanisms.
- Optimize pipeline performance, cost, and reliability.
- Collaborate with data analysts, BI, and business teams to deliver fit-for-purpose datasets.
- Support data modeling efforts (star and snowflake schemas, denormalized table approaches) and assist with data warehousing initiatives.
- Work with orchestration tools such as Databricks Workflows to schedule and monitor pipelines.
- Follow best practices for version control, CI/CD, and collaborative development.

Skills:
- Hands-on experience in ETL/data engineering roles.
- Strong expertise in Databricks (PySpark, SQL, Delta Lake); Databricks Data Engineer certification preferred.
- Experience with Spark optimization, partitioning, caching, and handling large-scale datasets.
- Proficiency in SQL and scripting in Python or Scala.
- Solid understanding of data lakehouse/medallion architectures and modern data platforms.
- Experience working with cloud storage systems such as AWS S3.
- Familiarity with DevOps practices (Git, CI/CD, Terraform, etc.).
- Strong debugging, troubleshooting, and performance-tuning skills.
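
To make the stack concrete, here is a minimal PySpark sketch of the kind of batch ingestion-plus-quality-check step this listing describes; the bucket paths, column names, and quality rule are illustrative assumptions, not details from the role.

```python
# Hypothetical batch ETL step: CSV from S3 -> validated -> Delta table.
# Assumes a Databricks (or Delta-enabled) Spark environment.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = (spark.read.option("header", "true")
       .csv("s3://example-bucket/raw/orders/"))  # assumed source path

# Simple data quality gate: abort the load if required keys are missing.
null_keys = raw.filter(F.col("order_id").isNull()).count()
if null_keys > 0:
    raise ValueError(f"{null_keys} rows with null order_id; aborting load")

cleaned = (raw.withColumn("order_ts", F.to_timestamp("order_ts"))
              .dropDuplicates(["order_id"]))

(cleaned.write.format("delta")
        .mode("append")
        .save("s3://example-bucket/silver/orders/"))  # assumed target path
```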

Posted 3 weeks ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience required: 3 years
Educational Qualification: BE or BTech

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. You will play a crucial role in developing solutions that align with the organization's needs and goals.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with cross-functional teams to analyze, design, and develop new features.
- Implement and maintain software applications to meet client needs.
- Troubleshoot and debug applications to enhance performance and user experience.

Professional & Technical Skills:
- Must have: proficiency in the Databricks Unified Data Analytics Platform.
- Strong understanding of cloud-based data analytics platforms.
- Experience in developing and deploying scalable applications.
- Knowledge of data modeling and database design principles.

Additional Information:
- The candidate should have a minimum of 3 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- A BE or BTech degree is required.

Qualification: BE or BTech

Posted 3 weeks ago

Apply

9.0 - 14.0 years

5 - 8 Lacs

Bengaluru

Work from Office

Kafka Data Engineer: build and manage data pipelines that support batch and streaming data solutions. The role requires expertise in creating seamless data flows across platforms such as a Data Lake/Lakehouse on Cloudera, Azure Databricks, and Kafka, for both batch and streaming pipelines.

Responsibilities:
- Develop, test, and maintain data pipelines (batch and streaming) using Cloudera, Spark, Kafka, and Azure services such as ADF, Cosmos DB, Databricks, and NoSQL databases (e.g., MongoDB).
- Strong programming skills in Spark, Python or Scala, and SQL.
- Optimize data pipelines to improve speed, performance, and reliability, ensuring that data is available to consumers as required.
- Create ETL pipelines for downstream consumers by transforming data per business logic.
- Work closely with Data Architects and Data Analysts to align data solutions with business needs and ensure the accuracy and accessibility of data.
- Implement data validation checks and error-handling processes to maintain high data quality and consistency across pipelines.
- Strong analytical and problem-solving skills, with a focus on optimizing data flows and addressing impacts in the pipeline.

Qualifications:
- 8+ years of IT experience, with at least 5+ years in data engineering and cloud-based data platforms.
- Strong experience with Cloudera (or any data lake), Confluent/Apache Kafka, and Azure data services (ADF, Databricks, Cosmos DB).
- Deep knowledge of NoSQL databases (Cosmos DB, MongoDB) and data modeling for performance and scalability.
- Proven expertise in designing and implementing batch and streaming data pipelines using Databricks, Spark, or Kafka.
- Experience creating scalable, reliable, and high-performance data solutions with robust data governance policies.
- Strong collaboration skills to work with stakeholders, mentor junior data engineers, and translate business needs into actionable solutions.
- Bachelor's or master's degree in computer science, IT, or a related field.
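
As an illustration of the streaming side of this role, a hedged sketch of a Kafka-to-Delta pipeline using Spark Structured Streaming; the broker address, topic, and paths are assumptions.

```python
# Hypothetical Kafka -> Delta streaming ingestion on Spark/Databricks.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
          .option("subscribe", "orders")                     # assumed topic
          .load())

# Kafka delivers bytes; cast the value payload to string before parsing.
parsed = events.select(F.col("value").cast("string").alias("json"))

(parsed.writeStream.format("delta")
       .option("checkpointLocation", "/chk/orders")  # needed for recovery
       .start("/delta/bronze/orders"))               # assumed sink path
```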

Posted 3 weeks ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Pune

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience required: 3 years
Educational Qualification: BE or BTech

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. You will work on developing solutions to enhance business operations and streamline processes.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with cross-functional teams to analyze business requirements and translate them into technical solutions.
- Develop and implement software solutions to meet business needs.
- Conduct code reviews and ensure code quality and best practices are followed.
- Troubleshoot and debug applications to optimize performance and enhance user experience.
- Stay updated with the latest technologies and trends in application development.

Professional & Technical Skills:
- Must have: proficiency in the Databricks Unified Data Analytics Platform.
- Strong understanding of data processing and analytics.
- Experience with cloud-based data platforms like AWS or Azure.
- Knowledge of programming languages such as Python, Java, or Scala.
- Hands-on experience in building and deploying applications using the Databricks platform.

Additional Information:
- The candidate should have a minimum of 3 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based at our Pune office.
- A BE or BTech degree is required.

Qualification: BE or BTech

Posted 3 weeks ago

Apply

8.0 - 13.0 years

5 - 9 Lacs

Hyderabad

Work from Office

1. Data Engineer - Azure Data Services
2. Data modelling - NoSQL and SQL
3. Good understanding of Spark and Spark Streaming
4. Hands-on with Python/Pandas, Data Factory, Cosmos DB, Databricks, Event Hubs, Stream Analytics
5. Knowledge of medallion architecture, data vaults, data marts, etc.
6. Preferably Azure Data Associate exam certified
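
Point 5 names the medallion architecture; a small PySpark sketch of the bronze/silver/gold layering follows, with all table paths and the schema invented for illustration.

```python
# Illustrative medallion layering: bronze (raw) -> silver (clean) -> gold (marts).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw events exactly as landed. Assumed columns: event_id, event_ts, event_date.
bronze = spark.read.format("delta").load("/delta/bronze/events")

# Silver: deduplicated and validated records.
silver = (bronze.dropDuplicates(["event_id"])
                .filter(F.col("event_ts").isNotNull()))
silver.write.format("delta").mode("overwrite").save("/delta/silver/events")

# Gold: aggregated, consumption-ready mart.
gold = silver.groupBy("event_date").agg(F.count("*").alias("events"))
gold.write.format("delta").mode("overwrite").save("/delta/gold/daily_events")
```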

Posted 3 weeks ago

Apply

7.0 - 9.0 years

8 - 12 Lacs

Bengaluru

Work from Office

Education: Bachelor of Engineering
Service Line: Data & Analytics Unit

Responsibilities (preferred):
- Proficiency with statistical analysis tools (e.g., SAS, SPSS, R, Python).
- Hands-on experience with statistical techniques and deep learning, such as decision trees, segmentation, logistic and multiple regression, and others.
- Experience in forecasting, campaign management, and customer segmentation is desirable.
- Strong Excel, Access, and PowerPoint skills.
- Familiarity with database query tools (SQL).
- Basic understanding of data warehouse architecture.
- Understanding of various analytical tools, platforms, and frameworks.

Technical and Professional: data science, machine learning; good to have: PySpark/Databricks.
Preferred Skills: Technology - Data Science - Machine Learning
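
As a toy illustration of the logistic regression work mentioned above, a scikit-learn sketch on synthetic data; everything here (features, labels, split) is fabricated for the example.

```python
# Minimal logistic regression on synthetic data, scored by AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))  # four synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```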

Posted 3 weeks ago

Apply

12.0 - 17.0 years

13 - 18 Lacs

Hyderabad

Work from Office

1. Data Engineer - Azure Data Services
2. Data modelling - NoSQL and SQL
3. Good understanding of Spark and Spark Streaming
4. Hands-on with Python/Pandas, Data Factory, Cosmos DB, Databricks, Event Hubs, Stream Analytics
5. Knowledge of medallion architecture, data vaults, data marts, etc.
6. Preferably Azure Data Associate exam certified

Posted 3 weeks ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Kolkata

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience required: 3 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. Your typical day will involve collaborating with team members to develop innovative solutions and ensure seamless application functionality.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Develop and implement software solutions to meet business requirements.
- Collaborate with cross-functional teams to analyze and address technical issues.
- Conduct code reviews and provide feedback to enhance application performance.
- Stay updated on emerging technologies and trends in application development.
- Assist in troubleshooting and resolving application-related issues.

Professional & Technical Skills:
- Must have: proficiency in the Databricks Unified Data Analytics Platform.
- Strong understanding of data analytics and data processing techniques.
- Experience with cloud-based data platforms like AWS or Azure.
- Knowledge of programming languages such as Python, Java, or Scala.
- Hands-on experience in developing and deploying applications using the Databricks platform.

Additional Information:
- The candidate should have a minimum of 3 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based at our Kolkata office.
- 15 years of full-time education is required.

Qualification: 15 years of full-time education

Posted 3 weeks ago

Apply

3.0 - 5.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Education: Bachelor of Engineering
Service Line: Data & Analytics Unit

Responsibilities:
- Responsible for successful delivery of MLOps solutions and services in client consulting environments.
- Define key business problems to be solved; formulate high-level solution approaches and identify data to solve those problems; develop, analyze, draw conclusions, and present to the client.
- Assist clients with operationalization metrics to track performance of ML models.
- Agile-trained to manage team effort and track it through JIRA.
- High-impact communication: assess the target audience's needs, prepare and practice a logical flow, answer audience questions appropriately, and stick to the timeline.

Additional Responsibilities:
- Master's degree in Computer Science/Engineering, with relevant experience in MLOps/cloud.
- Domain experience in capital markets, banking, risk and compliance, etc.; exposure to US/overseas markets is preferred.
- Azure certified: DP-100, AZ-900/AI-900.

Domain / Technical / Tools Knowledge:
- Object-oriented programming, coding standards, architecture and design patterns, config management, package management, logging, and documentation.
- Experience in test-driven development; experience using Pytest frameworks, Git version control, and REST APIs.
- Azure ML best practices in environment management and runtime configurations (Azure ML and Databricks clusters), alerts.
- Experience designing and implementing ML systems and pipelines; MLOps practices.
- Exposure to event-driven orchestration and online model deployment.
- Contribute towards establishing best practices in MLOps systems development.
- Proficiency with data analysis tools (e.g., SQL, R, Python).
- High-level understanding of database concepts/reporting and data science concepts.
- Hands-on experience working with client IT/business teams to gather business requirements and convert them into requirements for the development team.
- Experience managing client relationships and developing business cases for opportunities.
- Azure AZ-900 certification with Azure architecture understanding is a plus.

Technical and Professional:
- Expertise in cloud technologies, specifically MS Azure, with hands-on coding.
- Python programming: expert and experienced (4-5 years).
- DevOps working knowledge with implementation experience (minimum 1-2 projects).
- Hands-on MS Azure cloud knowledge.
- Understand and take requirements on operationalization of ML models from data scientists.
- Help the team with ML pipelines from creation to execution.
- List Azure services required for deployment; Azure Databricks and Azure DevOps setup.
- Assist the team with coding standards (flake8, etc.).
- Guide the team in debugging pipeline failures.
- Engage with business stakeholders with status updates on development progress and issue fixes.
- Automation, technology, and process improvement for deployed projects.
- Set up standards related to coding, pipelines, and documentation.
- Adhere to KPIs/SLAs for pipeline runs and execution.
- Research new topics, services, and enhancements in cloud technologies.

Preferred Skills: Technology - Machine Learning - Python
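
One small piece of the MLOps workflow described above is experiment tracking; a hedged sketch with MLflow (which ships with Databricks) follows, where the run name, parameters, and model are assumptions.

```python
# Hypothetical MLflow tracking of a baseline model run.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="rf-baseline"):
    model = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("test_accuracy", model.score(X_te, y_te))
    mlflow.sklearn.log_model(model, "model")  # model artifact for deployment
```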

Posted 3 weeks ago

Apply

7.0 - 12.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Responsibilities:
- Design, develop, and maintain data pipelines and ETL processes using Databricks.
- Manage and optimize data solutions on cloud platforms such as Azure and AWS.
- Implement big data processing workflows using PySpark.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver effective solutions.
- Ensure data quality and integrity through rigorous testing and validation.
- Optimize and tune big data solutions for performance and scalability.
- Stay updated with the latest industry trends and technologies in big data and cloud computing.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Big Data Engineer or similar role.
- Strong proficiency in Databricks and cloud platforms (Azure/AWS).
- Expertise in PySpark and big data processing.
- Experience with data modeling, ETL processes, and data warehousing.
- Familiarity with cloud services and infrastructure.
- Excellent problem-solving skills and attention to detail.
- Strong communication and teamwork abilities.

Preferred Qualifications:
- Experience with other big data technologies and frameworks.
- Knowledge of machine learning frameworks and libraries.

Posted 3 weeks ago

Apply

7.0 - 12.0 years

4 - 8 Lacs

Pune

Work from Office

1. ETL: Hands-on experience building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend, and Informatica.
2. Big Data: Experience with big data platforms such as Hadoop, Hive, or Snowflake for data storage and processing.
3. Data Warehousing & Database Management: Understanding of data warehousing concepts; relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design.
4. Data Modeling & Design: Good exposure to data modeling techniques; design, optimization, and maintenance of data models and data structures.
5. Languages: Proficient in one or more programming languages commonly used in data engineering, such as Python, Java, or Scala.
6. DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management.

Ab Initio: Experience developing CoOp graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, ExpressIT, Data Profiler, ConductIT, ControlCenter, ContinuousFlows.
Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc.; demonstrable understanding of the underlying architectures and trade-offs.
Data Quality & Controls: Exposure to data validation, cleansing, enrichment, and data controls.
Containerization: Fair understanding of containerization platforms like Docker and Kubernetes.
File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, and Delta.
Others: Basics of job schedulers like Autosys; basics of entitlement management.
Certification in any of the above topics would be an advantage.
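
For the File Formats item, a short PySpark sketch contrasting plain Parquet with Delta (which layers ACID transactions and time travel over Parquet files); the paths are illustrative assumptions.

```python
# Parquet in, Delta out, plus Delta time travel. Paths are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.read.parquet("/data/trades.parquet")  # columnar source file

# Delta adds transactional guarantees and versioning on top of Parquet.
df.write.format("delta").mode("overwrite").save("/delta/trades")

# Time travel: read the table as of an earlier version.
v0 = spark.read.format("delta").option("versionAsOf", 0).load("/delta/trades")
```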

Posted 3 weeks ago

Apply

5.0 - 10.0 years

10 - 14 Lacs

Ahmedabad

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Microsoft Azure Analytics Services
Good-to-have skills: NA
Minimum experience required: 5 years
Educational Qualification: BE

Summary: As an Application Lead for Packaged Application Development, you will be responsible for designing, building, and configuring applications using Microsoft Azure Analytics Services. Your typical day will involve leading the effort to deliver high-quality applications, acting as the primary point of contact for the project team, and ensuring timely delivery of project milestones.

Roles & Responsibilities:
- Lead the effort to design, build, and configure applications using Microsoft Azure Analytics Services.
- Act as the primary point of contact for the project team, ensuring timely delivery of project milestones.
- Collaborate with cross-functional teams to ensure the successful delivery of high-quality applications.
- Provide technical guidance and mentorship to team members, ensuring adherence to best practices and standards.

Professional & Technical Skills:
- Must have: strong experience with Microsoft Azure Analytics Services, plus Databricks and PySpark skills.
- Good to have: experience with other Azure services such as Azure Data Factory, Azure Databricks, and Azure Synapse Analytics.
- Experience in designing, building, and configuring applications using Microsoft Azure Analytics Services.
- Strong understanding of data warehousing concepts and best practices.
- Experience with ETL processes and tools such as SSIS or Azure Data Factory.
- Experience with SQL and NoSQL databases.
- Experience with Agile development methodologies.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Azure Analytics Services.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering high-quality applications.
- This position is based at our Bengaluru office.

Qualification: BE

Posted 3 weeks ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Pune

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Databricks Unified Data Analytics Platform, engineering expertise, data lakehouse development
Good-to-have skills: Google BigQuery
Minimum experience required: 5 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application development aligns with business objectives, overseeing project timelines, and facilitating communication among stakeholders to drive project success. You will also engage in problem-solving activities, providing guidance and support to your team while ensuring that best practices are followed throughout the development process. Your role will be pivotal in shaping the direction of application projects and ensuring that they meet the needs of the organization and its clients.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate training and development opportunities for team members to enhance their skills.
- Monitor project progress and implement necessary adjustments to ensure timely delivery.

Professional & Technical Skills:
- Must have: proficiency in the Databricks Unified Data Analytics Platform.
- Good to have: experience with Google BigQuery.
- Strong understanding of data integration and ETL processes.
- Experience with cloud-based data solutions and architecture.
- Familiarity with data governance and compliance standards.

Additional Information:
- The candidate should have a minimum of 5 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based in Pune.
- 15 years of full-time education is required.

Qualification: 15 years of full-time education

Posted 3 weeks ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Business Agility
Minimum experience required: 3 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Additionally, you will monitor and optimize data workflows to enhance performance and reliability, ensuring that data is accessible and actionable for stakeholders.

Roles & Responsibilities:
- Requires a Databricks resource with Azure cloud experience.
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with data architects and analysts to design scalable data solutions.
- Implement best practices for data governance and security throughout the data lifecycle.

Professional & Technical Skills:
- Must have: proficiency in the Databricks Unified Data Analytics Platform.
- Good to have: experience with Business Agility.
- Strong understanding of data modeling and database design principles.
- Experience with data integration tools and ETL processes.
- Familiarity with cloud platforms and services related to data storage and processing.

Additional Information:
- The candidate should have a minimum of 3 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based at our Pune office.
- 15 years of full-time education is required.

Qualification: 15 years of full-time education

Posted 3 weeks ago

Apply

10.0 - 15.0 years

12 - 17 Lacs

Mumbai

Work from Office

The leader must demonstrate an ability to anticipate, understand, and act on evolving customer needs, both stated and unstated. Through this, the candidate must create a customer-centric organization and use innovative thinking frameworks to foster value-added relations. With the right balance of bold initiatives, continuous improvement, and governance, the leader must adhere to the delivery standards set by the client and eClerx by leveraging knowledge of market drivers and competition to effectively anticipate trends and opportunities. The leader must also demonstrate a capacity to transform, align, and energize organization resources, and take appropriate risks to lead the organization in a new direction. As a leader, the candidate must build engaged and high-impact direct, virtual, and cross-functional teams, take the lead in raising the performance bar, build capability, and bring out the best in their teams. By collaborating and forging partnerships both within and outside the functional area, the leader must work towards a shared vision and achieve positive business outcomes.

Associate Program Manager role and responsibilities:
- Represent eClerx in client pitches and external forums.
- Own platform and expertise through various COE activities and content generation to promote practice and business development.
- Lead continuous research and assessments to explore the best and latest platforms, approaches, and methodologies.
- Contribute to developing the practice area through best practices, ideas, and points of view in the form of white papers and micro-articles.
- Lead/partner in multi-discipline assessments and workshops at client sites to identify new opportunities.
- Lead key projects and provide development/technical leadership to junior resources.
- Drive solution design and build to ensure scalability, performance, and reuse.
- Design robust data architectures, considering performance, data quality, scalability, and data latency requirements.
- Recommend and drive consensus around preferred data integration and platform approaches, including Azure and Snowflake.
- Anticipate data bottlenecks (latency, quality, speed) and recommend appropriate remediation strategies.
- This is a hands-on position with a significant development component; the ideal candidate is expected to lead the technical development and delivery of highly visible and strategic projects.

Technical and functional skills:
- Bachelor's degree, with at least 2-3 large-scale cloud implementations within the Retail, Manufacturing, or Technology industries.
- 10+ years of overall experience with data management and cloud engineering.
- Expertise in Azure Cloud, Azure Data Lake, Databricks, Snowflake, Teradata, and compatible ETL technologies.
- Strong attention to detail and ability to collaborate with multiple parties, including analysts, data subject matter experts, external labs, etc.

Posted 3 weeks ago

Apply

7.0 - 12.0 years

9 - 14 Lacs

Mumbai

Work from Office

The candidate must possess knowledge relevant to the functional area, act as a subject matter expert in providing advice in the area of expertise, and focus on continuous improvement for maximum efficiency. It is vital to maintain a high standard of delivery excellence, provide top-notch service quality, and develop successful long-term business partnerships with internal and external customers by identifying and fulfilling customer needs. He/she should be able to break down complex problems into logical and manageable parts in a systematic way, generate and compare multiple options, and set priorities to resolve problems. The ideal candidate must be proactive and go beyond expectations to achieve job results and create new opportunities. He/she must positively influence the team, motivate high performance, promote a friendly climate, give constructive feedback, provide development opportunities, and manage the career aspirations of direct reports. Communication skills are key here: to explain organizational objectives, assignments, and the big picture to the team, and to articulate the team vision and clear objectives.

Senior Process Manager roles and responsibilities:
We are seeking a talented and motivated Data Engineer to join our dynamic team. The ideal candidate will have a deep understanding of data integration processes and experience in developing and managing data pipelines using Python, SQL, and PySpark within Databricks. You will be responsible for designing robust backend solutions, implementing CI/CD processes, and ensuring data quality and consistency.

- Data Pipeline Development: Use Databricks features to explore raw datasets and understand their structure. Create and optimize Spark-based workflows. Create end-to-end data processing pipelines, including ingesting raw data, transforming it, and running analyses on the processed data. Create and maintain data pipelines using Python and SQL.
- Solution Design and Architecture: Design and architect backend solutions for data integration, ensuring they are robust, scalable, and aligned with business requirements. Implement data processing pipelines using various technologies, including cloud platforms, big data tools, and streaming frameworks.
- Automation and Scheduling: Automate data integration processes and schedule jobs on servers to ensure seamless data flow.
- Data Quality and Monitoring: Develop and implement data quality checks and monitoring systems to ensure data accuracy and consistency.
- CI/CD Implementation: Use Jenkins and Bitbucket to create and maintain metadata and job files. Implement continuous integration and continuous deployment (CI/CD) processes in both development and production environments to deploy data pipelines efficiently.
- Collaboration and Documentation: Work effectively with cross-functional teams, including software engineers, data scientists, and DevOps, to ensure successful project delivery. Document data pipelines and architecture to ensure knowledge transfer and maintainability. Participate in stakeholder interviews, workshops, and design reviews to define data models, pipelines, and workflows.

Technical and Functional Skills:
- Education and Experience: Bachelor's degree with 7+ years of experience, including at least 3+ years of hands-on experience in SQL and Python.
- Technical Proficiency: Proficiency in writing and optimizing SQL queries in MySQL and SQL Server. Expertise in Python for writing reusable components and enhancing existing ETL scripts. Solid understanding of ETL concepts and data pipeline architecture, including CDC, incremental loads, and slowly changing dimensions (SCDs). Hands-on experience with PySpark; knowledge of and experience with Databricks is a bonus. Familiarity with data warehousing solutions and ETL processes. Understanding of data architecture and backend solution design.
- Cloud and CI/CD Experience: Experience with cloud platforms such as AWS, Azure, or Google Cloud. Familiarity with Jenkins and Bitbucket for CI/CD processes.
- Additional Skills: Ability to work independently and manage multiple projects simultaneously.
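
Since the skills above call out CDC, incremental loads, and SCDs, here is a minimal Delta Lake MERGE upsert sketch; the table paths and join key are assumptions, and a full SCD Type 2 would additionally expire the old row version rather than update it in place.

```python
# Hypothetical incremental upsert with Delta Lake MERGE on Databricks.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.read.format("delta").load("/delta/staging/customers")
target = DeltaTable.forPath(spark, "/delta/silver/customers")

(target.alias("t")
 .merge(updates.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdateAll()      # changed attributes (CDC update)
 .whenNotMatchedInsertAll()   # new customers (CDC insert)
 .execute())
```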

Posted 3 weeks ago

Apply

10.0 - 15.0 years

12 - 17 Lacs

Mumbai

Work from Office

The leader must demonstrate an ability to anticipate, understand, and act on evolving customer needs, both stated and unstated. Through this, the candidate must create a customer-centric organization and use innovative thinking frameworks to foster value-added relations. With the right balance of bold initiatives, continuous improvement, and governance, the leader must adhere to the delivery standards set by the client and eClerx by leveraging knowledge of market drivers and competition to effectively anticipate trends and opportunities. The leader must also demonstrate a capacity to transform, align, and energize organization resources, and take appropriate risks to lead the organization in a new direction. As a leader, the candidate must build engaged and high-impact direct, virtual, and cross-functional teams, take the lead in raising the performance bar, build capability, and bring out the best in their teams. By collaborating and forging partnerships both within and outside the functional area, the leader must work towards a shared vision and achieve positive business outcomes.

Associate Program Manager role and responsibilities:
- Represent eClerx in client pitches, external forums, and COE (Center of Excellence) activities to promote cloud engineering expertise.
- Lead research, assessments, and development of best practices to keep our cloud engineering solutions at the forefront of technology.
- Contribute to the growth of the cloud engineering practice through thought leadership, including the creation of white papers and articles.
- Lead and collaborate on multi-discipline assessments at client sites to identify new cloud-based opportunities.
- Provide technical leadership in the design and development of robust, scalable cloud architectures.
- Drive key cloud engineering projects, ensuring high performance, scalability, and adherence to best practices.
- Design and implement data architectures that address performance, scalability, and data latency requirements.
- Lead the development of cloud-based solutions, ensuring they are scalable, robust, and aligned with business needs.
- Anticipate and mitigate data bottlenecks, proposing strategies to enhance data processing efficiency.
- Provide mentorship and technical guidance to junior team members.

Technical and functional skills:
- Bachelor's degree with 10+ years of experience in data management and cloud engineering.
- Proven experience in at least 2-3 large-scale cloud implementations within industries such as Retail, Manufacturing, or Technology.
- Expertise in Azure Cloud, Azure Data Lake, Databricks, Teradata, and ETL technologies.
- Strong problem-solving skills with a focus on performance optimization and data quality.
- Ability to collaborate effectively with analysts, subject matter experts, and external partners.

Posted 3 weeks ago

Apply

7.0 - 12.0 years

7 - 11 Lacs

Hyderabad

Hybrid

Immediate openings: ITSS - Senior Azure Developer / Data Engineer (Bangalore, Contract)
Experience: 5+ years
Skill: ITSS - Senior Azure Developer / Data Engineer
Location: Bangalore
Notice Period: Immediate
Employment Type: Contract
Working Mode: Hybrid

Job Description - Senior Azure Developer
Locations: Bangalore, Hyderabad, Chennai, and Noida
Role: Data Engineer
Experience level: Mid-level
Primary skillsets: Azure (ADF, ADLS, Key Vault)
Secondary skillsets: Databricks
Good to have:
- Ability to communicate well.
- Experience in cloud applications, especially Azure (essentially the primary skillsets listed above).
- Comfortable working in an agile framework.
- Experience in ETL, SQL, and PySpark.
- Able to run with a task without waiting for direction all the time.
- Experience with Git repositories and release pipelines.
- Any certifications on Azure; any certification on Databricks will be a topping on the cake.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Experience: 5+ years
Skill: Power BI Developer
Location: Pan India
Notice Period: Immediate
Employment Type: Contract

Job Description - Power BI Developer. Top 3 skills:
- Excellent expertise in DAX queries and functions in Power BI.
- Experience with tools and systems in the MS SQL Server BI stack, including SSRS, Power Query, and Power BI.
- Experience working with data from cloud data sources.
- Experience with ADO and Databricks.

Posted 3 weeks ago

Apply

6.0 - 11.0 years

4 - 7 Lacs

Gurugram

Work from Office

- At least 6-8 years of experience in ETL testing with automation testing.
- Expert in database testing using SQL.
- Must have worked on Databricks and be aware of Databricks-related concepts.
- Check data source locations and formats, perform data counts, and verify that columns and data types meet requirements.
- Test the accuracy and completeness of the data.
- Identify key ETL mapping scenarios and create SQL queries that simulate the scenarios.
- Should be able to develop and execute test plans, test cases, and test scripts.
- Experience in writing complex SQL queries and validating enterprise data warehouse applications.
- Understanding of data models, ETL architecture, and data warehouse concepts.
- Must have worked with Agile methodology.
- Good to have: exposure to PySpark.
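
The count and completeness checks described above might look like the following PySpark assertions; the table and column names are invented for illustration.

```python
# Hypothetical source-to-target validation for an ETL load.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

src = spark.sql("SELECT COUNT(*) AS c FROM staging.orders").first()["c"]
tgt = spark.sql("SELECT COUNT(*) AS c FROM dw.fact_orders").first()["c"]
assert src == tgt, f"Row count mismatch: source={src}, target={tgt}"

# Completeness: a mandatory column must contain no NULLs.
nulls = spark.sql(
    "SELECT COUNT(*) AS c FROM dw.fact_orders WHERE order_id IS NULL"
).first()["c"]
assert nulls == 0, f"{nulls} NULL order_id values found"
```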

Posted 3 weeks ago

Apply

6.0 - 8.0 years

8 - 10 Lacs

Gurugram

Work from Office

- At least 6-8 years of experience in ETL testing with automation testing.
- Expert in database testing using SQL.
- Must have worked on Databricks and be aware of Databricks-related concepts.
- Check data source locations and formats, perform data counts, and verify that columns and data types meet requirements.
- Test the accuracy and completeness of the data.
- Identify key ETL mapping scenarios and create SQL queries that simulate the scenarios.
- Should be able to develop and execute test plans, test cases, and test scripts.
- Experience in writing complex SQL queries and validating enterprise data warehouse applications.
- Understanding of data models, ETL architecture, and data warehouse concepts.
- Must have worked with Agile methodology.
- Good to have: exposure to PySpark.

Posted 3 weeks ago

Apply

4.0 - 5.0 years

6 - 7 Lacs

Bengaluru

Work from Office

Develop and optimize data pipelines using Databricks and PySpark. Process large-scale data for analytics and reporting. Implement best practices for ETL and data warehousing.

Posted 3 weeks ago

Apply

10.0 - 17.0 years

12 - 22 Lacs

Gurugram

Work from Office

We know the importance that food plays in people's lives: the power it has to bring people, families, and communities together. Our purpose is to bring enjoyment to people's lives through great-tasting food, in a way which reflects our values. McCain has recently committed to implementing regenerative agriculture practices across 100 percent of our potato acreage by 2030. Ask us more about our commitment to sustainability.

OVERVIEW
McCain is embarking on a digital transformation. As part of this transformation, we are making significant investments in our data platforms, common data models, data structures, and data policies to increase the quality of our data and the confidence of our business teams to use this data to make better decisions and drive value. We have a new and ambitious global Digital & Data group, which serves as a resource to the business teams in our regions and global functions. We are currently recruiting an experienced Data Architect to build the enterprise data model for McCain.

JOB PURPOSE:
Reporting to the Data Architect Lead, the Global Data Architect will take a lead role in creating the enterprise data model for McCain Foods, bringing together data assets across agriculture, manufacturing, supply chain, and commercial. This data model will be the foundation for our analytics program, which seeks to bring together McCain's industry-leading operational data sets with third-party data sets to drive world-class analytics. Working with a diverse team of data governance experts, data integration architects, data engineers, and our analytics team, including data scientists, you will play a key role in creating the conceptual, logical, and physical data models that underpin the Global Digital & Data team's activities.

JOB RESPONSIBILITIES:
- Develop an understanding of McCain's key data assets and work with the data governance team to document key data sets in our enterprise data catalog.
- Work with business stakeholders to build a conceptual business model by understanding the end-to-end business process, challenges, and future business plans.
- Collaborate with application architects to bring in the analytics point of view when designing end-user applications.
- Develop a logical data model based on the business model and align it with business teams.
- Work with technical teams to build the physical data model and data lineage, and keep all relevant documentation current.
- Develop a process to manage all models and appropriate controls.
- With a use-case-driven approach, enhance and expand the enterprise data model based on legacy on-premises analytics products and new cloud data products, including advanced analytics models.
- Design key enterprise conformed dimensions and ensure understanding across data engineering teams (including third parties); keep the data catalog and wiki tools current.
- Act as the primary point of contact for new Digital and IT programs, ensuring alignment with the enterprise data model.
- Be a clear player in shaping McCain's cloud migration strategy, enabling advanced analytics and world-leading business intelligence analytics.
- Work in close collaboration with data engineers, ensuring data modeling best practices are followed.

MEASURES OF SUCCESS:
- A demonstrated history of driving change in a large, global organization.
- A true passion for well-structured and well-governed data; you know, and can explain to others, the real business risk of too many mapping tables.
- You live for a well-designed and well-structured conformed dimension table.
- Focus on use-case-driven prioritization; you are comfortable pushing business teams for requirements that connect to business value, and also able to challenge requirements that will not achieve the business's goals.
- Developing data models that are not just elegant but truly optimized for analytics, for both advanced analytics use cases and dashboarding/BI tools.
- A coaching mindset wherever you go, including with the business, data engineers, and other architects.
- An infectious enthusiasm for learning: about our business, deepening your technical knowledge, and meeting our teams.
- A "get things done" attitude: roll up the sleeves when necessary; work with and through others as needed.

KEY QUALIFICATIONS & EXPERIENCES:
Data design and governance:
- At least 5 years of experience with data modeling to support business processes.
- Ability to design complex data models that connect internal and external data.
- At least 8 years of experience with requirements analysis; experience working with business stakeholders on data design.
- Experience working with real-time data.
- Ability to draft accurate documentation that supports the project management effort and coding.
- Nice to have: ability to profile data for data quality requirements; experience with data catalog tools.

Technical skills:
- At least 5 years of experience designing and working on data warehouse solutions and building data models; preference for S/4HANA knowledge.
- At least 2 years of experience with visualization tools, preferably Power BI or similar.
- At least 2 years designing and working on cloud data warehouse solutions; preference for Azure Databricks, Azure Synapse, or earlier Microsoft solutions.
- Experience with Visio, PowerDesigner, or similar data modeling tools.
- Experience working in Azure or a similar cloud environment.
- Must have: ability to develop SQL queries for assessing, manipulating, and accessing data stored in relational databases; hands-on experience in PySpark and Python.
- Nice to have: experience with data profiling tools (Informatica, Collibra, or similar data quality tools); working experience with MDX; ability to understand and work with unstructured data; at least one successful enterprise-wide cloud migration as the data architect or data modeler, mainly focused on building data models; experience with manufacturing/digital manufacturing; experience designing enterprise data models for analytics, specifically in a Power BI environment; experience with machine learning model design (Python preferred).

Behaviors and attitudes:
- Comfortable working with ambiguity and defining a way forward.
- Experience challenging current ways of working.
- A documented history of successfully driving projects to completion.
- Excellent interpersonal and communication skills.
- Attention to detail.
- Comfortable leading others through change.
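
As a hedged example of the SQL profiling the "must have" item calls for, a small query computing row counts, key cardinality, and null rates; the table and columns are invented for illustration.

```python
# Hypothetical data-profiling query run through Spark SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("""
    SELECT
        COUNT(*)                                          AS row_count,
        COUNT(DISTINCT material_id)                       AS distinct_materials,
        SUM(CASE WHEN plant_id IS NULL THEN 1 ELSE 0 END) AS null_plants
    FROM silver.production_orders
""").show()
```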

Posted 3 weeks ago

Apply

5.0 - 8.0 years

16 - 30 Lacs

Kolkata

Hybrid

Data Modeler - Hybrid Data Environments

Job Summary:
We are in search of an experienced Data Modeler who possesses a deep understanding of traditional data stores such as SQL Server and Oracle DB, as well as proficiency in Azure/Databricks cloud environments. The ideal candidate will be adept at comprehending business processes and deriving methods to define analytical data models that support enterprise-level analytics, insights generation, and operational reporting.

Key Responsibilities:
- Collaborate with business analysts and stakeholders to understand business processes and requirements, translating them into data modeling solutions.
- Design and develop logical and physical data models that effectively capture the granularity of data necessary for analytical and reporting purposes.
- Migrate and optimize existing data models from traditional on-premises data stores to Azure/Databricks cloud environments, ensuring scalability and performance.
- Establish data modeling standards and best practices to maintain the integrity and consistency of the data architecture.
- Work closely with data engineers and BI developers to ensure that the data models support the needs of analytical and operational reporting.
- Conduct data profiling and analysis to understand data sources, relationships, and quality, informing the data modeling process.
- Continuously evaluate and refine data models to accommodate evolving business needs and to leverage new data modeling techniques and cloud capabilities.
- Document data models, including entity-relationship diagrams, data dictionaries, and metadata, to provide clear guidance for development and maintenance.
- Provide expertise in data modeling and data architecture to support the development of data governance policies and procedures.

Qualifications:
- Bachelor's degree in Computer Science, Information Systems, Data Science, or a related field.
- Minimum of 5 years of experience in data modeling, with a strong background in both traditional RDBMS and modern cloud-based data platforms.
- Proficiency in SQL and experience with data modeling tools (e.g., ER/Studio, ERwin, PowerDesigner).
- Familiarity with Azure cloud services, Databricks, and other big data technologies.
- Understanding of data warehousing concepts, including dimensional modeling, star schemas, and snowflake schemas.
- Ability to translate complex business requirements into effective data models that support analytical and reporting functions.
- Strong analytical skills and attention to detail.
- Excellent communication and collaboration abilities, with the capacity to engage with both technical and non-technical stakeholders.
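
To ground the dimensional-modeling skills above, a small star-schema sketch (one dimension, one fact) expressed as Spark SQL DDL; all table and column names are illustrative assumptions.

```python
# Hypothetical star schema: dim_customer <- fact_sales, as Delta tables.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("""
    CREATE TABLE IF NOT EXISTS dim_customer (
        customer_key BIGINT,   -- surrogate key
        customer_id  STRING,   -- natural/business key
        segment      STRING
    ) USING DELTA
""")

spark.sql("""
    CREATE TABLE IF NOT EXISTS fact_sales (
        customer_key BIGINT,   -- foreign key to dim_customer
        order_date   DATE,
        amount       DECIMAL(18, 2)
    ) USING DELTA
""")
```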

Posted 3 weeks ago

Apply

6.0 - 10.0 years

14 - 19 Lacs

Coimbatore

Work from Office

We are seeking a Senior Data & AI/ML Engineer with deep expertise in GCP who will not only build intelligent and scalable data solutions but also champion our internal capability building and partner-level excellence. This is a high-impact role for a seasoned engineer who thrives in designing GCP-native, AI/ML-enabled data platforms. You'll play a dual role as a hands-on technical lead and a strategic enabler, helping drive our Google Cloud Data & AI/ML specialization track forward through successful implementations, reusable assets, and internal skill development.

Preferred Qualifications:
- GCP professional certifications: Data Engineer or Machine Learning Engineer.
- Experience contributing to a GCP Partner specialization journey.
- Familiarity with Looker, Data Catalog, Dataform, or other GCP data ecosystem tools.
- Knowledge of data privacy, model explainability, and AI governance is a plus.

Work Location: Remote

Key Responsibilities:
Data & AI/ML architecture:
- Design and implement data architectures for real-time and batch pipelines, leveraging GCP services such as BigQuery, Dataflow, Dataproc, Pub/Sub, Vertex AI, and Cloud Storage.
- Lead the development of ML pipelines, from feature engineering to model training and deployment, using Vertex AI, AI Platform, and Kubeflow Pipelines.
- Collaborate with data scientists to operationalize ML models and support MLOps practices using Cloud Functions, CI/CD, and Model Registry.
- Define and implement data governance, lineage, monitoring, and quality frameworks.

Google Cloud partner enablement:
- Build and document GCP-native solutions and architectures that can be used for case studies and specialization submissions.
- Lead client-facing PoCs or MVPs to showcase AI/ML capabilities using GCP.
- Contribute to building repeatable solution accelerators in Data & AI/ML.
- Work with the leadership team to align with Google Cloud Partner Program metrics.

Team development:
- Mentor engineers and data scientists toward achieving GCP certifications, especially in Data Engineering and Machine Learning.
- Organize and lead internal GCP AI/ML enablement sessions.
- Represent the company in Google partner ecosystem events, tech talks, and joint GTM engagements.

What We Offer:
- Best-in-class packages.
- Paid holidays and flexible time-off policies.
- Casual dress code and a flexible working environment.
- Opportunities for professional development in an engaging, fast-paced environment.
- Medical insurance covering self and family up to 4 lakhs per person.
- A diverse and multicultural work environment.
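
As one concrete building block of the GCP pipelines described above, a minimal sketch querying BigQuery from Python; the project, dataset, and table names are assumptions.

```python
# Hypothetical BigQuery batch query using the official client library.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # assumed project ID
query = """
    SELECT event_date, COUNT(*) AS events
    FROM `example-project.analytics.events`
    GROUP BY event_date
    ORDER BY event_date
"""
for row in client.query(query).result():
    print(row.event_date, row.events)
```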

Posted 3 weeks ago

Apply