995 Databricks Jobs - Page 7

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

3.0 - 8.0 years

9 - 13 Lacs

Bengaluru

Work from Office

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience: 3 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. You will collaborate with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models. Your typical day will involve working on the data platform blueprint and design, collaborating with architects, and ensuring seamless integration between systems and data models.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist with the data platform blueprint and design.
- Collaborate with Integration Architects and Data Architects.
- Ensure cohesive integration between systems and data models.
- Implement data platform components.
- Troubleshoot and resolve data platform issues.

Professional & Technical Skills:
- Must-have: proficiency in the Databricks Unified Data Analytics Platform.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization, to ensure data quality and integrity.

Additional Information:
- The candidate should have a minimum of 3 years of experience with the Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Posted 4 days ago

Apply

4.0 - 9.0 years

4 - 9 Lacs

Chennai

Work from Office

Responsibilities:
- Design, build, and maintain scalable data pipelines in Databricks (a minimal sketch follows this listing).
- Collaborate with cross-functional teams to gather and interpret data requirements.
- Develop data models and perform data analysis using SQL and Python.
- Implement and optimize ETL processes for data ingestion and transformation.
- Monitor and troubleshoot data pipeline issues to ensure data integrity and availability.
- Create data visualizations and dashboards to present findings to stakeholders.
- Stay up to date with the latest features and best practices in Databricks and big data technologies.

Requirements:
- Proficiency in Databricks and experience with Apache Spark.
- Strong knowledge of SQL and Python.
- Experience in data modeling, ETL processes, and data warehousing.
- Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
- Ability to handle large datasets and perform data analysis efficiently.
- Excellent problem-solving and analytical skills.
- Strong communication skills and a collaborative mindset.
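
For context, a minimal PySpark sketch of the kind of Databricks pipeline this posting describes: read raw data, clean it, and publish a Delta table. The paths, table, and column names are illustrative assumptions, not details from the listing.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Ingest: raw data landed as JSON (path assumed)
raw = spark.read.json("/mnt/landing/raw_orders/")

# Transform: de-duplicate, fix types, drop bad rows
curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
)

# Load: publish a Delta table for downstream SQL and dashboards
curated.write.format("delta").mode("overwrite").saveAsTable("curated.orders")
```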

Posted 4 days ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: The Data Engineering Sr. Advisor demonstrates expertise in data engineering technologies with a focus on engineering, innovation, strategic influence, and a product mindset. This individual will act as a key contributor on the team to design, build, test, and deliver large-scale software applications, systems, platforms, services, or technologies in the data engineering space, and will have the opportunity to work directly with partner IT and business teams, owning and driving major deliverables across all aspects of software delivery.

The candidate will play a key role in automating processes on Databricks and AWS, collaborating with business and technology partners to gather requirements and then develop and implement solutions. The individual must have strong analytical and technical skills coupled with the ability to positively influence the delivery of data engineering products. The applicant will work in a team that demands an innovation-, cloud-first, self-service-first, and automation-first mindset coupled with technical excellence, building solutions with internal and external stakeholders and customers as part of Enterprise Data Engineering, and will need very strong technical and communication skills.

Delivery: Intermediate delivery skills, including the ability to deliver work at a steady, predictable pace to achieve commitments, decompose work assignments into small-batch releases, and contribute to tradeoff and negotiation discussions.
Domain Expertise: A demonstrated track record of domain expertise, including the ability to understand the technical concepts necessary to do the job effectively; willingness, cooperation, and concern for business issues; and in-depth knowledge of the immediate systems worked on.
Problem Solving: Proven problem-solving and debugging skills that allow you to determine the source of issues in unfamiliar code or systems, recognize and solve repetitive problems rather than working around them, treat mistakes as learning opportunities, and break large problems down into smaller, more manageable ones.

Roles & Responsibilities:
- Deliver business needs end to end, from requirements through development into production.
- Through a hands-on engineering approach in the Databricks environment, deliver data engineering toolchains, platform capabilities, and reusable patterns.
- Follow software engineering best practices with an automation-first approach and a continuous learning and improvement mindset.
- Ensure adherence to enterprise architecture direction and architectural standards.
- Collaborate in a high-performing team environment, with the ability to influence and be influenced by others.

Experience Required:
- More than 12 years of experience in software engineering, building data engineering pipelines, middleware and API development, and automation.
- More than 3 years of experience with Databricks in an AWS environment.
- Data engineering experience.

Experience Desired:
- Expertise in Agile software development principles and patterns.
- Expertise in building streaming, batch, and event-driven architectures and data pipelines (see the streaming sketch below).

Primary Skills:
- Cloud-based security principles and protocols such as OAuth2, JWT, data encryption, hashing, and secret management.
- Expertise in big data technologies such as Spark, Hadoop, Databricks, Snowflake, EMR, and Glue.
- Good understanding of Kafka, Kafka Streams, Spark Structured Streaming, and configuration-driven data transformation and curation.
- Expertise in building cloud-native microservices, containers, Kubernetes, and platform-as-a-service technologies such as OpenShift and Cloud Foundry.
- Experience with multi-cloud software-as-a-service products such as Databricks and Snowflake.
- Experience with Infrastructure-as-Code (IaC) tools such as Terraform and AWS CloudFormation.
- Experience with messaging systems such as Apache ActiveMQ, WebSphere MQ, Apache Artemis, Kafka, and AWS SNS.
- Experience with API and microservices stacks such as Spring Boot and Quarkus.
- Expertise in cloud technologies such as AWS Glue, Lambda, S3, Elasticsearch, API Gateway, and CloudFront.
- Experience with one or more programming and scripting languages (Python, Scala, JVM-based languages, or JavaScript) and the ability to pick up new languages.
- Experience building CI/CD pipelines using Jenkins or GitHub Actions.
- Strong expertise with source code management and its best practices.
- Proficiency in self-testing of applications, unit testing, use of mock frameworks, and test-driven development (TDD).
- Knowledge of the Behavior-Driven Development (BDD) approach.

Additional Skills:
- Ability to perform detailed analysis of business problems and technical environments.
- Strong oral and written communication skills.
- Ability to think strategically, implement iteratively, and estimate the financial impact of design/architecture alternatives.
- Continuous focus on ongoing learning and development.
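
As one concrete illustration of the streaming item above, a hedged Spark Structured Streaming sketch that reads events from Kafka and writes them to Delta. The broker, topic, schema, and paths are assumptions, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("payload", StringType()),
    StructField("value", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # broker assumed
    .option("subscribe", "events")                     # topic assumed
    .load()
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/chk/events")   # path assumed
    .start("/mnt/delta/events")                        # path assumed
)
query.awaitTermination()
```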

Posted 4 days ago

Apply

3.0 - 8.0 years

5 - 8 Lacs

Kolkata

Work from Office

We are looking for a Senior Python Developer with a passion for AI research and API development to join our growing team. In this role, you will be responsible for building scalable, high-performance APIs and contributing to AI/ML research and implementation. You will work closely with data scientists, researchers, and product teams to design and deploy intelligent systems that power our next-generation applications.

Key Responsibilities:
- Design, develop, and maintain Python-based APIs for AI/ML models and services.
- Collaborate with AI researchers to implement and optimize machine learning models.
- Conduct research into new AI/ML techniques and evaluate their applicability to business problems.
- Build RESTful and GraphQL APIs using frameworks like FastAPI, Flask, or Django REST Framework (a minimal FastAPI sketch follows this listing).
- Write clean, testable, and maintainable Python code with a focus on performance and scalability.
- Participate in code reviews, mentor junior developers, and contribute to best practices.
- Integrate AI models with backend systems and frontend applications.
- Stay up to date with AI/ML trends, Python libraries (e.g., PyTorch, TensorFlow, scikit-learn), and API design patterns.
- Work in an agile environment, delivering high-quality software in iterative sprints.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- 4+ years of professional experience in software development, with 3+ years in Python.
- Strong experience with Python web frameworks (e.g., FastAPI, Flask, Django).

What We're Looking For in a Candidate:
- A curious mind with a passion for AI and software development.
- A team player who can mentor and guide others.
- A self-starter who can take initiative and deliver results.
- A lifelong learner who stays current with emerging technologies and trends.

Why Join Us?
- Work on cutting-edge AI projects with real-world impact.
- Collaborate with top-tier researchers and engineers.
- Flexible work environment and remote-friendly options.
- Competitive salary and performance-based incentives.
- Opportunities for professional growth and leadership.
- A culture that values innovation, collaboration, and continuous learning.
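
To make the API-building duties concrete, here is a minimal FastAPI sketch of a model-serving endpoint. The model artifact, feature names, and route are hypothetical placeholders.

```python
import joblib  # assumes a scikit-learn model serialized with joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical artifact

class Features(BaseModel):
    age: float
    income: float

@app.post("/predict")
def predict(features: Features) -> dict:
    # Feature order must match how the model was trained; illustrative only
    score = model.predict([[features.age, features.income]])[0]
    return {"prediction": float(score)}
```

Served locally with, for example, `uvicorn main:app --reload`.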

Posted 4 days ago

Apply

5.0 - 10.0 years

12 - 22 Lacs

Gurugram

Remote

Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field; a master's degree is a plus.
- Proven experience as a Data Engineer or in a similar role, with a focus on ETL processes and database management.
- Proficiency in the Microsoft Azure data management suite (MSSQL, Azure Databricks, Power BI, Data Factory, Azure cloud monitoring, etc.) and Python scripting.
- Strong knowledge of SQL and experience with database management systems.
- Strong development skills in Python and PySpark.
- Experience with data warehousing solutions and data mart creation.
- Familiarity with big data technologies (e.g., Hadoop, Spark) is a plus.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.

Preferred Qualifications:
- Databricks Certified Data Engineer Associate or Professional certification.
- Understanding of data modeling and data architecture principles.
- Experience with data governance and data security best practices.

Posted 4 days ago

Apply

8.0 - 10.0 years

20 - 25 Lacs

Pune

Work from Office

Responsibilities:
- Meet with managers to understand the company's big data needs.
- Develop big data solutions on AWS using Apache Spark, Databricks, Delta Tables, EMR, Athena, Glue, and Hadoop.
- Load disparate data sets and conduct pre-processing using Athena, Glue, and Spark (see the Athena sketch below).

Required Candidate Profile:
- Proficient in Python and PySpark.
- Extensive experience with Delta Tables and the JSON and Parquet file formats.
- Experience with AWS data analytics services such as Athena, Glue, Redshift, and EMR.
- Knowledge of NoSQL and RDBMS databases.
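
As a small illustration of the AWS analytics services named above, a hedged boto3 sketch that submits an Athena query. The region, database, table, and results bucket are assumptions.

```python
import boto3

athena = boto3.client("athena", region_name="ap-south-1")  # region assumed

response = athena.start_query_execution(
    QueryString="SELECT dt, COUNT(*) AS events FROM clickstream GROUP BY dt",
    QueryExecutionContext={"Database": "raw"},              # database assumed
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("Athena query started:", response["QueryExecutionId"])
```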

Posted 4 days ago

Apply

10.0 - 20.0 years

20 - 35 Lacs

Bengaluru

Remote

As the data engineering consultant, you should have the common traits and capabilities listed under Essential Requirements and meet many of the capabilities listed under Desirable Requirements.

Essential Requirements and Skills:
- 10+ years working with customers in the data analytics, big data, and data warehousing field.
- 10+ years working with data modeling tools.
- 5+ years building data pipelines for large customers.
- 2+ years of experience in the field of artificial intelligence leveraging big data, in a customer-facing services delivery role.
- 3+ years of experience in big data database design.
- A good understanding of LLMs, prompt engineering, fine-tuning, and training.
- Strong knowledge of SQL, NoSQL, and vector databases. Experience with popular enterprise databases such as SQL Server, MySQL, Postgres, and Redis is a must, as is experience with popular vector databases such as PGVector, Milvus, and Elasticsearch.
- Experience with major data warehousing providers such as Teradata.
- Experience with data lake tools such as Databricks, Snowflake, and Starburst.
- Proven experience building data pipelines and ETLs for both data transformation and extraction from multiple data sources, and automating their deployment and execution.
- Experience with tools such as Apache Spark, Apache Hadoop, Informatica, and similar data processing tools.
- Proficient knowledge of Python and SQL is a must.
- Proven experience building test procedures, ensuring the quality, reliability, performance, and scalability of data pipelines.
- Ability to develop applications that expose RESTful APIs for data querying and ingestion.
- Experience preparing training data for large language model ingestion and training (e.g., through vector databases).
- Experience integrating with RAG solutions and leveraging related tools such as NVIDIA Guardrails, and the ability to define and implement metrics for RAG solutions (a toy retrieval sketch follows this listing).
- Understanding of the typical AI tooling ecosystem, including knowledge and experience of Kubernetes, MLOps, LLMOps, and AIOps tools.
- Ability to gain customer trust and to plan, organize, and drive customer workshops.
- Good communication skills in English are a must.
- Ability to work in a highly efficient team using an Agile methodology such as Scrum or Kanban.
- Ability to hold extended pairing sessions with customers, enabling knowledge transfer in complex domains.
- Ability to influence and interact with confidence and credibility at all levels within the Dell Technologies companies and with our customers, partners, and vendors.
- Experience working on project teams within a defined methodology while adhering to margin, planning, and SOW requirements.
- Ability to be on-site during customer workshops and enablement sessions.

Desirable Requirements and Skills:
- Knowledge of widespread AI studios and AI workbenches is a plus.
- Experience building and using information retrieval (IR) frameworks to support LLM inferencing.
- Working knowledge of Linux is a plus.
- Knowledge of MinIO is appreciated.
- Experience using lean and iterative deployment methodologies.
- Working knowledge of cloud technologies is a plus.
- A university degree aligned to data engineering is a plus.
- Relevant industry certifications, e.g., Databricks Certified Data Engineer, Microsoft certifications, etc.
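
A library-agnostic sketch of the vector-retrieval step behind the RAG work described above: score stored chunk embeddings against a query by cosine similarity and return the best match. A real system would use one of the vector databases the posting names (PGVector, Milvus, Elasticsearch); the embeddings here are random stand-ins for a real embedding model.

```python
import numpy as np

def cosine_sim(query: np.ndarray, docs: np.ndarray) -> np.ndarray:
    # Score every row of `docs` against `query`
    return (docs @ query) / (
        np.linalg.norm(docs, axis=1) * np.linalg.norm(query) + 1e-9
    )

chunks = ["refund policy", "network SLA", "pricing tiers"]
chunk_vecs = np.random.rand(3, 4)  # stand-in for a real embedding model
query_vec = np.random.rand(4)

scores = cosine_sim(query_vec, chunk_vecs)
best = int(np.argmax(scores))
print(f"retrieved chunk: {chunks[best]!r} (score={scores[best]:.3f})")
```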

Posted 4 days ago

Apply

5.0 - 8.0 years

20 - 25 Lacs

Pune, Gurugram

Hybrid

Key Responsibilities:
- Design and develop ETL/ELT pipelines using Azure Data Factory, Snowflake, and DBT (see the Snowflake sketch below).
- Build and maintain data integration workflows from various data sources to Snowflake.
- Write efficient and optimized SQL queries for data extraction and transformation.

Must-Have Skills:
- Strong experience with Azure cloud platform services.
- Proven expertise in Azure Data Factory (ADF) for orchestrating and automating data pipelines.
- Proficiency in SQL for data analysis and transformation.
- Hands-on experience with Snowflake and SnowSQL for data warehousing.
- Practical knowledge of DBT (Data Build Tool) for transforming data in the warehouse.
- Experience working in cloud-based data environments with large-scale datasets.

Good-to-Have Skills:
- Experience with Azure Data Lake, Azure Synapse, or Azure Functions.
- Familiarity with Python or PySpark for custom data transformations.
- Knowledge of business intelligence tools (e.g., Power BI, Tableau) is a plus.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- 5+ years of experience in data engineering roles using Azure and Snowflake.
- Strong problem-solving, communication, and collaboration skills.
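
A hedged sketch of one ELT pattern this role implies: pushing a transformation down to Snowflake from Python via the Snowflake connector. The account, credentials, warehouse, and table names are placeholder assumptions.

```python
import snowflake.connector

# Connection details are placeholders
conn = snowflake.connector.connect(
    account="xy12345",
    user="etl_user",
    password="***",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

cur = conn.cursor()
# ELT-style step: build a curated table from staged raw data inside Snowflake
cur.execute("""
    CREATE OR REPLACE TABLE CURATED.DAILY_SALES AS
    SELECT order_date, SUM(amount) AS total_amount
    FROM STAGING.RAW_ORDERS
    GROUP BY order_date
""")
cur.close()
conn.close()
```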

Posted 4 days ago

Apply

6.0 - 8.0 years

12 - 16 Lacs

Kolkata

Hybrid

What will your day look like?
- Leading a dynamic team to deliver high-impact risk solutions across credit risk (underwriting, exposure controls, and line management).
- Working with stakeholders across product management, data science, and engineering to build relationships with partner teams and drive implementation of risk strategies.
- Managing challenging time constraints to ensure on-time delivery of projects.
- Working closely with partner teams to identify, evaluate, and recommend new data that helps in risk differentiation.
- Analyzing loss trends and simulating risk decisioning strategies that help optimize revenue, approval rates, etc. (see the sketch after this list).
- Working closely with the data science team to recommend credit risk decisioning and model deployment strategy.
- Building a risk scorecard that leverages both internal and external performance data for credit decisioning at underwriting and at account management reviews for existing customers.
- Collating analysis and building presentations that articulate the risk strategy for the leadership team.

To help us level up, you will ideally have:
- A quantitative background in engineering, statistics, math, economics, business, or related disciplines.
- 5+ years of experience analyzing data using database query languages (e.g., SQL) and programming and developer tools such as Python, R, or Databricks in a finance or analytics field.
- 2+ years of experience leading a high-performing team of analysts.
- Experience working with non-traditional data, such as social media, will be a big plus.
- Prior model-building experience is a plus but not critical.
- An analytical mindset and strong problem-solving skills.
- Attention to detail and the ability to multitask.
- Comfort working in a fast-paced environment and dealing with ambiguity.
- Strong communication, interpersonal, and presentation skills, and the ability to engage and collaborate with multiple stakeholders across teams.
- An extremely proactive communication style: willing to raise flags when needed and keep team members informed of ongoing risk or fraud related activities.
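
The toy pandas sketch referenced above: sweep a score cutoff and observe the approval-rate/loss-rate tradeoff, in the spirit of "simulate risk decisioning strategies". The data is synthetic and the score/default relationship is an assumption for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
score = rng.uniform(300, 850, 10_000)
# Synthetic ground truth: default probability rises as the score falls
p_default = (850 - score) / 850 * 0.25
apps = pd.DataFrame({
    "risk_score": score,
    "defaulted": rng.random(10_000) < p_default,
})

for cutoff in (550, 600, 650, 700):
    approved = apps[apps["risk_score"] >= cutoff]
    approval_rate = len(approved) / len(apps)
    loss_rate = approved["defaulted"].mean()
    print(f"cutoff={cutoff}: approval={approval_rate:.1%}, loss={loss_rate:.1%}")
```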

Posted 4 days ago

Apply

3.0 - 8.0 years

6 - 16 Lacs

Pune, Gurugram, Bengaluru

Work from Office

Qualification:
- 4-5 years of good hands-on exposure to big data technologies: PySpark (DataFrame and Spark SQL), Hadoop, and Hive.
- Good hands-on experience with Python and Bash scripts.
- Hands-on experience with cloud-platform-provided big data technologies.
- Good understanding of SQL and data warehouse tools such as Redshift.
- Strong analytical, problem-solving, data analysis, and research skills.
- Demonstrable ability to think outside the box and not be dependent on readily available tools.
- Excellent communication, presentation, and interpersonal skills are a must.

Good to have:
- Orchestration with Airflow and experience with any job scheduler (a minimal Airflow sketch follows the responsibilities below).
- Experience migrating workloads from on-premises to cloud, and cloud-to-cloud migrations.

Roles & Responsibilities:
- Develop efficient ETL pipelines per business requirements, following development standards and best practices.
- Perform integration testing of created pipelines in AWS/Azure environments.
- Provide estimates for development, testing, and deployment on different environments.
- Participate in peer code reviews to ensure our applications comply with best practices.
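
The Airflow sketch referenced above: a daily DAG with an ingest task followed by a transform task. The DAG id and task commands are placeholders, not details from the posting.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_etl",                 # placeholder name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",         # Airflow 2.x; newer releases use `schedule`
    catchup=False,
) as dag:
    ingest = BashOperator(task_id="ingest", bash_command="python ingest.py")
    transform = BashOperator(task_id="transform", bash_command="python transform.py")

    ingest >> transform  # transform runs only after ingest succeeds
```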

Posted 5 days ago

Apply

7.0 - 12.0 years

5 - 9 Lacs

Pune

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience: 7.5 years
Educational Qualification: BTech or equivalent; minimum 15 years of education

Summary: As an AI Advisor, you will be responsible for driving business outcomes for clients through analytics using the Databricks Unified Data Analytics Platform. Your typical day will involve supporting delivery leads, account management, and operational excellence teams to deliver client value through analytics and industry best practices.

Roles & Responsibilities:
- Lead the development and deployment of advanced analytics solutions using the Databricks Unified Data Analytics Platform.
- Conduct detailed analysis of complex data sets, employing statistical methodologies and data munging techniques for actionable insights.
- Collaborate with cross-functional teams, applying expertise in diverse analytics techniques, including implementation of algorithms such as linear regression, logistic regression, decision trees, and clustering (a small logistic regression sketch follows this listing).
- Communicate technical findings effectively to stakeholders, using data visualization tools for clarity.
- Stay updated with the latest advancements in analytics and data science, integrating innovative approaches for sustained competitive advantage.

Professional & Technical Skills:
- Must-have: experience with the Databricks Unified Data Analytics Platform.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools.
- Experience implementing analytics techniques such as linear regression, logistic regression, decision trees, and clustering algorithms.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization, to ensure data quality and integrity.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience with the Databricks Unified Data Analytics Platform.
- The ideal candidate will have a strong educational background in statistics, mathematics, computer science, or a related field, along with a proven track record of delivering impactful data-driven solutions.
- This position is based at our Bengaluru office.
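
To ground one of the techniques listed (logistic regression on Databricks), a small Spark MLlib sketch with synthetic data. The feature and label columns are assumptions; real work would read a curated table.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("lr-demo").getOrCreate()

# Tiny synthetic dataset; real work would read a curated Delta table
df = spark.createDataFrame(
    [(1.0, 20.0, 0), (3.0, 45.0, 1), (2.0, 30.0, 0), (5.0, 80.0, 1)],
    ["tenure_years", "monthly_spend", "label"],
)

assembler = VectorAssembler(
    inputCols=["tenure_years", "monthly_spend"], outputCol="features"
)
model = LogisticRegression(featuresCol="features", labelCol="label").fit(
    assembler.transform(df)
)
print("coefficients:", model.coefficients)
```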

Posted 5 days ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Pune

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: PySpark
Good-to-have skills: Databricks Unified Data Analytics Platform, Apache Spark, Talend ETL
Minimum experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. Your typical day will involve collaborating with the team to develop and implement solutions, ensuring the applications are aligned with business needs. You will also engage with multiple teams, contribute to key decisions, and provide problem-solving solutions for your team and across multiple teams.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Design, build, and configure applications based on business process and application requirements.
- Collaborate with the team to develop and implement solutions.
- Ensure the applications are aligned with business needs.

Professional & Technical Skills:
- Must-have: proficiency in PySpark.
- Good to have: experience with Talend ETL, Apache Spark, and the Databricks Unified Data Analytics Platform.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization, to ensure data quality and integrity (a short munging sketch follows this listing).

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based in Mumbai.
- 15 years of full-time education is required.
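
The munging sketch referenced above: de-duplication, null filling, and min-max normalization in PySpark. Paths and column names are illustrative assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("munging").getOrCreate()
df = spark.read.parquet("/mnt/raw/measurements/")  # path assumed

# Compute min/max once for min-max normalization (assumes hi > lo)
stats = df.agg(F.min("reading").alias("lo"), F.max("reading").alias("hi")).first()

clean = (
    df.dropDuplicates(["sensor_id", "ts"])     # cleaning: remove duplicate readings
      .na.fill({"reading": 0.0})               # cleaning: fill missing values
      .withColumn(                             # transformation: scale to [0, 1]
          "reading_norm",
          (F.col("reading") - F.lit(stats["lo"])) / F.lit(stats["hi"] - stats["lo"]),
      )
)
clean.write.mode("overwrite").parquet("/mnt/clean/measurements/")
```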

Posted 5 days ago

Apply

15.0 - 20.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Python (programming language), Microsoft Azure Databricks, Microsoft Azure Data Services
Minimum experience: 12 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements, creating efficient and scalable applications that align with the organization's goals and objectives. Your typical day will involve collaborating with cross-functional teams, analyzing business requirements, and developing innovative solutions to meet customer needs, as well as testing, debugging, and troubleshooting applications to ensure smooth functioning and optimal performance.

Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems that apply across multiple teams.
- Develop and maintain high-quality software applications.
- Collaborate with business analysts and stakeholders to gather and analyze requirements.
- Design and implement application features and enhancements.
- Perform code reviews and ensure adherence to coding standards.
- Troubleshoot and debug application issues.
- Optimize application performance and scalability.
- Conduct unit testing and integration testing.
- Document application design, functionality, and processes.
- Stay updated with emerging technologies and industry trends.
- Provide technical guidance and mentorship to junior team members.

Professional & Technical Skills:
- Must-have: proficiency in the Databricks Unified Data Analytics Platform, Python, Microsoft Azure Databricks, and Microsoft Azure Data Services.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization, to ensure data quality and integrity.

Additional Information:
- The candidate should have a minimum of 12 years of experience with the Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Posted 5 days ago

Apply

8.0 - 13.0 years

15 - 19 Lacs

Chennai

Work from Office

Project Role: Technology Architect
Project Role Description: Review and integrate all application requirements, including functional, security, integration, performance, quality, and operations requirements. Review and integrate the technical architecture requirements. Provide input into final decisions regarding hardware, network products, system software, and security.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Cloud Data Architecture
Minimum experience: 5 years
Educational Qualification: BE or MCA

Summary: As a Technology Architect, you will be responsible for reviewing and integrating all application requirements, including functional, security, integration, performance, quality, and operations requirements. Your typical day will involve reviewing and integrating technical architecture requirements and providing input into final decisions regarding hardware, network products, system software, and security.

Roles & Responsibilities:
- A minimum of 8 years of experience with the Databricks Unified Data Analytics Platform.
- Good experience implementing data ingestion pipelines from multiple sources and creating end-to-end data pipelines on the Databricks platform.
- A strong educational background in technology and information architectures, along with a proven track record of delivering impactful data-driven solutions.
- Strong requirement analysis and technical solutioning skills in data and analytics.
- A client-facing role: running solution workshops and client visits, handling large RFP pursuits, and managing multiple stakeholders.

Technical Experience:
- 6 or more years of experience implementing data ingestion pipelines from multiple sources and creating end-to-end data pipelines on the Databricks platform.
- 2 or more years of experience using Python, PySpark, or Scala.
- Experience with Databricks in the cloud, on any of AWS, Azure, or GCP: ETL, data engineering, data cleansing, and loading into a data warehouse.
- Must-have skills: Databricks, Cloud Data Architecture, Python, Data Engineering.

Professional Attributes:
- Excellent writing, communication, and presentation skills.
- Eagerness to learn and develop oneself on an ongoing basis.
- Excellent client-facing and interpersonal skills.

Posted 5 days ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Pune

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience: 3 years
Educational Qualification: Graduate

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components; collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: no technology specialization

Key Responsibilities:
1. Show strong development skill in PySpark and Databricks to build complex data pipelines.
2. Deliver assigned development tasks independently or with little help.
3. Participate in daily status calls, with good communication skills to manage day-to-day work.

Technical Experience:
1. More than 5 years of experience in IT.
2. More than 2 years of experience with technologies such as Databricks and PySpark.
3. Able to build end-to-end pipelines using PySpark, with good knowledge of Delta Lake (see the upsert sketch below).
4. Good knowledge of Azure services such as Azure Data Factory and Azure storage solutions (ADLS, Delta Lake, Azure AD).

Professional Attributes:
1. Involvement in data engineering projects from the requirements phase through delivery.
2. Good communication skills to interact with clients and understand requirements.
3. Capability to work independently and guide the team.

Additional Information: Skill flex for PySpark; Bengaluru only; should be flexible to work from the client office.
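
The Delta Lake upsert sketch referenced above: a hedged PySpark MERGE from an incoming batch into a Delta table. The ADLS paths and the join key are assumptions.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable  # delta-spark package / Databricks runtime

spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

# Incoming batch landed in ADLS (path and join key are assumptions)
updates = spark.read.parquet("/mnt/adls/incoming/customers/")
target = DeltaTable.forPath(spark, "/mnt/adls/delta/customers")

(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()      # refresh rows that changed
    .whenNotMatchedInsertAll()   # insert customers seen for the first time
    .execute()
)
```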

Posted 5 days ago

Apply

10.0 - 20.0 years

50 - 70 Lacs

Hyderabad

Work from Office

Role & responsibilities: For this Engineering Director role, the most critical technical experiences are in scalable system design, modern data architecture, and team enablement. Experience with languages like Java is key for backend systems, while Python remains important for orchestration and analytics workloads. From a tooling standpoint, familiarity with Kubernetes, Terraform, and observability stacks (e.g., Datadog, Grafana) is essential for operational excellence. On the data side, platforms like Snowflake, Databricks, or lakehouses underpin most modern pipelines; a Director should be comfortable working with and evolving data architecture decisions around these, with recommendations from Architects. Additionally, privacy and security are becoming first-class concerns, so experience with basic data access controls and compliance policies (GDPR, CCPA) is a strong differentiator. Finally, the ability to mentor engineers and guide ad tech business strategy across teams, including cross-functional stakeholders in data science, customer success, and measurement, is an important characteristic for driving long-term success.

Posted 5 days ago

Apply

6.0 - 11.0 years

15 - 27 Lacs

Bengaluru

Hybrid

Primary Responsibilities:
- Develop visual reports, dashboards, and KPI scorecards using Power BI Desktop.
- Build Analysis Services reporting models.
- Connect to data sources, import data, and transform data for business intelligence.
- Implement row-level security on data and understand application security layer models in Power BI.
- Integrate Power BI reports into other applications using embedded analytics, such as the Power BI service (SaaS), or by API automation.
- Use advanced calculations on the data set.
- Design and develop Azure-based, data-centric applications to manage a large healthcare data application.
- Design, build, test, and deploy streaming pipelines for data processing in real time and at scale.
- Create ETL packages.
- Use Azure cloud services for ingestion and data processing.
- Own feature development using Microsoft Azure native services such as App Service, Azure Functions, Azure Storage, Service Bus queues, Event Hubs, Event Grid, Application Gateway, Azure SQL, Azure Databricks, etc.
- Identify opportunities to fine-tune and optimize applications running on Microsoft Azure: cost reduction, adoption of cloud best practices, and data and application security, covering scalability and high availability.
- Mentor the team on infrastructure, networking, data migration, monitoring, and troubleshooting aspects of Microsoft Azure.
- Focus on automation using Infrastructure as Code (IaC), Jenkins, Azure DevOps, Terraform, etc.
- Communicate effectively with other engineers and QA.
- Establish, refine, and integrate development and test environment tools and software as needed.
- Identify production and non-production application issues.

This is a Senior Cloud Data Engineer position requiring about 7+ years of hands-on technical experience in data processing, reporting, and cloud technologies, with working knowledge of executing projects using Agile methodologies.

Required Skills:
1. Envision the overall solution for defined functional and non-functional requirements, and define the technologies, patterns, and frameworks to materialize it.
2. Design and develop the framework of the system and be able to explain the choices made; write and review design documents explaining the overall architecture, framework, and high-level design of the application.
3. Create, understand, and validate the design and estimated effort for a given module/task, and be able to justify it.
4. Define in-scope and out-of-scope items and the assumptions made while creating effort estimates.
5. Identify and integrate all integration points in the context of the project as well as other applications in the environment.
6. Understand the business requirements and develop data models.

Technical Skills:
1. Strong proficiency as a Cloud Data Engineer using Power BI and Azure Databricks to support, design, develop, and deploy requested updates to new and existing cloud-based services.
2. Experience developing, implementing, monitoring, and troubleshooting applications in the Azure public cloud.
3. Proficiency in data modeling and reporting.
4. Design and implementation of database schemas.
5. Design and development of well-documented source code.
6. Development of both unit-testing and system-testing scripts to be incorporated into the QA process.
7. Automation of all deployment steps with Infrastructure as Code (IaC) and Jenkins Pipeline as Code (JPaC) concepts.
8. Definition of guidelines and benchmarks for NFR considerations during project implementation.
9. Required POCs to make sure that suggested designs/technologies meet the requirements.

Required Experience:
- 5 to 10+ years of professional experience developing with SQL, Power BI, SSIS, and Azure Databricks.
- 5 to 10+ years of professional experience using SQL Server for data storage in large-scale .NET solutions.
- Strong technical writing skills.
- Strong knowledge of build, deployment, and unit-testing tools.
- A highly motivated team player and a self-starter.
- Excellent verbal, phone, and written communication skills.
- Knowledge of cloud-based architecture and concepts.

Required Qualifications:
- Graduate or postgraduate degree in Computer Science, Engineering, Science, Mathematics, or a related field, with around 10 years of experience executing data reporting solutions.
- Cloud certification, preferably Azure.

Posted 5 days ago

Apply

8.0 - 13.0 years

15 - 25 Lacs

Kolkata

Work from Office

Design and implement data architecture solutions that align with business requirements. Develop and maintain data models, data dictionaries, and data flow diagrams.

Posted 5 days ago

Apply

6.0 - 11.0 years

11 - 21 Lacs

Udaipur, Jaipur, Bengaluru

Work from Office

Data Architect

Kadel Labs is a leading IT services company delivering top-quality technology solutions since 2017, focused on enhancing business operations and productivity through tailored, scalable, and future-ready solutions. With deep domain expertise and a commitment to innovation, we help businesses stay ahead of technological trends. As a CMMI Level 3 and ISO 27001:2022 certified company, we ensure best-in-class process maturity and information security, enabling organizations to achieve their digital transformation goals with confidence and efficiency.

Role: Data Architect
Experience: 6-10 years
Location: Udaipur, Jaipur, Bengaluru
Domain: Telecom

Job Description: We are seeking an experienced telecom data architect to join our team. In this role, you will be responsible for designing comprehensive data architectures and technical solutions for telecommunications industry challenges, leveraging TM Forum frameworks and modern data platforms. You will work closely with customers and technology partners to deliver data solutions that address complex telecommunications business requirements, including customer experience management, network optimization, revenue assurance, and digital transformation initiatives.

Key Responsibilities:
- Design and articulate enterprise-scale telecom data architectures incorporating TM Forum standards and frameworks, including SID (Shared Information/Data Model), TAM (Telecom Application Map), and eTOM (enhanced Telecom Operations Map).
- Develop comprehensive data models aligned with TM Forum guidelines for telecommunications domains such as Customer, Product, Service, Resource, and Partner management.
- Create data architectures that support telecom-specific use cases, including customer journey analytics, network performance optimization, fraud detection, and revenue assurance.
- Design solutions leveraging Microsoft Azure and Databricks for telecom data processing and analytics.
- Conduct technical discovery sessions with telecom clients to understand their OSS/BSS architecture, network analytics needs, customer experience requirements, and digital transformation objectives.
- Design and deliver proofs of concept (POCs) and technical demonstrations showcasing modern data platforms solving real-world telecommunications challenges.
- Create comprehensive architectural diagrams and implementation roadmaps for telecom data ecosystems spanning cloud, on-premises, and hybrid environments.
- Evaluate and recommend appropriate big data technologies, cloud platforms, and processing frameworks based on telecom-specific requirements and regulatory compliance needs.
- Design data governance frameworks compliant with telecom industry standards and regulatory requirements (GDPR, data localization, etc.).
- Stay current with the latest advancements in data technologies, including cloud services, data processing frameworks, and AI/ML capabilities.
- Contribute to the development of best practices, reference architectures, and reusable solution components to accelerate proposal development.

Required Skills:
- 6-10 years of experience in data architecture, data engineering, or solution architecture roles, with at least 5 years in the telecommunications industry.
- Deep knowledge of TM Forum frameworks, including SID, eTOM, and TAM, and their practical implementation in telecom data architectures.
- Demonstrated ability to estimate project efforts, resource requirements, and implementation timelines for complex telecom data initiatives.
- Hands-on experience building data models and platforms aligned with TM Forum standards and telecommunications business processes.
- Strong understanding of telecom OSS/BSS systems, network management, customer experience management, and revenue management domains.
- Hands-on experience with data platforms including Databricks and Microsoft Azure in telecommunications contexts.
- Experience with modern data processing frameworks such as Apache Kafka, Spark, and Airflow for real-time telecom data streaming.
- Proficiency in the Azure cloud platform and its data services, with an understanding of telecom-specific deployment requirements.
- Knowledge of system monitoring and observability tools for telecommunications data infrastructure.
- Experience implementing automated testing frameworks for telecom data platforms and pipelines.
- Familiarity with telecom data integration patterns, ETL/ELT processes, and data governance practices specific to telecommunications.
- Experience designing and implementing data lakes, data warehouses, and machine learning pipelines for telecom use cases.
- Proficiency in programming languages commonly used in data processing (Python, Scala, SQL), applied to the telecom domain.
- Understanding of telecommunications regulatory requirements and data privacy compliance (GDPR, local data protection laws).
- Excellent communication and presentation skills, with the ability to explain complex technical concepts to telecom stakeholders.
- Strong problem-solving skills and the ability to think creatively to address telecommunications industry challenges.
- TM Forum certifications or telecommunications industry certifications are good to have.
- Relevant data platform certifications, such as Databricks or Azure Data Engineer, are a plus.
- Willingness to travel as required.

Educational Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field.

Visit us:
https://kadellabs.com/
https://in.linkedin.com/company/kadel-labs
https://www.glassdoor.co.in/Overview/Working-at-Kadel-Labs-EI_IE4991279.11,21.htm

Posted 5 days ago

Apply

8.0 - 12.0 years

5 - 10 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Experience with migrations from Teradata to Snowflake and Databricks on Azure Cloud, including complex data migration projects to Databricks. Strong expertise in ETL pipeline design and optimization, particularly for cloud environments and large-scale data migration.

Posted 5 days ago

Apply

12.0 - 20.0 years

0 - 0 Lacs

Noida, Hyderabad, Bengaluru

Hybrid

We at EMIDS are hiring for a Sr. Data Architect role. Please find the details below and share your interest at aarati.pardhi@emids.com

Job Description: We are looking for a highly experienced Senior Data Architect with a strong background in big data technologies, cloud platforms, advanced analytics, and AI. The ideal candidate will lead the end-to-end design and architecture of scalable, high-performance data platforms using PySpark, Databricks, and major cloud platforms (Azure/AWS/GCP). A strong understanding of AI/ML pipeline integration and enterprise data strategy is essential.

Key Responsibilities:
- Experience building data and AI products.
- Lead data architecture design across modern data platforms using PySpark, Databricks, and cloud-native technologies.
- Define data models, data flows, and architecture blueprints aligned with business and analytical requirements.
- Architect and optimize big data pipelines and AI/ML workflows, ensuring performance, scalability, and reliability.
- Collaborate with business stakeholders, data scientists, and engineers to enable advanced analytics and predictive modeling capabilities.
- Design and implement data lakehouses, ingestion frameworks, and transformation layers.
- Provide technical leadership and mentoring to data engineers and developers.
- Drive adoption of data governance, security, and metadata management practices.
- Evaluate emerging technologies and recommend tools to support enterprise data strategies.

Posted 5 days ago

Apply

2.0 - 5.0 years

5 - 8 Lacs

Chennai

Hybrid

Role & responsibilities: We are seeking a skilled SQL Developer with strong experience in Databricks and Power BI to join our data engineering and analytics team. The ideal candidate will have a solid foundation in SQL development, hands-on experience with Databricks for data processing, and proficiency in creating insightful dashboards using Power BI.

Key Responsibilities:
- Design, develop, and optimize SQL queries, stored procedures, and data pipelines (an illustrative query follows this listing).
- Develop and maintain scalable data workflows using Azure Databricks.
- Integrate, transform, and consolidate data from various sources into data warehouses or lakes.
- Create, manage, and publish interactive dashboards and reports using Power BI.
- Work closely with data engineers, analysts, and business stakeholders to understand requirements and translate them into data solutions.
- Ensure data quality, integrity, and security in all deliverables.
- Troubleshoot performance issues and recommend solutions to optimize data processing and reporting performance.

Required Skills:
- Strong proficiency in SQL, including query optimization and data modeling.
- Hands-on experience with Databricks (preferably Azure Databricks).
- Proficiency in Power BI: dashboard creation, DAX, Power Query, and data visualization.
- Familiarity with ETL/ELT processes and tools.
- Experience with cloud platforms (preferably Azure).
- Understanding of data warehousing concepts and architecture.
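
The illustrative query referenced above: a Spark SQL window function selecting each customer's latest order, of the kind this role would write and tune in Databricks. Table and column names are assumptions.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-dev").getOrCreate()

# Latest order per customer via a window function; sales.orders is assumed
latest_orders = spark.sql("""
    SELECT customer_id, order_id, amount
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY customer_id
                   ORDER BY order_ts DESC
               ) AS rn
        FROM sales.orders
    ) ranked
    WHERE rn = 1
""")
latest_orders.show()
```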

Posted 5 days ago

Apply

6.0 - 9.0 years

22 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

We are looking for "Sr. Azure DevOps Engineer" with Minimum 6 years experience Contact- Atchaya (95001 64554) Required Candidate profile Exp in DevOps’s role, Data bricks, Terraform, Ansible, API Troubleshooting Azure platform issues. Snowflake provisioning and configuration skills

Posted 5 days ago

Apply

5.0 - 8.0 years

7 - 17 Lacs

Chennai

Work from Office

Key Responsibilities:
- Architect and implement end-to-end data solutions using Azure services (Data Factory, Databricks, Data Lake, Synapse, Cosmos DB, etc.).
- Design robust and scalable data models, including relational, dimensional, and NoSQL schemas.
- Develop and optimize ETL/ELT pipelines and data lakes using Azure Data Factory, Databricks, and open formats such as Delta and Iceberg.
- Integrate data governance, quality, and security best practices into all architecture designs.
- Support analytics and machine learning initiatives through structured data pipelines and platforms.
- Perform data manipulation and analysis using Pandas, NumPy, and related Python libraries.
- Develop and maintain high-performance REST APIs using FastAPI or Flask.
- Ensure data integrity, quality, and availability across various sources.
- Integrate data workflows with application components to support real-time or scheduled processes.
- Collaborate with data engineers, analysts, data scientists, and business stakeholders to align solutions with business needs.
- Drive CI/CD integration with Databricks using Azure DevOps and tools like DBT.
- Monitor system performance, troubleshoot issues, and optimize data infrastructure for efficiency and reliability.
- Communicate technical concepts effectively to non-technical stakeholders.

Required Skills & Experience:
- Extensive hands-on experience with Azure services: Data Factory, Databricks, Data Lake, Azure SQL, Cosmos DB, Synapse.
- Expertise in data modeling and design (relational, dimensional, NoSQL).
- Proven experience with ETL/ELT processes, data lakes, and modern lakehouse architectures.
- Solid experience with SQL Server or any major RDBMS; ability to write complex queries and stored procedures.
- 3+ years of experience with Azure Data Factory, Azure Databricks, and PySpark.
- Strong programming skills in Python, with a solid understanding of Pandas and NumPy.
- Proven experience building REST APIs.
- Good knowledge of data formats (JSON, Parquet, Avro) and API communication patterns.
- Strong knowledge of data governance, security, and compliance frameworks.
- Experience with CI/CD, Azure DevOps, and infrastructure as code (Terraform or ARM templates).
- Familiarity with BI and analytics tools such as Power BI or Tableau.
- Strong problem-solving skills and attention to performance, scalability, and security.
- Excellent written and verbal communication skills.

Preferred Qualifications:
- Experience in regulated industries (finance, healthcare, etc.).
- Familiarity with data cataloging, metadata management, and machine learning integration.
- Leadership experience guiding teams and presenting architectural strategies to leadership.

Posted 5 days ago

Apply

7.0 - 12.0 years

20 - 27 Lacs

Bengaluru

Work from Office

TECHNICAL SKILLS AND EXPERIENCE

Most important:
- 7+ years of professional experience as a data engineer, with at least 4 utilizing cloud technologies.
- Proven experience building ETL or ELT data pipelines with Databricks, in either Azure or AWS, using PySpark.
- Strong experience with the Microsoft Azure data stack (Databricks, Data Lake Gen2, ADF, etc.).
- Strong SQL skills and proficiency in Python, adhering to standards such as PEP 8.
- Proven experience with unit testing and applying appropriate testing methodologies using libraries such as Pytest, Great Expectations, or similar (see the sketch after this listing).
- Demonstrable experience with CI/CD, including release and test automation tools and processes such as Azure DevOps, Terraform, PowerShell, and Bash scripting or similar.
- Strong understanding of data modeling, data warehousing, and OLAP concepts.
- Excellent technical documentation skills.

Preferred candidate profile
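
The testing sketch referenced above: a pytest module with a local SparkSession fixture and one unit test of a small transformation, in the spirit of the Pytest requirement. The function and column names are assumptions.

```python
import pytest
from pyspark.sql import SparkSession, functions as F

@pytest.fixture(scope="session")
def spark():
    # Small local session so tests run without a cluster
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def add_total(df):
    # Transformation under test: line total = price * qty
    return df.withColumn("total", F.col("price") * F.col("qty"))

def test_add_total(spark):
    df = spark.createDataFrame([(2.0, 3), (1.5, 4)], ["price", "qty"])
    totals = [row.total for row in add_total(df).collect()]
    assert totals == [6.0, 6.0]
```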

Posted 5 days ago

Apply