
1301 Databricks Jobs - Page 32

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 10.0 years

15 - 30 Lacs

Bengaluru

Remote

Job Requirement for Offshore Data Engineer (with ML expertise)
Work Mode: Remote
Base Location: Bengaluru
Experience: 5+ Years

Technical Skills & Expertise:
- PySpark & Apache Spark: Extensive experience with PySpark and Spark for big data processing and transformation. Strong understanding of Spark architecture, optimization techniques, and performance tuning. Ability to run Spark jobs in distributed computing environments such as Databricks.
- Data Mining & Transformation: Hands-on experience designing and implementing data mining workflows. Expertise in data transformation processes, including ETL (Extract, Transform, Load) pipelines. Experience with large-scale data ingestion, aggregation, and cleaning.
- Programming Languages: Proficient in Python for data engineering tasks, including libraries such as Pandas and NumPy. Scala proficiency is preferred for Spark job development.
- Big Data Concepts: In-depth knowledge of big data frameworks and paradigms, such as distributed file systems, parallel computing, and data partitioning.
- Big Data Technologies: Experience with NoSQL databases like Cassandra and distributed storage systems like Hadoop. Proficiency with Hive for data warehousing and querying. Experience with Beam architecture and other ETL tools for large-scale data workflows.
- Cloud Technologies (GCP): Expertise in Google Cloud Platform, including core services like Cloud Storage, BigQuery, and Dataflow. Experience with Dataflow jobs for batch and stream processing. Familiarity with managing workflows using Airflow for task scheduling and orchestration in GCP.
- Machine Learning & AI: Familiarity with Generative AI and its applications in ML pipelines. Knowledge of basic ML model building using tools like Pandas, NumPy, and visualization with Matplotlib. Experience managing end-to-end MLOps pipelines for deploying models to production, particularly LLM (Large Language Model) deployments. Understanding of, and experience building, pipelines on Retrieval-Augmented Generation (RAG) architecture to improve model performance and output.

Tech stack: Spark, PySpark, Python, Scala, GCP Dataflow, Cloud Composer (Airflow), ETL, Databricks, Hadoop, Hive, GenAI, basic ML modeling, MLOps, LLM deployment, RAG.
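For context, this is the kind of PySpark batch transformation the role above describes: ingest, clean, aggregate, and write partitioned output. A minimal sketch only; the GCS paths, columns, and table layout are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-example").getOrCreate()

# Ingest raw events from a hypothetical Cloud Storage path.
raw = spark.read.json("gs://example-bucket/raw/events/")

# Clean: deduplicate and drop rows with no timestamp.
cleaned = (
    raw.dropDuplicates(["event_id"])
       .filter(F.col("event_ts").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
)

# Aggregate: daily counts per event type.
daily_counts = cleaned.groupBy("event_date", "event_type").agg(
    F.count("*").alias("events"),
    F.countDistinct("user_id").alias("unique_users"),
)

# Partitioned write for downstream BigQuery ingestion or Hive querying.
daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
    "gs://example-bucket/curated/daily_counts/"
)
```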

Posted 1 month ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Bengaluru

Remote

Job Title: Senior Machine Learning Engineer
Work Mode: Remote
Base Location: Bengaluru
Experience: 5+ Years

- Strong problem-solving skills and ability to work in a fast-paced, collaborative environment.
- Strong programming skills in Python and experience with ML frameworks.
- Proficiency in containerization (Docker) and orchestration (Kubernetes) technologies.
- Solid understanding of CI/CD principles and tools (e.g., Jenkins, GitLab CI, GitHub Actions).
- Knowledge of data engineering concepts and experience building data pipelines.
- Strong understanding of compute, storage, and orchestration resources on cloud platforms.
- Deploying and managing ML models, especially on GCP services such as Cloud Run, Cloud Functions, and Vertex AI (though cloud-platform agnostic).
- Implementing MLOps best practices, including model version tracking, governance, and monitoring for performance degradation and drift.
- Creating and using benchmarks, metrics, and monitoring to measure and improve services.
- Collaborating with data scientists and engineers to integrate ML workflows from onboarding to decommissioning.
- Experience with MLOps tools like Kubeflow, MLflow, and Data Version Control (DVC).
- Managing ML models on any of AWS (SageMaker), Azure (Machine Learning), or GCP (Vertex AI).

Tech Stack: AWS, GCP, or Azure experience (GCP preferred). PySpark experience required; Databricks is good to have. ML experience, Docker, and Kubernetes.
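To illustrate the experiment-tracking side of the MLOps work listed above, here is a minimal sketch using MLflow (one of the named tools). The model, dataset, parameter, and run name are hypothetical placeholders, not part of the posting.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf-baseline"):
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)

    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)        # tracked over runs to spot degradation
    mlflow.sklearn.log_model(model, "model")  # versioned artifact for later deployment
```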

Posted 1 month ago

Apply

2.0 - 7.0 years

8 - 12 Lacs

Hyderabad

Work from Office

About The Role

Role Purpose: Design, test, and maintain software programs for operating systems or applications deployed at the client end, and ensure they meet 100% quality assurance parameters.

Do:

1. Be instrumental in understanding the requirements and design of the product/software:
- Develop software solutions by studying information needs, systems flow, data usage, and work processes
- Investigate problem areas across the software development life cycle
- Facilitate root cause analysis of system issues and problem statements
- Identify ideas to improve system performance and availability
- Analyze client requirements and convert them to feasible designs
- Collaborate with functional teams or systems analysts who carry out the detailed investigation into software requirements
- Confer with project managers to obtain information on software capabilities

2. Perform coding and ensure optimal software/module development:
- Determine operational feasibility by evaluating analysis, problem definition, requirements, and proposed software
- Develop and automate processes for software validation by setting up and designing test cases/scenarios/usage cases, and executing them
- Modify software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces
- Analyze information to recommend and plan the installation of new systems or modification of existing ones
- Ensure code is error free, with no bugs or test failures
- Prepare reports on programming project specifications, activities, and status
- Ensure all issues are raised as per the norms defined for the project/program/account, with clear descriptions and replication patterns
- Compile timely, comprehensive, and accurate documentation and reports as requested
- Coordinate with the team on daily project status and progress, and document it
- Provide feedback on usability and serviceability, trace results to quality risks, and report them to the concerned stakeholders

3. Status reporting and customer focus on an ongoing basis with respect to the project and its execution:
- Capture all requirements and clarifications from the client for better quality work
- Take feedback regularly to ensure smooth and on-time delivery
- Participate in continuing education and training to remain current on best practices, learn new programming languages, and better assist other team members
- Consult with engineering staff to evaluate software-hardware interfaces and develop specifications and performance requirements
- Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code
- Formally document necessary details and reports for proper understanding of the software, from client proposal to implementation
- Ensure good quality of interaction with the customer w.r.t. e-mail content, fault report tracking, voice calls, business etiquette, etc.
- Respond to customer requests on time, with no instances of complaints either internally or externally

Deliver (performance parameters and measures):
1. Continuous integration, deployment & monitoring of software - 100% error-free onboarding and implementation, throughput %, adherence to the schedule/release plan
2. Quality & CSAT - on-time delivery, manage software, troubleshoot queries, customer experience, completion of assigned certifications for skill upgradation
3. MIS & reporting - 100% on-time MIS and report generation

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention - of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 1 month ago

Apply

3.0 - 5.0 years

15 - 22 Lacs

Bengaluru

Remote

Role & responsibilities:
- Design real-time data pipelines for structured and unstructured sources.
- Collaborate with analysts and data scientists to create impactful data solutions.
- Continuously improve data infrastructure based on team feedback.
- Take full ownership of complex data problems and iterate quickly.
- Promote strong documentation and engineering best practices.
- Monitor, detect, and fix data quality issues with custom tools.

Preferred candidate profile:
- Experience with big data tools like Spark, Hadoop, Hive, and Kafka.
- Proficient in SQL and working with relational databases.
- Hands-on experience with cloud platforms (AWS, GCP, or Azure).
- Familiar with workflow tools like Airflow.
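A minimal sketch of the real-time pipeline pattern this role centers on: Spark Structured Streaming reading from Kafka. The broker address, topic, and event schema are hypothetical; a production job would write to a durable sink rather than the console.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("stream-example").getOrCreate()

schema = StructType([
    StructField("user_id", StringType()),
    StructField("action", StringType()),
    StructField("ts", TimestampType()),
])

# Parse JSON payloads from a hypothetical Kafka topic.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "user-events")
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

# Windowed counts with a watermark to bound late data.
query = (
    events.withWatermark("ts", "10 minutes")
          .groupBy(F.window("ts", "5 minutes"), "action")
          .count()
          .writeStream.outputMode("update")
          .format("console")
          .start()
)
query.awaitTermination()
```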

Posted 1 month ago

Apply

10.0 - 15.0 years

11 - 15 Lacs

Hyderabad, Coimbatore

Work from Office

Azure + SQL + ADF + Databricks + design + architecture (mandatory).
- 10+ years of total experience in the data management area, with Azure cloud data platform experience.
- Architect with the Azure stack (ADLS, AALS, Azure Databricks, Azure Stream Analytics, Azure Data Factory, Cosmos DB, and Azure Synapse); mandatory expertise in Azure Stream Analytics, Databricks, Azure Synapse, and Azure Cosmos DB.
- Must have worked on a large Azure data platform and dealt with high-volume Azure Stream Analytics workloads.
- Experience designing cloud data platform architecture and large-scale environments.
- 5+ years of experience architecting and building cloud data lakes (specifically with Azure data analytics technologies and architecture) and enterprise analytics solutions, and optimising real-time 'big data' pipelines, architectures, and data sets.

Posted 1 month ago

Apply

3.0 - 6.0 years

3 - 6 Lacs

Chennai

Work from Office

Mandatory Skills: Azure DevOps, CI/CD pipelines, Kubernetes, Docker, cloud tech stack, ADF, Spark, Databricks, Jenkins, building Java-based applications, Java Web, Git, J2EE.
- Design and develop automated deployment arrangements by leveraging configuration management technology.
- Implement various development, testing, and automation tools, and IT infrastructure.
- Select and deploy appropriate CI/CD tools.

Required Candidate profile: Experience implementing development, testing, and automation tools and IT infrastructure, and selecting and deploying appropriate CI/CD tools.

Posted 1 month ago

Apply

6.0 - 10.0 years

11 - 21 Lacs

Bengaluru

Work from Office

Design and implement scalable data ingestion and transformation pipelines using Databricks and AWS. Develop and optimize ETL/ELT workflows in PySpark and Spark SQL, ensuring performance and reliability, and use CI/CD tools.

Required Candidate profile: 6 to 9 years of experience, with a minimum of 4+ years on Databricks and AWS. Design and develop scalable ETL/ELT pipelines with PySpark, Spark SQL, and Python. Immediate joiners or up to a 30-day notice period required.

Posted 1 month ago

Apply

3.0 - 6.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Skills: Microsoft Azure, Hadoop, Spark, Databricks, Airflow, Kafka, PySpark.

Requirements:
- Experience working with distributed technology tools for developing batch and streaming pipelines using SQL, Spark, Python, Airflow, Scala, and Kafka.
- Experience in cloud computing, e.g., AWS, GCP, Azure, etc.
- Able to quickly pick up new programming languages, technologies, and frameworks.
- Strong skills in building positive relationships across Product and Engineering.
- Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders.
- Experience creating/configuring Jenkins pipelines for a smooth CI/CD process for managed Spark jobs, building Docker images, etc.
- Working knowledge of data warehousing, data modelling, governance, and data architecture.
- Experience working with data platforms, including EMR, Airflow, and Databricks (Data Engineering & Delta Lake components).
- Experience working in Agile and Scrum development processes.
- Experience with EMR/EC2, Databricks, etc.
- Experience working with data warehousing tools, including SQL databases, Presto, and Snowflake.
- Experience architecting data products on streaming, serverless, and microservices architectures and platforms.
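For the batch-orchestration side of this role, here is a minimal Airflow DAG sketch (Airflow 2.x style). The task bodies, DAG id, and schedule are hypothetical placeholders; real tasks would submit Spark jobs or call platform operators.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source")

def transform():
    print("clean and aggregate")

with DAG(
    dag_id="daily_batch_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ argument; older versions use schedule_interval
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2  # extract runs before transform
```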

Posted 1 month ago

Apply

3.0 - 7.0 years

8 - 12 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

About Us
At SentinelOne, we're redefining cybersecurity by pushing the limits of what's possible - leveraging AI-powered, data-driven innovation to stay ahead of tomorrow's threats. From building industry-leading products to cultivating an exceptional company culture, our core values guide everything we do. We're looking for passionate individuals who thrive in collaborative environments and are eager to drive impact. If you're excited about solving complex challenges in bold, innovative ways, we'd love to connect with you.

Who are we?
The Data team is tasked with providing a world-class data platform that enables unrivalled cost, performance, and scalability for SentinelOne and our customers. The exponential growth in volumes of data, users of data, and types of data calls for a new, modern architecture that addresses the data requirements of enterprise organizations. Help us get this platform into the hands of customers and support them in their mission to affordably collect and retain their most critical asset: data. SentinelOne is shaping the converged future of security and data through its unified data platform. This is a unique opportunity to operate in an emerging, startup-like environment within SentinelOne to build and scale our data business beyond just security use cases.

What are we looking for?
We are looking for a team member who puts the customer first and is passionate about solving problems with creativity, compassion, and technical acumen. You will need to bring a combination of technical, business, strategic, and problem-solving skills to the team, supporting pre-sales efforts and acting as a data subject matter expert for the larger SentinelOne team. We are looking for an individual who is smart, passionate about data, and who brings a sense of joy and teamwork to everything they do. As a Sr. Solutions Engineer, you will illustrate SentinelOne's value to prospective customers. We need a self-starter who excels in a high-paced startup environment and thrives on pitching revolutionary technology to many areas of an organisation, including C-level executives, security engineers, IT operations, DevOps, and Engineering professionals. They should be willing to "wear many hats" and step up and drive solutions to problems related to external and internal needs. This individual will be instrumental in accelerating our sales and strategic initiatives and growing SentinelOne.

What skills and knowledge should you bring?
- 5+ years of experience as a Solutions (Sales) Engineer or Architect.
- BS/BA degree or equivalent technical experience is desired, but we love a well-rounded candidate with a broad range of interests and talents.
- Strong background with big data platforms (Cassandra, Hadoop, etc.), data lakes (Snowflake, Databricks), streaming analytics (Kafka), log management (ElasticSearch, SumoLogic, etc.), or SIEM (Splunk, Devo, QRadar, Exabeam, etc.).
- Some code-writing proficiency is desired (C/C++, Shell, Perl, Python).
- Experience with RegEx and writing parsers.
- Background in cloud providers (AWS, Azure, Google) and technologies such as Kubernetes.
- Ability to demonstrate product value and use cases, both customer-specific and generic.
- Demonstrable experience in objection handling and positioning against competitive or alternative technologies, including how to transition to new data pipelines.
- Concise written and oral communication skills to effectively lead business and technical presentations, demonstrations, and conversations with both executive and technical audiences. Fluency in English is required.
- Demonstrable experience successfully selling to mid-to-large customers and working across an organisation to get technical buy-in and acceptance.
- Drive the evaluation/POC through a defined process; provide timely consultation and build a strong relationship with the technical buyer or champion.
- Provide first-level technical support throughout the sales process, with involvement as it is transitioned to customer success.
- Availability to travel to visit prospects and customers (usually no more than 20-25%, and as required).

What will you do?
The principal responsibility of this position is to generate revenue from Strategic Accounts across the region by following up on multiple lead sources, developing new clients, and selling directly to customers while leveraging our channel community. In this position, you will:
- Run a sophisticated sales process from prospecting to closure.
- Partner with our channel team to drive both net new and recurring revenue.
- Partner with channel managers to build pipeline and grow the assigned territory.
- Become an insider within the cybersecurity industry and an expert in SentinelOne products.
- Stay well educated and informed about SentinelOne's competitive landscape and how to sell the value of our solutions and services compared to the relevant competitors in the next-generation endpoint market space.
- Consistently meet or exceed sales quotas.

Why us?
You will be joining a cutting-edge company where you will tackle extraordinary challenges and work with the very best in the industry. Benefits include health insurance, industry-leading gender-neutral parental leave, paid company holidays, paid sick time, an employee stock purchase program, an employee assistance program, gym membership reimbursement, wifi/cell phone reimbursement, and numerous company-sponsored events, including regular happy hours and team-building events.

SentinelOne is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. SentinelOne participates in the E-Verify Program for all U.S.-based roles.
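Since the role above asks for RegEx and parser-writing experience, here is a minimal sketch of parsing a syslog-style line into structured fields. The log line and field names are hypothetical, not taken from any SentinelOne product.

```python
import re

LINE = "2024-05-01T12:34:56Z host01 sshd[912]: Failed password for admin from 203.0.113.7"

# Named groups make the extracted fields self-describing.
PATTERN = re.compile(
    r"(?P<ts>\S+)\s+(?P<host>\S+)\s+(?P<proc>\w+)\[(?P<pid>\d+)\]:\s+(?P<msg>.*)"
)

match = PATTERN.match(LINE)
if match:
    event = match.groupdict()
    print(event["host"], event["proc"], event["msg"])
```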

Posted 1 month ago

Apply

3.0 - 4.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Location: Remote. Employment Type: Full-Time. Experience: 3+ Years.

About Us: We're a fast-growing company driven by data. We're looking for a skilled and enthusiastic Junior Data Engineer to join our team and help us shape the future of our data infrastructure. This is a fully remote role - work from wherever you're most productive. If you're passionate about data and eager to make a real impact, we want to hear from you.

About the Role: As a Junior Data Engineer, you'll be a key player in our data engineering efforts. You'll work hands-on, collaborate with a talented team, and contribute directly to the development and maintenance of our data pipelines and infrastructure. This role offers a unique opportunity to learn, grow, and make a tangible difference in how we leverage data.

What You'll Do:
- Design and build robust data pipelines using tools like Databricks, Spark, and PySpark.
- Develop and maintain our data warehouse and data models, ensuring they meet the needs of our analytics and operations teams.
- Dive into data transformation and processing with SQL and Python.
- Partner with engineers, analysts, and stakeholders across the company to understand their data needs and deliver effective solutions.
- Maintain clean and organized code using Git.
- Contribute to our ongoing efforts to improve data quality and ensure data integrity.

What You'll Need:
- 3+ years of experience in data engineering.
- A solid understanding of cloud platforms like AWS or Azure.
- Strong skills in Python, SQL, Spark, and PySpark.
- Practical experience with cloud-based ETL tools.
- A genuine passion for problem-solving and a desire to learn and grow.
- Excellent communication skills and a collaborative spirit.

Bonus Points:
- Experience with DevOps tools (Docker, Terraform, Airflow, GitHub Actions - the more the merrier).
- Familiarity with CI/CD pipelines and infrastructure as code.
- A knack for optimizing workflows and boosting performance.

What We Offer:
- 100% remote work - work from anywhere!
- A supportive and collaborative team environment.
- Opportunities for professional development and growth.
- A competitive salary and benefits package.
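The data-quality work mentioned above often boils down to simple, automated integrity checks. A minimal PySpark sketch under assumed table and column names (the path, "order_id", and the 1% threshold are hypothetical):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-check").getOrCreate()
df = spark.read.parquet("/data/curated/orders/")  # hypothetical curated table

total = df.count()
null_ids = df.filter(F.col("order_id").isNull()).count()
dupes = total - df.dropDuplicates(["order_id"]).count()

# Fail fast if integrity rules are violated, so bad data never reaches consumers.
assert null_ids == 0, f"{null_ids} rows missing order_id"
assert dupes / max(total, 1) < 0.01, f"duplicate rate too high: {dupes}/{total}"
```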

Posted 1 month ago

Apply

4.0 - 6.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Overview: We are seeking a highly motivated Data Analyst with strong technical and analytical skills to join our ADAS (Advanced Driver Assistance Systems) team. This role involves working with large-scale data from vehicle systems to drive insights, support data science initiatives, and contribute to the development of safer and smarter automotive technologies.

Responsibilities:
- Perform data cleansing, aggregation, and analysis on large, complex datasets related to ADAS components and systems.
- Build, maintain, and update dashboards and data visualizations to communicate insights effectively (Power BI preferred).
- Develop and optimize data pipelines and ETL processes.
- Create and maintain technical documentation, including data catalogs and process documentation.
- Collaborate with cross-functional teams including data scientists, software engineers, and system engineers.
- Contribute actively to the internal data science community by sharing knowledge, tools, and best practices.
- Work independently on assigned projects, managing priorities and delivering results in a dynamic, unstructured environment.

Required Qualifications:
- Bachelor's degree or higher in Computer Science, Data Science, or a related field.
- Minimum 3 years of experience in the IT industry, with at least 2 years in data analytics or data engineering roles.
- Proficient in Python or PySpark, with solid software development fundamentals.
- Strong experience with SQL and relational databases.
- Hands-on experience with data science, data engineering, or machine learning techniques.
- Knowledge of data modeling, data warehousing concepts, and ETL processes.
- Familiarity with data visualization tools (Power BI preferred).
- Basic understanding of cloud platforms such as Azure or AWS.
- Fundamental knowledge of ADAS functionalities is a plus.
- Strong problem-solving skills, a self-driven attitude, and the ability to manage projects independently.

Preferred Skills:
- Experience with automotive data or sensor data (e.g., radar, lidar, cameras).
- Familiarity with agile development methodologies.
- Understanding of big data tools and platforms such as Databricks or Spark.

Works in the area of Software Engineering, which encompasses the development, maintenance, and optimization of software solutions/applications:
1. Applies scientific methods to analyse and solve software engineering problems.
2. Is responsible for the development and application of software engineering practice and knowledge in research, design, development, and maintenance.
3. Exercises original thought and judgement, and supervises the technical and administrative work of other software engineers.
4. Builds the skills and expertise of their software engineering discipline to reach the standard software engineer skills expectations for the applicable role, as defined in Professional Communities.
5. Collaborates and acts as a team player with other software engineers and stakeholders.

Grade Specific: Is fully competent in own area and has a deep understanding of related programming concepts, software design, and software development principles. Works autonomously with minimal supervision. Able to act as a key contributor in a complex environment and lead the activities of a team for software design and development. Acts proactively to understand internal/external client needs and offers advice even when not asked. Able to assess and adapt to project issues, formulate innovative solutions, work under pressure, and drive the team to succeed against its technical and commercial goals. Aware of profitability needs and may manage costs for a specific project/work area. Explains difficult concepts to a variety of audiences to ensure meaning is understood. Motivates other team members and creates informal networks with key contacts outside own area.

Skills (competencies): Verbal Communication
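A minimal pandas sketch of the cleansing-and-aggregation work this analyst role describes: drop incomplete rows, normalize timestamps, and roll events up per vehicle per day for a dashboard. The file name and columns are hypothetical.

```python
import pandas as pd

df = pd.read_csv("adas_events.csv")  # hypothetical export of vehicle events

# Cleanse: drop incomplete rows and normalize the event timestamp.
df = df.dropna(subset=["vehicle_id", "event_type"])
df["ts"] = pd.to_datetime(df["ts"], errors="coerce")
df = df.dropna(subset=["ts"])

# Aggregate: events per vehicle per day, ready for a Power BI dashboard.
daily = (
    df.set_index("ts")
      .groupby("vehicle_id")
      .resample("D")["event_type"]
      .count()
      .rename("event_count")
      .reset_index()
)
print(daily.head())
```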

Posted 1 month ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Bengaluru

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application development aligns with business objectives, overseeing project timelines, and facilitating communication among stakeholders to drive project success. You will also engage in problem-solving activities, providing guidance and support to your team while ensuring that best practices are followed throughout the development process. Your role will be pivotal in shaping the direction of application projects and ensuring that they meet the needs of the organization and its clients.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate training and development opportunities for team members to enhance their skills.
- Monitor project progress and implement necessary adjustments to ensure timely delivery.

Professional & Technical Skills:
- Must-have skills: Proficiency in the Databricks Unified Data Analytics Platform.
- Good-to-have skills: Experience with cloud computing platforms such as AWS or Azure.
- Strong understanding of data engineering principles and practices.
- Experience in application development using modern programming languages.
- Familiarity with Agile methodologies and project management tools.

Additional Information:
- The candidate should have a minimum of 5 years of experience with the Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Posted 1 month ago

Apply

6.0 - 7.0 years

14 - 18 Lacs

Bengaluru

Work from Office

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Responsibilities:
- Build data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, Hive, HBase, or other NoSQL databases on the Azure Cloud Data Platform or HDFS.
- Develop efficient software code for multiple use cases, leveraging the Spark Framework with Python or Scala and big data technologies, for various use cases built on the platform.
- Develop streaming pipelines.
- Work with Hadoop/Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- 6-7+ years of total experience in data management (DW, DL, data platform, Lakehouse) and data engineering.
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on cloud data platforms on Azure.
- Experience in Databricks, Azure HDInsight, Azure Data Factory, Synapse, and SQL Server DB.
- Good to excellent SQL skills.

Preferred technical and professional experience: Certification in Azure and Databricks, or Cloudera Spark certified developers.

Posted 1 month ago

Apply

5.0 - 7.0 years

14 - 18 Lacs

Bengaluru

Work from Office

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Responsibilities:
- Build data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, Hive, HBase, or other NoSQL databases on the Azure Cloud Data Platform or HDFS.
- Develop efficient software code for multiple use cases, leveraging the Spark Framework with Python or Scala and big data technologies, for various use cases built on the platform.
- Develop streaming pipelines.
- Work with Hadoop/Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- 5-7+ years of total experience in data management (DW, DL, data platform, Lakehouse) and data engineering.
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on cloud data platforms on Azure.
- Experience in Databricks, Azure HDInsight, Azure Data Factory, Synapse, and SQL Server DB.
- Exposure to streaming solutions and message brokers like Kafka.
- Experience with Unix/Linux commands and basic work experience in shell scripting.

Preferred technical and professional experience: Certification in Azure and Databricks, or Cloudera Spark certified developers.

Posted 1 month ago

Apply

4.0 - 9.0 years

12 - 16 Lacs

Kochi

Work from Office

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities:
- Build data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, Scala, Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS.
- Develop efficient software code for multiple use cases, leveraging the Spark Framework with Python or Scala and big data technologies, for various use cases built on the platform.
- Develop streaming pipelines.
- Work with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on cloud data platforms on AWS.
- Experience in AWS EMR, AWS Glue, or Databricks, plus AWS Redshift and DynamoDB.
- Good to excellent SQL skills.
- Exposure to streaming solutions and message brokers like Kafka.

Preferred technical and professional experience: Certification in AWS and Databricks, or Cloudera Spark certified developers.

Posted 1 month ago

Apply

7.0 - 12.0 years

14 - 18 Lacs

Mumbai

Work from Office

Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
- 7+ years of total experience in data engineering projects, with 4+ years of relevant experience in Azure technology services and Python.
- Azure: Azure Data Factory, ADLS (Azure Data Lake Store), Azure Databricks.
- Mandatory programming languages: PySpark, PL/SQL, Spark SQL.
- Database: SQL DB.
- Experience with Azure ADLS, Databricks, Stream Analytics, SQL DW, Cosmos DB, Analysis Services, Azure Functions, serverless architecture, and ARM templates.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with object-oriented/object-function scripting languages: Python, SQL, Scala, Spark SQL, etc.
- Data warehousing experience with strong domain knowledge.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- An intuitive individual with an ability to manage change and proven time management.
- Proven interpersonal skills, contributing to team effort by accomplishing related results as needed.
- Up-to-date technical knowledge, maintained by attending educational workshops and reviewing publications.

Preferred technical and professional experience: Experience with the Azure services, relational/NoSQL databases, and scripting languages listed above.

Posted 1 month ago

Apply

3.0 - 5.0 years

7 - 12 Lacs

Pune

Work from Office

- Identify business problems, understand the customer issue, and fix it.
- Evaluate recurring issues and work on permanent solutions.
- Focus on service improvement.
- Troubleshoot technical issues and design flaws.
- Work both individually and on a team to deliver work on time.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- BE/B.Tech in any stream, or M.Sc. (Computer Science/IT) / M.C.A., with a minimum of 3-5 years of experience.
- Expert in Azure IaaS, PaaS, and SaaS services, with hands-on experience in all of the following: VM, Storage Account, Load Balancer, Application Gateway, VNET, Route Table, Azure Bastion, Disaster Recovery, Backup, NSG, Azure Update Manager, Key Vault, etc.
- Azure Web App, Function App, Logic App, AKS (Azure Kubernetes Service) and containerization, Docker, Event Hub, Redis Cache, service mesh and Istio, App Insights, Databricks, AD, DNS, Log Analytics Workspace, ARO (Azure Red Hat OpenShift).
- Orchestration and containerization: Docker, Kubernetes, Red Hat OpenShift.
- Security management: firewall management, FortiGate firewall.

Preferred technical and professional experience:
- Monitoring through cloud-native tools (CloudWatch, CloudTrail, Azure Monitor, Activity Log, vROps, and Log Insight).
- Server monitoring and management (Windows, Linux, AIX, AWS Linux, Ubuntu Linux).
- Storage monitoring and management (Blob, S3, EBS, backups, recovery, snapshots).

Posted 1 month ago

Apply

5.0 - 7.0 years

12 - 16 Lacs

Bengaluru

Work from Office

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities:
- Build data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, Scala, Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS.
- Develop efficient software code for multiple use cases, leveraging the Spark Framework with Python or Scala and big data technologies, for various use cases built on the platform.
- Develop streaming pipelines.
- Work with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- 5-7+ years of total experience in data management (DW, DL, data platform, Lakehouse) and data engineering.
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on cloud data platforms on AWS.
- Exposure to streaming solutions and message brokers like Kafka.
- Experience in AWS EMR, AWS Glue, or Databricks, plus AWS Redshift and DynamoDB.
- Good to excellent SQL skills.

Preferred technical and professional experience:
- Certification in AWS and Databricks, or Cloudera Spark certified developers.
- AWS S3, Redshift, and EMR for data storage and distributed processing.
- AWS Lambda, AWS Step Functions, and AWS Glue for building serverless, event-driven data workflows and orchestrating ETL processes.
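A minimal sketch of the serverless, event-driven pattern named in the preferred experience above: a Lambda handler starting a Glue ETL job via boto3. The Glue job name and argument are hypothetical.

```python
import boto3

glue = boto3.client("glue")

def handler(event, context):
    # Triggered e.g. by an S3 put event; start the downstream ETL job.
    run = glue.start_job_run(
        JobName="curate-daily-orders",  # hypothetical Glue job name
        Arguments={"--input_date": str(event.get("date", "latest"))},
    )
    return {"job_run_id": run["JobRunId"]}
```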

Posted 1 month ago

Apply

3.0 - 5.0 years

10 - 15 Lacs

Bengaluru

Work from Office

- Identify business problems, understand the customer issue, and fix it.
- Evaluate recurring issues and work on permanent solutions.
- Focus on service improvement.
- Troubleshoot technical issues and design flaws.
- Work both individually and on a team to deliver work on time.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- BE/B.Tech in any stream, or M.Sc. (Computer Science/IT) / M.C.A., with a minimum of 3-5 years of experience.
- Expert in Azure IaaS, PaaS, and SaaS services, with hands-on experience in all of the following: VM, Storage Account, Load Balancer, Application Gateway, VNET, Route Table, Azure Bastion, Disaster Recovery, Backup, NSG, Azure Update Manager, Key Vault, etc.
- Azure Web App, Function App, Logic App, AKS (Azure Kubernetes Service) and containerization, Docker, Event Hub, Redis Cache, service mesh and Istio, App Insights, Databricks, AD, DNS, Log Analytics Workspace, ARO (Azure Red Hat OpenShift).
- Orchestration and containerization: Docker, Kubernetes, Red Hat OpenShift.
- Security management: firewall management, FortiGate firewall.

Preferred technical and professional experience:
- Monitoring through cloud-native tools (CloudWatch, CloudTrail, Azure Monitor, Activity Log, vROps, and Log Insight).
- Server monitoring and management (Windows, Linux, AIX, AWS Linux, Ubuntu Linux).
- Storage monitoring and management (Blob, S3, EBS, backups, recovery, snapshots).

Posted 1 month ago

Apply

5.0 - 10.0 years

14 - 18 Lacs

Bengaluru

Work from Office

As a Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding on the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- We are seeking a skilled Azure Data Engineer with 5+ years of experience, including 3+ years of hands-on experience with ADF/Databricks.
- The ideal candidate has Databricks, Data Lake, and Python programming skills, along with experience deploying to Databricks.
- Familiarity with Azure Data Factory.

Preferred technical and professional experience:
- Good communication skills.
- 3+ years of experience with ADF, Databricks, and Data Lake.
- Ability to communicate results to technical and non-technical audiences.

Posted 1 month ago

Apply

1.0 - 3.0 years

3 - 7 Lacs

Chennai

Hybrid

- Strong experience in Python.
- Good experience in Databricks.
- Experience working on the AWS/Azure cloud platforms.
- Experience working with REST APIs and services, and messaging and event technologies.
- Experience with ETL or data pipeline tools.
- Experience with streaming platforms such as Kafka.
- Demonstrated experience working with large and complex data sets.
- Ability to document data pipeline architecture and design.
- Experience in Airflow is nice to have.
- Ability to build complex Delta Lake solutions.
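The Delta Lake work mentioned above typically centers on merge (upsert) logic. A minimal sketch assuming the delta-spark package is installed and Spark is configured for Delta; the paths, table, and join key are hypothetical.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-merge").getOrCreate()

updates = spark.read.json("/landing/customers/")       # incoming batch (hypothetical path)
target = DeltaTable.forPath(spark, "/lake/customers")  # existing Delta table

# Upsert: update matching rows, insert new ones.
(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```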

Posted 1 month ago

Apply

4.0 - 9.0 years

2 - 6 Lacs

Bengaluru

Work from Office

Roles and Responsibilities:
- 4+ years of experience as a data developer using Python.
- Knowledge of Spark and PySpark is preferable but not mandatory.
- Azure cloud experience preferred (alternate cloud experience is fine); preferred experience with the Azure platform, including Azure Data Lake, Databricks, and Data Factory.
- Working knowledge of different file formats such as JSON, Parquet, CSV, etc.
- Familiarity with data encryption and data masking.
- Database experience in SQL Server is preferable; experience with NoSQL databases like MongoDB is preferred.
- A team player: reliable, self-motivated, and self-disciplined.

Posted 1 month ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Mumbai

Work from Office

Roles & Responsibilities:
- 5+ years of hands-on experience in Azure cloud development (ADF + Databricks) - mandatory.
- Strong in Azure SQL; knowledge of Synapse/Analytics is good to have.
- Experience working on Agile projects and familiarity with Scrum/SAFe ceremonies.
- Good communication skills, written and verbal; can work directly with the customer.
- Ready to work in the 2nd shift; flexible.
- Defines, designs, develops, and tests software components/applications using Microsoft Azure: Databricks, ADF, ADL, Hive, Python, Spark SQL, PySpark.
- Expertise in Azure Databricks, ADF, ADL, Hive, Python, Spark, PySpark.
- Strong T-SQL skills with experience in Azure SQL DW.
- Experience handling structured and unstructured datasets.
- Experience in data modeling and advanced SQL techniques.
- Experience implementing Azure Data Factory pipelines using the latest technologies and techniques.
- Good exposure to application development.
- The candidate should work independently with minimal supervision.

Posted 1 month ago

Apply

10.0 - 15.0 years

15 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Role & responsibilities: The Application Head is responsible for overseeing the deployment and management of enterprise applications, with a strong focus on S/4HANA and retail industry solutions. The role requires extensive experience in both SAP and non-SAP enterprise solution deployments, ensuring seamless integration and operational efficiency across the organization. Oversee the deployment, management, and optimization of S/4HANA and other enterprise applications. Ensure seamless integration of SAP and non-SAP solutions to support business processes.

Qualifications: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. 10+ years of experience in application management and deployment roles. Strong knowledge of S/4HANA, retail industry solutions, and non-SAP enterprise applications.

Preferred skills: Experience in retail (brand retailing is a plus). Familiarity with tools like Power BI, Tableau, Databricks, Snowflake, or similar. Understanding of ethical AI and responsible data use.

Posted 1 month ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Gurugram

Work from Office

DataOps Specialist - Azure - 5+ Years - Gurugram

Are you a data enthusiast with expertise in Azure and DataOps? Do you have experience working with data pipelines, data warehousing, and analytics? Our client, a leading organization in Gurugram, is looking for a DataOps Specialist with 5+ years of experience. If you are passionate about leveraging data to drive business insights and decisions, this role is for you!

Location: Gurugram

Your Future Employer: Our client is a prominent player in the industry and is committed to creating an inclusive and diverse work environment. They offer ample opportunities for professional growth and development, along with a supportive and collaborative culture.

Responsibilities:
- Design, build, and maintain data pipelines on the Azure platform.
- Work on data warehousing solutions and data modeling.
- Collaborate with cross-functional teams to understand data requirements and provide solutions.
- Implement and manage data governance and security practices.
- Troubleshoot and optimize data processes for performance and reliability.
- Stay updated with the latest trends and technologies in DataOps and analytics.

Requirements:
- 5+ years of experience in data engineering, DataOps, or a related field.
- Proven expertise in working with Azure data services such as Azure Data Factory, Azure Synapse Analytics, etc.
- Strong understanding of data warehousing concepts and data modeling techniques.
- Proficiency in SQL, Python, or other scripting languages.
- Experience with data governance, security, and compliance.
- Excellent communication and collaboration skills.

What's in it for you: As a DataOps Specialist, you will have the opportunity to work on cutting-edge data technologies and make a significant impact on the organization's data initiatives. You will be part of a supportive team that values innovation and encourages continuous learning and development.

Reach us: If you feel this opportunity is well aligned with your career progression plans, please feel free to reach out with your updated profile at rohit.kumar@crescendogroup.in

Disclaimer: Crescendo Global specializes in senior to C-level niche recruitment. We are passionate about empowering job seekers and employers with an engaging, memorable job search and leadership hiring experience. Crescendo Global does not discriminate on the basis of race, religion, color, origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Note: We receive a lot of applications on a daily basis, so it becomes difficult for us to get back to each candidate. Please assume that your profile has not been shortlisted if you don't hear back from us within 1 week. Your patience is highly appreciated.

Scammers can misuse Crescendo Global's name for fake job offers. We never ask for money, purchases, or system upgrades. Verify all opportunities at www.crescendo-global.com and report fraud immediately. Stay alert!

Profile keywords: DataOps, Azure, Data Engineering, Data Warehousing, Analytics, SQL, Python, Data Governance

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click


Download the Mobile App

Instantly access job listings, apply easily, and track applications.
