
1401 Databricks Jobs - Page 11

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 - 7.0 years

25 - 30 Lacs

Bengaluru

Hybrid

Looking for immediate joiners. Interested candidates, kindly send your updated resume to 'lavanya.n@miqdigital.com'.

Role: Data Management - Senior Software Engineer
Location: Bangalore

What you'll do
We're MiQ, a global programmatic media partner for marketers and agencies. Our people are at the heart of everything we do, so you will be too. No matter the role or the location, we're all united in the vision to lead the programmatic industry and make it better. As an SSE in our Technology department, you will need:
- Hands-on experience with Big Data technologies such as Databricks, Snowflake, EMR, Trino, Athena, StarTree, and SageMaker Studio, with a strong foundation in data engineering concepts.
- Proficiency in data processing and transformation using PySpark and SQL, and familiarity with at least one JVM-based language (Java/Scala/Kotlin); a brief PySpark sketch follows this listing.
- Familiarity with microservice integration in data systems, with an understanding of the basic principles of interoperability and service communication.
- Solid experience in data pipeline development and familiarity with orchestration frameworks (e.g., Airflow, DBT), with an ability to build scalable and reliable ETL workflows.
- Exposure to MLOps/DataOps practices, with contributions to the rollout or maintenance of production pipelines.
- Knowledge of observability frameworks and practices to support platform reliability and troubleshooting; working knowledge of observability tools (e.g., Prometheus, Grafana, Datadog) is highly desirable.
- Experience assisting in ETL optimization, platform issue resolution, and performance tuning in collaboration with other engineering teams.
- Good understanding of access management, including RBAC, ABAC, and PBAC, and familiarity with auditing and compliance basics.
- Practical experience with cloud infrastructure (AWS preferred), including EC2, S3, IAM, VPC basics, and Terraform or similar IaC tools.
- Understanding of CI/CD pipelines and an ability to contribute to release automation, deployment strategies, and system testing.
- Interest in data governance; working exposure to cataloging tools such as Unity Catalog, Amundsen, or Apache Atlas would be a plus.
- Strong problem-solving skills with a collaborative mindset and a passion for exploring AI tools, frameworks, and emerging technologies in the data space.
- Ownership, initiative, and curiosity while contributing to research, platform improvements, and code quality standards.

Who are your stakeholders
1. Business Analysts
2. Data Engineers
3. Data Scientists

What you'll bring
- Technical Expertise: Deep understanding of the latest Big Data technologies (Spark engines and SQL engines such as Databricks, Apache Pinot, and EMR) and proficiency in building and managing complex data pipelines.
- Platform Optimisation: Experience optimising platform performance and managing cost, ensuring scalable solutions that meet organisational needs without exceeding budget.
- Innovation and R&D: A forward-thinking mindset with a passion for exploring new data technologies, continuously seeking ways to enhance platform capabilities and efficiency.
- Infrastructure Management: Proven experience managing cloud-based infrastructure, including networking, deployment, and monitoring, while ensuring reliability and high availability.
- Governance and Metadata Management: Knowledge of data governance frameworks, ensuring proper data cataloging, lineage, and metadata management to drive data quality and transparency.
- Proactive Problem Solving: Strong analytical skills, with a solution-oriented approach to overcoming technical challenges and finding innovative solutions for complex data problems.
- Cost Efficiency: Ability to implement cost-optimisation strategies and ensure efficient resource utilisation, helping the team minimise waste and maximise value.
- Security Best Practices: Implementing best practices in data security, including encryption, hashing, key management, and access controls, to protect sensitive data across platforms.
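
To make the PySpark requirement above concrete, here is a minimal transformation sketch of the kind such a role involves; the paths, table, and column names are illustrative assumptions, not details from the listing.

```python
# Minimal PySpark ETL sketch: read, aggregate, write to a Delta table.
# All paths, table names, and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-rollup").getOrCreate()

orders = spark.read.format("delta").load("/mnt/raw/orders")  # assumed source path

daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("order_count"))
)

# Overwrite a curated table that downstream analysts query.
daily_revenue.write.format("delta").mode("overwrite").saveAsTable("analytics.daily_revenue")
```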

Posted 1 week ago

Apply

9.0 - 14.0 years

15 - 19 Lacs

Bengaluru

Work from Office

About the Role:
We are looking for an Associate Architect with at least 9 years of experience to help scale and modernize Myntra's data platform. The ideal candidate will have a strong background in building scalable data platforms using a combination of open-source technologies and enterprise solutions. The role demands deep technical expertise in data ingestion, processing, serving, and governance, with a strategic mindset to scale the platform 10x to meet the ever-growing data needs across the organization. This is a high-impact role requiring innovation, engineering excellence, and system stability, with an opportunity to contribute to OSS projects and build data products leveraging available data assets.

Key Responsibilities:
- Design and scale Myntra's data platform to support growing data needs across analytics, ML, and reporting.
- Architect and optimize streaming data ingestion pipelines using Debezium, Kafka (Confluent), Databricks Spark, and Flink (see the streaming sketch after this listing).
- Lead improvements in data processing and serving layers, leveraging Databricks Spark, Trino, and Superset.
- Apply a good understanding of open table formats such as Delta and Iceberg.
- Scale data quality frameworks to ensure data accuracy and reliability.
- Build data lineage tracking solutions for governance, access control, and compliance.
- Collaborate with engineering, analytics, and business teams to identify opportunities and build or enhance self-serve data platforms.
- Improve system stability, monitoring, and observability to ensure high availability of the platform.
- Work with open-source communities and contribute to OSS projects aligned with Myntra's tech stack.
- Implement cost-efficient, scalable architectures for handling 10B+ daily events in a cloud environment.

Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Experience:
- 9+ years of experience in building large-scale data platforms.
- Expertise in big data architectures using Databricks, Trino, and Debezium.
- Strong experience with streaming platforms, including Confluent Kafka.
- Experience in data ingestion, storage, processing, and serving in a cloud-based environment.
- Hands-on experience implementing data quality checks using Great Expectations.
- Deep understanding of data lineage, metadata management, and governance practices.
- Strong knowledge of query optimization, cost efficiency, and scaling architectures.
- Familiarity with OSS contributions and keeping up with industry trends in data engineering.
Soft Skills:
- Strong analytical and problem-solving skills with a pragmatic approach to technical challenges.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
- Ability to lead large-scale projects in a fast-paced, dynamic environment.
- Passion for continuous learning, open-source collaboration, and building best-in-class data products.
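
As context for the streaming-ingestion responsibility above, here is a minimal Spark Structured Streaming sketch (Kafka into a Delta table); the broker address, topic, schema, and checkpoint path are illustrative assumptions.

```python
# Sketch of a streaming ingestion pipeline (Kafka -> Delta).
# Broker, topic, schema, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = SparkSession.builder.appName("cdc-ingest").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("entity", StringType()),
    StructField("ts", LongType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")  # assumed broker
    .option("subscribe", "orders.cdc")                   # assumed topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers bytes; parse the value column as JSON into typed fields.
parsed = raw.select(
    F.from_json(F.col("value").cast("string"), event_schema).alias("e")
).select("e.*")

# Append into a bronze Delta table; the checkpoint makes the stream restartable.
query = (
    parsed.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/orders_cdc")  # assumed path
    .outputMode("append")
    .toTable("bronze.orders_cdc")
)
# query.awaitTermination() would block the driver while the stream runs.
```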

Posted 1 week ago

Apply

9.0 - 14.0 years

11 - 16 Lacs

Bengaluru

Work from Office

About the Role:
We are looking for an Associate Architect with at least 9 years of experience to help scale and modernize Myntra's data platform. The ideal candidate will have a strong background in building scalable data platforms using a combination of open-source technologies and enterprise solutions. The role demands deep technical expertise in data ingestion, processing, serving, and governance, with a strategic mindset to scale the platform 10x to meet the ever-growing data needs across the organization. This is a high-impact role requiring innovation, engineering excellence, and system stability, with an opportunity to contribute to OSS projects and build data products leveraging available data assets.

Key Responsibilities:
- Design and scale Myntra's data platform to support growing data needs across analytics, ML, and reporting.
- Architect and optimize streaming data ingestion pipelines using Debezium, Kafka (Confluent), Databricks Spark, and Flink.
- Lead improvements in data processing and serving layers, leveraging Databricks Spark, Trino, and Superset.
- Apply a good understanding of open table formats such as Delta and Iceberg.
- Scale data quality frameworks to ensure data accuracy and reliability.
- Build data lineage tracking solutions for governance, access control, and compliance.
- Collaborate with engineering, analytics, and business teams to identify opportunities and build or enhance self-serve data platforms.
- Improve system stability, monitoring, and observability to ensure high availability of the platform.
- Work with open-source communities and contribute to OSS projects aligned with Myntra's tech stack.
- Implement cost-efficient, scalable architectures for handling 10B+ daily events in a cloud environment.

Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Experience:
- 9+ years of experience in building large-scale data platforms.
- Expertise in big data architectures using Databricks, Trino, and Debezium.
- Strong experience with streaming platforms, including Confluent Kafka.
- Experience in data ingestion, storage, processing, and serving in a cloud-based environment.
- Hands-on experience implementing data quality checks using Great Expectations (see the validation sketch after this listing).
- Deep understanding of data lineage, metadata management, and governance practices.
- Strong knowledge of query optimization, cost efficiency, and scaling architectures.
- Familiarity with OSS contributions and keeping up with industry trends in data engineering.
Soft Skills:
- Strong analytical and problem-solving skills with a pragmatic approach to technical challenges.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
- Ability to lead large-scale projects in a fast-paced, dynamic environment.
- Passion for continuous learning, open-source collaboration, and building best-in-class data products.
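
The listing asks for hands-on Great Expectations experience; below is a small validation sketch assuming the classic pandas-style Great Expectations API, with a hypothetical dataframe and column names.

```python
# Minimal data-quality check sketch using Great Expectations' classic
# pandas-style API. The dataframe and column names are illustrative.
import pandas as pd
import great_expectations as ge

df = pd.DataFrame({"order_id": [1, 2, 3], "amount": [100.0, 250.5, 80.0]})
gdf = ge.from_pandas(df)  # wraps the frame so expectation methods are available

results = [
    gdf.expect_column_values_to_not_be_null("order_id"),
    gdf.expect_column_values_to_be_between("amount", min_value=0),
]

# In a pipeline, a failed expectation would typically block promotion
# of the batch to the serving layer.
assert all(r.success for r in results)
```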

Posted 1 week ago

Apply

9.0 - 14.0 years

30 - 35 Lacs

Bengaluru

Work from Office

About the Role:
We are looking for an Associate Architect with at least 9 years of experience to help scale and modernize Myntra's data platform. The ideal candidate will have a strong background in building scalable data platforms using a combination of open-source technologies and enterprise solutions. The role demands deep technical expertise in data ingestion, processing, serving, and governance, with a strategic mindset to scale the platform 10x to meet the ever-growing data needs across the organization. This is a high-impact role requiring innovation, engineering excellence, and system stability, with an opportunity to contribute to OSS projects and build data products leveraging available data assets.

Key Responsibilities:
- Design and scale Myntra's data platform to support growing data needs across analytics, ML, and reporting.
- Architect and optimize streaming data ingestion pipelines using Debezium, Kafka (Confluent), Databricks Spark, and Flink.
- Lead improvements in data processing and serving layers, leveraging Databricks Spark, Trino, and Superset.
- Apply a good understanding of open table formats such as Delta and Iceberg (see the Delta sketch after this listing).
- Scale data quality frameworks to ensure data accuracy and reliability.
- Build data lineage tracking solutions for governance, access control, and compliance.
- Collaborate with engineering, analytics, and business teams to identify opportunities and build or enhance self-serve data platforms.
- Improve system stability, monitoring, and observability to ensure high availability of the platform.
- Work with open-source communities and contribute to OSS projects aligned with Myntra's tech stack.
- Implement cost-efficient, scalable architectures for handling 10B+ daily events in a cloud environment.

Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Experience:
- 9+ years of experience in building large-scale data platforms.
- Expertise in big data architectures using Databricks, Trino, and Debezium.
- Strong experience with streaming platforms, including Confluent Kafka.
- Experience in data ingestion, storage, processing, and serving in a cloud-based environment.
- Hands-on experience implementing data quality checks using Great Expectations.
- Deep understanding of data lineage, metadata management, and governance practices.
- Strong knowledge of query optimization, cost efficiency, and scaling architectures.
- Familiarity with OSS contributions and keeping up with industry trends in data engineering.
Soft Skills:
- Strong analytical and problem-solving skills with a pragmatic approach to technical challenges.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
- Ability to lead large-scale projects in a fast-paced, dynamic environment.
- Passion for continuous learning, open-source collaboration, and building best-in-class data products.
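
For the open-table-format requirement above, here is a brief sketch of creating and time-traveling a Delta table via Spark SQL on Databricks; the database and table names are illustrative assumptions.

```python
# Sketch of working with an open table format (Delta) via Spark SQL on a
# Databricks-style runtime where Delta is preconfigured. Names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-demo").getOrCreate()

spark.sql("""
    CREATE TABLE IF NOT EXISTS demo.events (
        event_id STRING, payload STRING, ts TIMESTAMP
    ) USING DELTA
""")

spark.sql("INSERT INTO demo.events VALUES ('e1', '{}', current_timestamp())")

# Delta keeps a transaction log, so earlier snapshots remain queryable;
# this is the basis for audits, rollbacks, and reproducible pipelines.
first_version = spark.sql("SELECT * FROM demo.events VERSION AS OF 0")
first_version.show()
```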

Posted 1 week ago

Apply

8.0 - 12.0 years

16 - 18 Lacs

Chennai

Hybrid

Candidate Specification: 8+ years of experience. Notice Period: Immediate. Hybrid.

Job Summary:
We are looking for a skilled Power Apps Developer to support application development and testing activities. The ideal candidate will be responsible for creating and populating test data in development and UAT environments, validating data accuracy, and ensuring smooth application functionality.

Key Responsibilities:
- Develop and maintain applications using Microsoft Power Apps (Canvas and Model-driven apps).
- Create and populate test data in development and UAT environments.
- Validate data integrity and ensure consistency across environments.
- Collaborate with QA and functional teams to support testing cycles.
- Troubleshoot and resolve issues related to data and app performance.
- Document processes and contribute to knowledge sharing.

Required Skills:
- Hands-on experience with Power Apps; Power Automate and Dataverse are good to have.
- Strong understanding of data modeling and validation techniques.
- Experience working in Dev and UAT environments.
- Familiarity with Microsoft 365, SharePoint, and related technologies.
- Good communication and problem-solving skills.

Preferred Qualifications:
- Microsoft Power Platform certifications.
- Experience with Azure services and integration.
- Knowledge of Agile methodologies.

Contact Person: Christopher

Posted 2 weeks ago

Apply

8.0 - 12.0 years

55 - 65 Lacs

Bengaluru

Work from Office

Role & responsibilities:
- 8+ years of experience as a Data Engineer, with a focus on Databricks and cloud-based data platforms, including a minimum of 4 years writing unit and end-to-end tests for data pipelines and ETL processes on Databricks (see the test sketch after this listing).
- Hands-on experience in PySpark programming for data manipulation, transformation, and analysis.
- Strong experience in SQL and writing complex queries for data retrieval and manipulation.
- Experience with Docker for containerising and deploying data engineering applications is good to have.
- Strong knowledge of the Databricks platform and its components, including Databricks notebooks, clusters, and jobs.
- Experience designing and implementing data models to support analytical and reporting needs is an added advantage.

Preferred candidate profile: You will lead, design, implement, and maintain data processing pipelines and workflows using Databricks on the Azure platform. Your expertise in PySpark, SQL, Databricks, test-driven development, and Docker will be essential to the success of our data engineering initiatives.
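
Since the listing stresses unit and end-to-end tests for pipelines, here is a minimal pytest-style sketch of testing a PySpark transformation against a local SparkSession; the transformation under test and its data are hypothetical.

```python
# Sketch of a unit test for a PySpark transformation, runnable locally
# with pytest. The function and test data are illustrative placeholders.
import pytest
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window


def dedupe_latest(df):
    """Keep the most recent row per id (example transformation under test)."""
    w = Window.partitionBy("id").orderBy(F.col("updated_at").desc())
    return df.withColumn("rn", F.row_number().over(w)).filter("rn = 1").drop("rn")


@pytest.fixture(scope="module")
def spark():
    # A small local session keeps tests independent of any cluster.
    return SparkSession.builder.master("local[2]").appName("tests").getOrCreate()


def test_dedupe_latest_keeps_newest(spark):
    df = spark.createDataFrame(
        [(1, "2024-01-01"), (1, "2024-02-01"), (2, "2024-01-15")],
        ["id", "updated_at"],
    )
    out = dedupe_latest(df).collect()
    assert {(r.id, r.updated_at) for r in out} == {(1, "2024-02-01"), (2, "2024-01-15")}
```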

Posted 2 weeks ago

Apply

5.0 - 10.0 years

8 - 14 Lacs

Ahmedabad

Remote

Looking for expertise in AWS, Databricks, and PySpark. Must understand data pipelines, ETL, and architecture. You will build scalable pipelines, optimize workflows, ensure data quality, and work with teams to deliver data-driven solutions.

Posted 2 weeks ago

Apply

7.0 - 11.0 years

20 - 35 Lacs

Pune

Work from Office

Role & responsibilities:
- Develop and Maintain Data Pipelines: Design, develop, and manage scalable ETL pipelines to process large datasets using PySpark, Databricks, and other big data technologies.
- Data Integration and Transformation: Work with various structured and unstructured data sources to build efficient data workflows and integrate them into a central data warehouse.
- Collaborate with Data Scientists & Analysts: Work closely with the data science and business intelligence teams to ensure the right data is available for advanced analytics, machine learning, and reporting.
- Optimize Performance: Optimize and tune data pipelines and ETL processes to improve data throughput and reduce latency, ensuring timely delivery of high-quality data.
- Automation and Monitoring: Implement automated workflows and monitoring tools to ensure data pipelines run smoothly and issues are proactively addressed.
- Ensure Data Quality: Build and maintain validation mechanisms to ensure the accuracy and consistency of the data.
- Data Storage and Access: Work with data storage solutions (e.g., Azure, AWS, Google Cloud) to ensure effective data storage and fast access for downstream users.
- Documentation and Reporting: Maintain proper documentation for all data processes and architectures to facilitate easier understanding and onboarding of new team members.

Skills and Qualifications:
- Experience: 5+ years as a Data Engineer or in a similar role, with hands-on experience designing, building, and maintaining ETL pipelines.
- Technologies: Proficient in PySpark for large-scale data processing; strong programming experience in Python, particularly for data engineering tasks; experience with Databricks for big data processing and collaboration; hands-on experience with data storage solutions (e.g., AWS S3, Azure Data Lake, or Google Cloud Storage); solid understanding of ETL concepts, tools, and best practices; familiarity with SQL for querying and manipulating data in relational databases; experience with data orchestration tools such as Apache Airflow or Luigi is a plus (a minimal DAG sketch follows this listing).
- Data Modeling & Warehousing: Experience with data warehousing concepts and technologies (e.g., Redshift, Snowflake, or BigQuery); knowledge of data modeling, data transformations, and dimensional modeling.
- Soft Skills: Strong analytical and problem-solving skills; excellent communication skills, capable of explaining complex data processes to non-technical stakeholders; ability to work in a fast-paced, collaborative environment and manage multiple priorities.

Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Certification or experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Experience with Apache Kafka or other stream-processing technologies.
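
As a sketch of the orchestration experience mentioned above, here is a minimal Airflow DAG (assuming a recent Airflow 2.x install); the DAG id and task callables are hypothetical placeholders.

```python
# Minimal Airflow 2.x orchestration sketch for a daily ETL.
# DAG id and task bodies are illustrative, not from the listing.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():  # placeholder for a real extraction step
    print("pull from source")


def transform():
    print("clean and aggregate")


def load():
    print("write to warehouse")


with DAG(
    dag_id="daily_sales_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+ keyword; older 2.x uses schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Dependencies: extract -> transform -> load
    t_extract >> t_transform >> t_load
```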

Posted 2 weeks ago

Apply

8.0 - 13.0 years

15 - 30 Lacs

Hyderabad, Bengaluru

Work from Office

Total Experience: 8-12 years. Relevant Experience: 5+ years. Location: Bangalore / Hyderabad.
- Strong Python and PySpark.
- Hands-on experience with Azure cloud services and Databricks, including Databricks Jobs, Clusters, and Unity Catalog (medium expertise); a jobs-automation sketch follows this listing.
- CI/CD and GitHub.
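
For the Databricks Jobs and CI/CD skills listed above, here is a small sketch using the Databricks SDK for Python (the databricks-sdk package); the job name is a hypothetical assumption, and authentication is assumed to come from environment variables or a config profile.

```python
# Sketch: find a Databricks job by name and trigger a run via the
# Databricks SDK for Python. The job name "nightly-etl" is hypothetical.
from databricks.sdk import WorkspaceClient

# Reads DATABRICKS_HOST / DATABRICKS_TOKEN or a configured profile.
w = WorkspaceClient()

for job in w.jobs.list(name="nightly-etl"):  # server-side name filter
    # run_now returns a waiter; .result() blocks until the run terminates.
    run = w.jobs.run_now(job_id=job.job_id).result()
    print(f"run {run.run_id} finished with state {run.state.result_state}")
```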

Posted 2 weeks ago

Apply

7.0 - 12.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: PySpark
Minimum 7.5 years of experience is required.
Educational Qualification: 15 years of full-time education.

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements in Hyderabad. You will play a crucial role in the development and implementation of software solutions.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Lead the application development process.
- Conduct code reviews and ensure coding standards are met.
- Stay updated on industry trends and best practices.

Professional & Technical Skills:
- Must-have: Proficiency in Databricks Unified Data Analytics Platform.
- Good-to-have: Experience with Python (programming language).
- Strong understanding of data analytics and data processing.
- Experience in building and configuring applications.
- Knowledge of the software development lifecycle.
- Ability to troubleshoot and debug applications.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.

Posted 2 weeks ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 12 years of experience is required.
Educational Qualification: 15 years of full-time education.

Summary: As an Application Lead, you will be responsible for designing, building, and configuring applications to meet business process and application requirements in Hyderabad. You will play a crucial role in the development and implementation of software solutions.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Lead the application development process.
- Conduct code reviews and ensure coding standards are met.
- Stay updated on industry trends and best practices.

Professional & Technical Skills:
- Must-have: Proficiency in Databricks Unified Data Analytics Platform.
- Good-to-have: Experience with Python (programming language).
- Strong understanding of data analytics and data processing.
- Experience in building and configuring applications.
- Knowledge of the software development lifecycle.
- Ability to troubleshoot and debug applications.

Additional Information:
- The candidate should have a minimum of 12 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Pune

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 3 years of experience is required.
Educational Qualification: 15 years of full-time education.

Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging processes to deliver high-quality applications that meet user expectations and business goals.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application specifications and user guides.
- Collaborate with cross-functional teams to gather requirements and provide technical insights.

Professional & Technical Skills:
- Must-have: Proficiency in Databricks Unified Data Analytics Platform.
- Good-to-have: Experience with cloud computing platforms.
- Strong understanding of application development methodologies.
- Familiarity with data integration and ETL processes.
- Experience in programming languages such as Python or Scala.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Pune office.
- 15 years of full-time education is required.

Posted 2 weeks ago

Apply

12.0 - 15.0 years

9 - 13 Lacs

Hyderabad

Work from Office

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 12 years of experience is required.
Educational Qualification: 15 years of full-time education.

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the overall data architecture. You will be involved in various stages of the data platform lifecycle, ensuring that all components work seamlessly together to support the organization's data needs and objectives.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor and evaluate team performance to ensure alignment with project goals.
- Accountable and responsible for team outcomes and delivery.

Professional & Technical Skills:
- Must-have: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data integration techniques and best practices.
- Experience with cloud-based data solutions and architectures.
- Familiarity with data governance frameworks and compliance standards.
- Deep expertise in Databricks and data transformations, with prior team leadership and conceptual design experience.
- Ability to troubleshoot and optimize data workflows for performance.

Additional Information:
- The candidate should have a minimum of 12 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.

Posted 2 weeks ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Gurugram

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Data Analytics
Good-to-have skills: Microsoft SQL Server, Python (Programming Language), AWS Redshift
Minimum 5 years of experience is required.
Educational Qualification: 15 years of full-time education.

Summary: As a Senior Analyst, Data Engineering, you will be part of the Data and Analytics team, responsible for developing and delivering high-quality data assets and managing data domains for Personal Banking customers and colleagues. You will bring expertise in data handling, curation, and conformity, and support the design and development of data solutions that drive business value. You will work in an agile environment to build scalable and reliable data pipelines and platforms within a complex enterprise.

Roles & Responsibilities:
- Hands-on development experience in Data Warehousing and/or Software Development.
- Utilize tools and best practices to build, verify, and deploy data solutions efficiently.
- Perform data integration and sourcing activities across various platforms.
- Develop data assets to support optimized analysis for customer and regulatory outcomes.
- Provide ongoing support for data platforms, including problem and incident management.
- Collaborate in Agile software development environments using tools like GitHub, Confluence, and Rally.
- Support continuous improvement and innovation in data engineering practices.

Professional & Technical Skills:
- Must-have: Experience with cloud technologies, especially AWS (S3, Redshift, Airflow).
- Proficiency in DevOps and DataOps tools such as Jenkins, Git, and Erwin.
- Advanced skills in SQL and Python.
- Working knowledge of UNIX, Spark, and Databricks.

Additional Information:
- Position: Senior Analyst, Data Engineering
- Reports to: Manager, Data Engineering
- Division: Personal Bank
- Group: 3
- Industry/Domain Skills: Experience in Retail Banking, Business Banking, or Wealth Management preferred.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Pune

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: PySpark
Good-to-have skills: Python (Programming Language), AWS Architecture
Minimum 5 years of experience is required.
Educational Qualification: Any technical graduation.

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements using PySpark. Your typical day will involve working with PySpark, Oracle Procedural Language Extensions to SQL (PL/SQL), and other related technologies to develop and maintain applications.

Key Responsibilities:
- Work on client projects to deliver AWS, PySpark, and Databricks-based data engineering and analytics solutions.
- Build and operate very large data warehouses or data lakes.
- Optimize ETL by designing, coding, and tuning big data processes using Apache Spark.
- Build data pipelines and applications to stream and process datasets at low latencies.
- Handle data efficiently: track data lineage, ensure data quality, and improve the discoverability of data.

Professional & Technical Skills:
- Minimum of 1 year of experience in Databricks engineering solutions on AWS cloud platforms using PySpark.
- Minimum of 3 years of experience in ETL, Big Data/Hadoop, and data warehouse architecture and delivery.
- Minimum of 2 years of experience in one or more programming languages: Python, Java, Scala.
- Experience using Airflow for data pipelines in at least one project.
- 1 year of experience developing CI/CD pipelines using Git, Jenkins, Docker, Kubernetes, shell scripting, and Terraform.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful software solutions.
- This position is based at our Hyderabad office.
- The resource must be willing to work in B shift (12 PM to 10 PM).

Posted 2 weeks ago

Apply

7.0 - 12.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Amazon Web Services (AWS)
Minimum 7.5 years of experience is required.
Educational Qualification: 15 years of full-time education.

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements in Hyderabad. You will play a crucial role in the development and implementation of software solutions.

Key Responsibilities:
- Work on client projects to deliver AWS, PySpark, and Databricks-based data engineering and analytics solutions.
- Build and operate very large data warehouses or data lakes.
- Optimize ETL by designing, coding, and tuning big data processes using Apache Spark.
- Build data pipelines and applications to stream and process datasets at low latencies.
- Handle data efficiently: track data lineage, ensure data quality, and improve the discoverability of data.

Technical Experience:
- Minimum of 5 years of experience in Databricks engineering solutions on AWS cloud platforms using PySpark and Databricks SQL, with data pipelines built on Delta Lake (a minimal upsert sketch follows this listing).
- Minimum of 5 years of experience in ETL, Big Data/Hadoop, and data warehouse architecture and delivery.
- Minimum of 2 years of experience in real-time streaming using Kafka/Kinesis.
- Minimum of 4 years of experience in one or more programming languages: Python, Java, Scala.
- Experience using Airflow for data pipelines in at least one project.
- 1 year of experience developing CI/CD pipelines using Git, Jenkins, Docker, Kubernetes, shell scripting, and Terraform.

Professional Attributes:
- Ready to work in B shift (12 PM to 10 PM).
- Client-facing skills: solid experience working in client-facing environments and building trusted relationships with client stakeholders.
- Good critical thinking and problem-solving abilities.
- Health care domain knowledge.
- Good communication skills.

Educational Qualification: Bachelor of Engineering / Bachelor of Technology.

Additional Information:
- Key skills: Data Engineering, PySpark, AWS, Python, Apache Spark, Databricks, Hadoop; certifications in Databricks, Python, or AWS are a plus.
- The candidate should have a minimum of 7.5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.
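
To illustrate the Delta Lake pipeline work described above, here is a hedged sketch of an idempotent upsert using the delta-spark MERGE API; the table and column names are illustrative assumptions, and the target table is assumed to already exist.

```python
# Sketch of an idempotent upsert (MERGE) into a Delta table using the
# delta-spark API. Table and column names are hypothetical.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("upsert").getOrCreate()

# Incoming batch of changed rows (illustrative data).
updates = spark.createDataFrame(
    [("c1", "active"), ("c2", "churned")], ["customer_id", "status"]
)

# Assumed pre-existing Delta table acting as the merge target.
target = DeltaTable.forName(spark, "silver.customers")

(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()      # refresh rows that already exist
    .whenNotMatchedInsertAll()   # insert rows seen for the first time
    .execute()
)
```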

Posted 2 weeks ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: PySpark
Good-to-have skills: Oracle Procedural Language Extensions to SQL (PL/SQL), Amazon Web Services (AWS)
Minimum 7.5 years of experience is required.
Educational Qualification: Any graduation.

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements using PySpark. Your typical day will involve working with PySpark, Oracle PL/SQL, and other related technologies to develop and maintain applications.

Key Responsibilities:
- Work on client projects to deliver AWS, PySpark, and Databricks-based data engineering and analytics solutions.
- Build and operate very large data warehouses or data lakes.
- Optimize ETL by designing, coding, and tuning big data processes using Apache Spark.
- Build data pipelines and applications to stream and process datasets at low latencies.
- Handle data efficiently: track data lineage, ensure data quality, and improve the discoverability of data.

Professional & Technical Skills:
- Minimum of 1 year of experience in Databricks engineering solutions on AWS cloud platforms using PySpark.
- Minimum of 3 years of experience in ETL, Big Data/Hadoop, and data warehouse architecture and delivery.
- Minimum of 2 years of experience in one or more programming languages: Python, Java, Scala.
- Experience using Airflow for data pipelines in at least one project.
- 1 year of experience developing CI/CD pipelines using Git, Jenkins, Docker, Kubernetes, shell scripting, and Terraform.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful software solutions.
- This position is based at our Hyderabad office.
- The resource must be willing to work in B shift (12 PM to 10 PM).

Posted 2 weeks ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Gurugram

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Data Engineering
Good-to-have skills: Microsoft SQL Server, Python (Programming Language), Snowflake Data Warehouse
Minimum 5 years of experience is required.
Educational Qualification: 15 years of full-time education.

Summary: As a Senior Analyst, Data Engineering, you will be part of the Data and Analytics team, responsible for developing and delivering high-quality data assets and managing data domains for Personal Banking customers and colleagues. You will bring expertise in data handling, curation, and conformity, and support the design and development of data solutions that drive business value. You will work in an agile environment to build scalable and reliable data pipelines and platforms within a complex enterprise.

Roles & Responsibilities:
- Hands-on development experience in Data Warehousing and/or Software Development.
- Utilize tools and best practices to build, verify, and deploy data solutions efficiently.
- Perform data integration and sourcing activities across various platforms.
- Develop data assets to support optimized analysis for customer and regulatory outcomes.
- Provide ongoing support for data platforms, including problem and incident management.
- Collaborate in Agile software development environments using tools like GitHub, Confluence, and Rally.
- Support continuous improvement and innovation in data engineering practices.

Professional & Technical Skills:
- Must-have: Experience with cloud technologies, especially AWS (S3, Redshift, Airflow).
- Proficiency in DevOps and DataOps tools such as Jenkins, Git, and Erwin.
- Advanced skills in SQL and Python.
- Working knowledge of UNIX, Spark, and Databricks.

Additional Information:
- Position: Senior Analyst, Data Engineering
- Reports to: Manager, Data Engineering
- Division: Personal Bank
- Group: 3
- Industry/Domain Skills: Experience in Retail Banking, Business Banking, or Wealth Management preferred.

Posted 2 weeks ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Gurugram

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Data Visualization
Good-to-have skills: Microsoft SQL Server, SAS BI, Microsoft Power Business Intelligence (BI)
Minimum 3 years of experience is required.
Educational Qualification: 15 years of full-time education.

Summary: As a Senior Analyst, Data Engineering, you will be part of the Data and Analytics team, responsible for developing and delivering high-quality data assets and managing data domains for Personal Banking customers and colleagues. You will bring expertise in data handling, curation, and conformity, and support the design and development of data solutions that drive business value. You will work in an agile environment to build scalable and reliable data pipelines and platforms within a complex enterprise.

Roles & Responsibilities:
- Hands-on development experience in Data Warehousing and/or Software Development.
- Utilize tools and best practices to build, verify, and deploy data solutions efficiently.
- Perform data integration and sourcing activities across various platforms.
- Develop data assets to support optimized analysis for customer and regulatory outcomes.
- Provide ongoing support for data platforms, including problem and incident management.
- Collaborate in Agile software development environments using tools like GitHub, Confluence, and Rally.
- Support continuous improvement and innovation in data engineering practices.

Professional & Technical Skills:
- Must-have: Experience with cloud technologies, especially AWS (S3, Redshift, Airflow).
- Proficiency in DevOps and DataOps tools such as Jenkins, Git, and Erwin.
- Advanced skills in SQL and Python.
- Working knowledge of UNIX, Spark, and Databricks.

Additional Information:
- Position: Senior Analyst, Data Engineering
- Reports to: Manager, Data Engineering
- Division: Personal Bank
- Group: 3
- Industry/Domain Skills: Experience in Retail Banking, Business Banking, or Wealth Management preferred.

Posted 2 weeks ago

Apply

15.0 - 20.0 years

9 - 13 Lacs

Bengaluru

Work from Office

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Microsoft Power Business Intelligence (BI), Microsoft Azure Databricks
Minimum 7.5 years of experience is required.
Educational Qualification: 15 years of full-time education.

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the overall data architecture. You will be involved in various stages of the data platform lifecycle, ensuring that all components work seamlessly together to support the organization's data needs and objectives. Your role will require you to analyze requirements, propose solutions, and contribute to the continuous improvement of the data platform, making it a dynamic and engaging work environment.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor and evaluate team performance to ensure alignment with project goals.

Professional & Technical Skills:
- Must-have: Proficiency in Databricks Unified Data Analytics Platform.
- Good-to-have: Experience with Microsoft Power BI and Microsoft Azure Databricks.
- Strong understanding of data integration techniques and best practices.
- Experience with data modeling and database design.
- Familiarity with cloud-based data solutions and architectures.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Posted 2 weeks ago

Apply

15.0 - 20.0 years

9 - 13 Lacs

Hyderabad

Work from Office

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 5 years of experience is required.
Educational Qualification: 15 years of full-time education.

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the data architecture. You will be involved in various stages of the data platform lifecycle, ensuring that all components work seamlessly together to support the organization's data needs and objectives. Your role will require you to analyze requirements, propose solutions, and contribute to the overall strategy of the data platform, making it a dynamic and impactful position within the team.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor and evaluate team performance to ensure alignment with project goals.

Professional & Technical Skills:
- Must-have: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data integration techniques and best practices.
- Experience with cloud-based data solutions and architectures.
- Familiarity with data governance frameworks and compliance standards.
- Ability to work with large datasets and perform data analysis.
- Deep Databricks expertise, with delivery on at least one prior project.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.

Posted 2 weeks ago

Apply

15.0 - 25.0 years

9 - 13 Lacs

Hyderabad

Work from Office

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 15 years of experience is required.
Educational Qualification: 15 years of full-time education.

Summary: We are seeking a highly experienced Senior Databricks Expert to provide critical technical guidance and support for the productionization of key data products. This role requires a deep understanding of the Databricks Lakehouse Platform and a pragmatic approach to implementing robust, scalable, and performant solutions.

Roles & Responsibilities:
- Expected to be a Subject Matter Expert with deep knowledge and experience.
- Should have influencing and advisory skills.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Facilitate workshops and discussions to gather requirements and feedback from stakeholders.
- Mentor junior professionals in best practices and emerging technologies.

Key Technical Skills & Experience (must have):
- Databricks Platform Mastery: Deep, hands-on expertise across the Databricks Lakehouse Platform, including Delta Lake, Spark SQL, Databricks compute, and an understanding of Unity Catalog principles.
- Advanced Spark Development & Optimization: Proven ability to write, review, and significantly optimize complex Apache Spark (Python/Scala) applications for performance, stability, and efficiency in production.
- Production Data Engineering & Architecture: Strong experience designing, validating, and troubleshooting production-grade data pipelines and architectures on Databricks, adhering to data modeling and software engineering best practices.
- CI/CD & DevOps for Databricks: Practical experience implementing and advising on CI/CD practices for Databricks projects (e.g., using the Databricks CLI, Repos, dbx, Azure DevOps, or GitHub Actions) for automated testing and deployment.
- Databricks Security & Governance: Solid understanding of Databricks security features, including access control models, secrets management, and network configurations, with the ability to advise on their practical application.
- Operational Excellence on Databricks: Experience with monitoring, logging, alerting, and performance-tuning strategies for Databricks jobs and clusters to ensure operational reliability and efficiency.
- Machine Learning on Databricks: Experience with MLflow, Model Serving, and best practices for securely exposing model endpoints (a minimal MLflow sketch follows this listing).
- Problem-Solving & Mentorship: Excellent analytical and troubleshooting skills, with a proven ability to diagnose complex issues and effectively communicate solutions and best practices to technical teams in a supportive, advisory capacity.

Good to have:
- Familiarity with Google Cloud Platform.

Additional Information:
- The candidate should have a minimum of 15 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.
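
As a pointer to the MLflow and Model Serving experience called out above, here is a minimal tracking sketch (assuming the mlflow and scikit-learn packages are installed); the experiment path, metric, and model are illustrative, not details from the listing.

```python
# Minimal MLflow sketch: track a run and log a model, the workflow
# that Model Serving builds on. Experiment path and model are hypothetical.
import mlflow
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# On Databricks this is a workspace path; locally it is just a name.
mlflow.set_experiment("/Shared/churn-demo")

with mlflow.start_run():
    model = LogisticRegression().fit(X, y)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Logging the model creates an artifact that can later be registered
    # and exposed through a serving endpoint.
    mlflow.sklearn.log_model(model, "model")
```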

Posted 2 weeks ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Python (Programming Language)
Minimum 7.5 years of experience is required.
Educational Qualification: Any graduation.

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements using the Databricks Unified Data Analytics Platform. Your typical day will involve working with Python and applying your software development expertise to deliver impactful solutions. The ideal candidate will work in a team environment that demands technical excellence, where members hold each other accountable for the overall success of the end product. The team's focus is on delivering innovative solutions to complex problems, while driving simplicity so the solution can be refined and supported by others.

Roles & Responsibilities:
- Be accountable for the delivery of business functionality.
- Work on the AWS cloud to migrate and re-engineer data and applications from on-premise to cloud.
- Engineer solutions conformant to enterprise standards, architecture, and technologies.
- Provide technical expertise through a hands-on approach, developing solutions that automate testing between systems.
- Perform peer code reviews, merge requests, and production releases.
- Implement design and functionality using Agile principles.
- Bring a proven track record of quality software development and an ability to innovate outside traditional architecture and software patterns when needed.
- Collaborate in a high-performing team environment, with an ability to influence and be influenced by others.
- Maintain a quality mindset: not just code quality, but ongoing data quality, monitoring data to identify problems before they have business impact.
- Be entrepreneurial and business-minded; ask smart questions, take risks, and champion new ideas.
- Take ownership and accountability.

Professional & Technical Skills:
- Minimum of 1 year of experience in Databricks engineering solutions on AWS cloud platforms using PySpark.
- Minimum of 3 years of experience in ETL, Big Data/Hadoop, and data warehouse architecture and delivery.
- Minimum of 2 years of experience in one or more programming languages: Python, Java, Scala.
- Experience using Airflow for data pipelines in at least one project.
- 1 year of experience developing CI/CD pipelines using Git, Jenkins, Docker, Kubernetes, shell scripting, and Terraform.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful software solutions.
- This position is based at our Hyderabad office.
- The resource must be willing to work in B shift (12 PM to 10 PM).

Posted 2 weeks ago

Apply

5.0 - 8.0 years

6 - 11 Lacs

Navi Mumbai

Work from Office

Skill required: Network Billing Operations - Problem Management
Designation: Network & Services Operations Senior Analyst
Qualifications: Any Graduation
Years of Experience: 5 to 8 years

About Accenture: Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song, all powered by the world's largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. Visit us at www.accenture.com.

What would you do?
Help transform back-office and network operations, reduce time to market, and grow revenue by improving customer experience and capex efficiency and reducing cost-to-serve. Good customer support experience is preferred, along with solid networking knowledge. Manage problems caused by information technology infrastructure errors to minimize their adverse impact on business and to prevent their recurrence by seeking the root cause of those incidents and initiating actions to improve or correct the situation.

What are we looking for?
- 5 years of programming skills at an advanced level, covering maintenance of existing and creation of new queries via SQL scripts; Python and PySpark programming skills; experience with Databricks (Palantir is an advantage).
- Self-motivation and an understanding of short turnaround expectations.
- Desire to learn and understand data models and billing processes.
- Critical thinking.
- Experience with reporting and metrics, with strong numerical skills.
- Experience in expense, billing, or financial management.
- Experience in process and system management.
- Good organizational skills; self-disciplined, with a systematic approach and good interpersonal skills.
- Flexibility, an analytical mind, and problem-solving ability.
- Knowledge of telecom products and services.

Roles and Responsibilities:
In this role, you are required to analyze and solve increasingly complex problems. Your day-to-day interactions are with peers within Accenture, and you are likely to have some interaction with clients and/or Accenture management. You will be given minimal instruction on daily work and tasks, and a moderate level of instruction on new assignments. Decisions you make impact your own work and may impact the work of others. You will be an individual contributor and/or oversee a small work effort and/or team. Please note that this role may require you to work in rotational shifts.

Posted 2 weeks ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Gurugram

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Data Analytics
Good-to-have skills: Microsoft SQL Server, Python (Programming Language), AWS Redshift
Minimum 5 years of experience is required.
Educational Qualification: 15 years of full-time education.

Summary: The purpose of the Data Engineering function within the Data and Analytics team is to develop and deliver great data assets and data domain management for our Personal Banking customers and colleagues, seamlessly and reliably every time. As a Senior Data Engineer, you will bring expertise in data handling, curation, and conformity to the team; support the design and development of solutions that assist analysis of data to drive tangible business benefit; and assist colleagues in developing solutions that enable the capture and curation of data for analysis, analytical, and/or reporting purposes. The Senior Data Engineer must have experience working as part of an agile team to develop a solution in a complex enterprise.

Roles & Responsibilities:
- Hands-on development experience in Data Warehousing and/or Software Development.
- Experience utilizing tools and practices to build, verify, and deploy solutions in the most efficient ways.
- Experience in data integration and data sourcing activities.
- Experience developing data assets to support optimized analysis for customer and regulatory outcomes.
- Provide ongoing support for platforms as required, e.g., problem and incident management.
- Experience in Agile software development, including GitHub, Confluence, and Rally.

Professional & Technical Skills:
- Experience with cloud technologies, especially AWS (S3, Redshift, Airflow), and DevOps and DataOps tools (Jenkins, Git, Erwin).
- Advanced SQL and Python user.
- Knowledge of UNIX, Spark, and Databricks.

Additional Information:
- Position: Senior Analyst, Data Engineering
- Reports to: Manager, Data Engineering
- Division: Personal Bank
- Group: 3
- Industry/Domain Skills: Some expertise in Retail Banking, Business Banking, and/or Wealth Management preferred.

Posted 2 weeks ago

Apply