
113 Distributed Computing Jobs - Page 5

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

3 - 5 years

10 - 14 Lacs

Bengaluru

Work from Office


Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: PySpark
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: An Engineering graduate, preferably in Computer Science; 15 years of full-time education

Summary: Overall 3+ years of experience working in Data Analytics projects. MUST be able to understand ETL technology code (Ab Initio) and translate it into Azure-native tools or PySpark. MUST have worked on complex projects. Good to have: any ETL tool development experience; Cloud (Azure) exposure or experience. As an Application Lead, you will be responsible for designing, building, and configuring applications using PySpark. Your typical day will involve leading the effort to develop and deploy PySpark applications, collaborating with cross-functional teams, and ensuring timely delivery of high-quality solutions.

Roles & Responsibilities:
- Lead the effort to design, build, and configure PySpark applications, acting as the primary point of contact.
- Collaborate with cross-functional teams to ensure timely delivery of high-quality solutions.
- Develop and deploy PySpark applications, following best practices and adhering to coding standards.
- Provide technical guidance and mentorship to junior team members, fostering a culture of continuous learning and improvement.
- Stay updated with the latest advancements in PySpark and related technologies, integrating innovative approaches for sustained competitive advantage.

Professional & Technical Skills:
- Must have: proficiency in PySpark.
- Good to have: experience with Hadoop, Hive, and other Big Data technologies.
- Strong understanding of distributed computing principles and data processing frameworks.
- Experience with data ingestion, transformation, and storage using PySpark.
- Solid grasp of SQL and NoSQL databases, including experience with data modeling and schema design.

Additional Information:
- The candidate should have a minimum of 3 years of experience in PySpark.
- The ideal candidate will have a strong educational background in computer science or a related field and a proven track record of delivering impactful data-driven solutions.
- This position is based at our Bengaluru office.

Qualifications: An Engineering graduate, preferably in Computer Science; 15 years of full-time education
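The Ab Initio-to-PySpark translation this listing asks for usually means re-expressing graph components (input file, filter, rollup, output) as DataFrame operations. The sketch below is a hypothetical illustration of that mapping; all paths and column names are invented.

```python
# Hypothetical sketch: an Ab Initio read -> filter -> rollup -> write graph
# re-expressed in PySpark. File paths and column names are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ab-initio-migration-sketch").getOrCreate()

# "Input File" component: read delimited source data
orders = spark.read.csv("/data/in/orders.csv", header=True, inferSchema=True)

# "Filter by Expression" component: keep completed orders only
completed = orders.filter(F.col("status") == "COMPLETED")

# "Rollup" component: aggregate revenue per customer
revenue = completed.groupBy("customer_id").agg(
    F.sum("amount").alias("total_amount"),
    F.count("*").alias("order_count"),
)

# "Output File" component: write results as Parquet
revenue.write.mode("overwrite").parquet("/data/out/customer_revenue")
```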

Posted 1 month ago

Apply

12 - 17 years

14 - 19 Lacs

Bengaluru

Work from Office


Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: PySpark
Good to have skills: Apache Spark, Python (Programming Language), Google BigQuery
Minimum 12 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will be involved in designing, building, and configuring applications to meet business process and application requirements. Your typical day will revolve around creating innovative solutions to address various business needs and ensuring seamless application functionality.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform; responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Lead the team in implementing PySpark solutions effectively.
- Conduct code reviews and ensure adherence to best practices.
- Provide technical guidance and mentorship to junior team members.

Professional & Technical Skills:
- Must have: proficiency in PySpark, Python (Programming Language), Apache Spark, Google BigQuery.
- Strong understanding of distributed computing and parallel processing.
- Experience in optimizing PySpark jobs for performance.
- Knowledge of data processing and transformation techniques.
- Familiarity with cloud platforms for deploying PySpark applications.

Additional Information:
- The candidate should have a minimum of 12 years of experience in PySpark.
- This position is based at our Gurugram office.
- 15 years of full-time education is required.

Qualifications: 15 years of full-time education
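For the job-optimization skills this listing names, the hedged sketch below shows two standard techniques: broadcasting a small dimension table to avoid shuffling the large side of a join, and repartitioning on a well-distributed key before a wide aggregation. Table paths and keys are assumptions for illustration.

```python
# Hypothetical sketch of two routine PySpark optimizations:
# a broadcast join for a small dimension table, and explicit
# repartitioning to control shuffle parallelism before writing.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("optimization-sketch").getOrCreate()

facts = spark.read.parquet("/data/facts")  # large fact table
dims = spark.read.parquet("/data/dims")    # small dimension table

# Broadcast the small side so the join happens map-side, avoiding
# a full shuffle of the large fact table.
joined = facts.join(F.broadcast(dims), on="dim_id")

# Repartition by a well-distributed key before a wide aggregation
# so tasks are balanced across the cluster.
result = (
    joined.repartition(200, "customer_id")
    .groupBy("customer_id")
    .agg(F.sum("amount").alias("total"))
)

result.write.mode("overwrite").parquet("/data/out/totals")
```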

Posted 1 month ago

Apply

2 - 5 years

14 - 17 Lacs

Mumbai

Work from Office


Who you are: A seasoned Data Engineer with a passion for building and managing data pipelines in large-scale environments, with solid experience in big data technologies, data integration frameworks, and cloud-based data platforms, and a strong foundation in Apache Spark, PySpark, Kafka, and SQL.

What you'll do: As a Data Engineer – Data Platform Services, your responsibilities include:

Data Ingestion & Processing
- Assisting in building and optimizing data pipelines for structured and unstructured data.
- Working with Kafka and Apache Spark to manage real-time and batch data ingestion.
- Supporting data integration using IBM CDC and Universal Data Mover (UDM).

Big Data & Data Lakehouse Management
- Managing and processing large datasets using PySpark and Iceberg tables.
- Assisting in migrating data workloads from IIAS to Cloudera Data Lake.
- Supporting data lineage tracking and metadata management for compliance.

Optimization & Performance Tuning
- Helping to optimize PySpark jobs for efficiency and scalability.
- Supporting data partitioning, indexing, and caching strategies.
- Monitoring and troubleshooting pipeline issues and performance bottlenecks.

Security & Compliance
- Implementing role-based access controls (RBAC) and encryption policies.
- Supporting data security and compliance efforts using Thales CipherTrust.
- Ensuring data governance best practices are followed.

Collaboration & Automation
- Working with Data Scientists, Analysts, and DevOps teams to enable seamless data access.
- Assisting in automating data workflows using Apache Airflow.
- Supporting Denodo-based data virtualization for efficient data access.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- 4-7 years of experience in big data engineering, data integration, and distributed computing.
- Strong skills in Apache Spark, PySpark, Kafka, SQL, and Cloudera Data Platform (CDP).
- Proficiency in Python or Scala for data processing.
- Experience with data pipeline orchestration tools (Apache Airflow, Stonebranch UDM).
- Understanding of data security, encryption, and compliance frameworks.

Preferred technical and professional experience:
- Experience in banking or financial services data platforms.
- Exposure to Denodo for data virtualization and DGraph for graph-based insights.
- Familiarity with cloud data platforms (AWS, Azure, GCP).
- Certifications in Cloudera Data Engineering, IBM Data Engineering, or AWS Data Analytics.
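The real-time ingestion work described above typically pairs Kafka with Spark Structured Streaming. A minimal sketch follows, assuming the Spark Kafka connector package is available on the cluster; broker address, topic, and paths are placeholders.

```python
# Hypothetical sketch: reading a Kafka topic with Spark Structured
# Streaming and landing micro-batches as Parquet. Broker addresses,
# topic, and paths are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "transactions")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast the payload to string
# before parsing downstream.
events = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/data/lake/transactions")
    .option("checkpointLocation", "/chk/transactions")
    .start()
)
query.awaitTermination()
```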

Posted 1 month ago

Apply

2 - 5 years

14 - 17 Lacs

Mumbai

Work from Office


Who you are: A Data Engineer specializing in enterprise data platforms, experienced in building, managing, and optimizing data pipelines for large-scale environments, with expertise in big data technologies, distributed computing, data ingestion, and transformation frameworks. Proficient in Apache Spark, PySpark, Kafka, and Iceberg tables, and able to design and implement scalable, high-performance data processing solutions.

What you'll do: As a Data Engineer – Data Platform Services, responsibilities include:

Data Ingestion & Processing
- Designing and developing data pipelines to migrate workloads from IIAS to Cloudera Data Lake.
- Implementing streaming and batch data ingestion frameworks using Kafka and Apache Spark (PySpark).
- Working with IBM CDC and Universal Data Mover to manage data replication and movement.

Big Data & Data Lakehouse Management
- Implementing Apache Iceberg tables for efficient data storage and retrieval.
- Managing distributed data processing with Cloudera Data Platform (CDP).
- Ensuring data lineage, cataloging, and governance for compliance with bank and regulatory policies.

Optimization & Performance Tuning
- Optimizing Spark and PySpark jobs for performance and scalability.
- Implementing data partitioning, indexing, and caching to enhance query performance.
- Monitoring and troubleshooting pipeline failures and performance bottlenecks.

Security & Compliance
- Ensuring secure data access, encryption, and masking using Thales CipherTrust.
- Implementing role-based access controls (RBAC) and data governance policies.
- Supporting metadata management and data quality initiatives.

Collaboration & Automation
- Working closely with Data Scientists, Analysts, and DevOps teams to integrate data solutions.
- Automating data workflows using Airflow and implementing CI/CD pipelines with GitLab and Sonatype Nexus.
- Supporting Denodo-based data virtualization for seamless data access.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- 4-7 years of experience in big data engineering, data integration, and distributed computing.
- Strong skills in Apache Spark, PySpark, Kafka, SQL, and Cloudera Data Platform (CDP).
- Proficiency in Python or Scala for data processing.
- Experience with data pipeline orchestration tools (Apache Airflow, Stonebranch UDM).
- Understanding of data security, encryption, and compliance frameworks.

Preferred technical and professional experience:
- Experience in banking or financial services data platforms.
- Exposure to Denodo for data virtualization and DGraph for graph-based insights.
- Familiarity with cloud data platforms (AWS, Azure, GCP).
- Certifications in Cloudera Data Engineering, IBM Data Engineering, or AWS Data Analytics.
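For the Apache Iceberg work this listing highlights, here is a minimal sketch of creating and appending to a partitioned Iceberg table from Spark, assuming an Iceberg-enabled Spark session with a Hadoop catalog; the catalog, namespace, table, and paths are illustrative.

```python
# Hypothetical sketch: writing to an Apache Iceberg table from Spark.
# Assumes the Iceberg Spark runtime is on the classpath; catalog,
# schema, and table names are illustrative.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("iceberg-sketch")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "/warehouse")
    .getOrCreate()
)

# Create a partitioned Iceberg table if it does not exist yet.
spark.sql("""
    CREATE TABLE IF NOT EXISTS lake.db.trades (
        trade_id BIGINT,
        trade_date DATE,
        amount DOUBLE
    ) USING iceberg
    PARTITIONED BY (trade_date)
""")

# Append a batch of records; Iceberg tracks snapshots, enabling
# time travel and safe schema evolution.
incoming = spark.read.parquet("/data/staging/trades")
incoming.writeTo("lake.db.trades").append()
```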

Posted 1 month ago

Apply

2 - 5 years

14 - 17 Lacs

Mumbai

Work from Office


Who you are: A Data Engineer specializing in enterprise data platforms, experienced in building, managing, and optimizing data pipelines for large-scale environments, with expertise in big data technologies, distributed computing, data ingestion, and transformation frameworks. Proficient in Apache Spark, PySpark, Kafka, and Iceberg tables, and able to design and implement scalable, high-performance data processing solutions.

What you'll do: As a Data Engineer – Data Platform Services, responsibilities include:

Data Ingestion & Processing
- Designing and developing data pipelines to migrate workloads from IIAS to Cloudera Data Lake.
- Implementing streaming and batch data ingestion frameworks using Kafka and Apache Spark (PySpark).
- Working with IBM CDC and Universal Data Mover to manage data replication and movement.

Big Data & Data Lakehouse Management
- Implementing Apache Iceberg tables for efficient data storage and retrieval.
- Managing distributed data processing with Cloudera Data Platform (CDP).
- Ensuring data lineage, cataloging, and governance for compliance with bank and regulatory policies.

Optimization & Performance Tuning
- Optimizing Spark and PySpark jobs for performance and scalability.
- Implementing data partitioning, indexing, and caching to enhance query performance.
- Monitoring and troubleshooting pipeline failures and performance bottlenecks.

Security & Compliance
- Ensuring secure data access, encryption, and masking using Thales CipherTrust.
- Implementing role-based access controls (RBAC) and data governance policies.
- Supporting metadata management and data quality initiatives.

Collaboration & Automation
- Working closely with Data Scientists, Analysts, and DevOps teams to integrate data solutions.
- Automating data workflows using Airflow and implementing CI/CD pipelines with GitLab and Sonatype Nexus.
- Supporting Denodo-based data virtualization for seamless data access.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- 4-7 years of experience in big data engineering, data integration, and distributed computing.
- Strong skills in Apache Spark, PySpark, Kafka, SQL, and Cloudera Data Platform (CDP).
- Proficiency in Python or Scala for data processing.
- Experience with data pipeline orchestration tools (Apache Airflow, Stonebranch UDM).
- Understanding of data security, encryption, and compliance frameworks.

Preferred technical and professional experience:
- Experience in banking or financial services data platforms.
- Exposure to Denodo for data virtualization and DGraph for graph-based insights.
- Familiarity with cloud data platforms (AWS, Azure, GCP).
- Certifications in Cloudera Data Engineering, IBM Data Engineering, or AWS Data Analytics.
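The partitioning and caching strategies this listing mentions can be illustrated with a short, hedged sketch: writing data partitioned by a commonly filtered date column so readers prune files, and caching a reused intermediate DataFrame. Paths and column names are invented.

```python
# Hypothetical sketch of partitioning and caching strategies:
# write data partitioned by date so queries prune files, and cache
# a hot intermediate DataFrame reused by several queries.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("partitioning-sketch").getOrCreate()

events = spark.read.parquet("/data/raw/events")

# Partition on a low-cardinality column used in most filters;
# readers that filter on event_date will skip unrelated files.
(
    events.write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("/data/curated/events")
)

curated = spark.read.parquet("/data/curated/events")
recent = curated.filter(F.col("event_date") >= "2024-01-01").cache()

# Both aggregations reuse the cached partitions instead of rescanning.
recent.groupBy("event_type").count().show()
recent.agg(F.approx_count_distinct("user_id")).show()
```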

Posted 1 month ago

Apply

6 - 10 years

14 - 17 Lacs

Mumbai

Work from Office


Who you are: A Data Engineer specializing in enterprise data platforms, experienced in building, managing, and optimizing data pipelines for large-scale environments, with expertise in big data technologies, distributed computing, data ingestion, and transformation frameworks. Proficient in Apache Spark, PySpark, Kafka, and Iceberg tables, and able to design and implement scalable, high-performance data processing solutions.

What you'll do: As a Data Engineer – Data Platform Services, responsibilities include:

Data Ingestion & Processing
- Designing and developing data pipelines to migrate workloads from IIAS to Cloudera Data Lake.
- Implementing streaming and batch data ingestion frameworks using Kafka and Apache Spark (PySpark).
- Working with IBM CDC and Universal Data Mover to manage data replication and movement.

Big Data & Data Lakehouse Management
- Implementing Apache Iceberg tables for efficient data storage and retrieval.
- Managing distributed data processing with Cloudera Data Platform (CDP).
- Ensuring data lineage, cataloging, and governance for compliance with bank and regulatory policies.

Optimization & Performance Tuning
- Optimizing Spark and PySpark jobs for performance and scalability.
- Implementing data partitioning, indexing, and caching to enhance query performance.
- Monitoring and troubleshooting pipeline failures and performance bottlenecks.

Security & Compliance
- Ensuring secure data access, encryption, and masking using Thales CipherTrust.
- Implementing role-based access controls (RBAC) and data governance policies.
- Supporting metadata management and data quality initiatives.

Collaboration & Automation
- Working closely with Data Scientists, Analysts, and DevOps teams to integrate data solutions.
- Automating data workflows using Airflow and implementing CI/CD pipelines with GitLab and Sonatype Nexus.
- Supporting Denodo-based data virtualization for seamless data access.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- 6-10 years of experience in big data engineering, data processing, and distributed computing.
- Proficiency in Apache Spark, PySpark, Kafka, Iceberg, and Cloudera Data Platform (CDP).
- Strong programming skills in Python, Scala, and SQL.
- Experience with data pipeline orchestration tools (Apache Airflow, Stonebranch UDM).
- Knowledge of data security, encryption, and compliance frameworks.
- Experience working with metadata management and data quality solutions.

Preferred technical and professional experience:
- Experience with data migration projects in the banking/financial sector.
- Knowledge of graph databases (DGraph Enterprise) and data virtualization (Denodo).
- Exposure to cloud-based data platforms (AWS, Azure, GCP).
- Familiarity with MLOps integration for AI-driven data processing.
- Certifications in Cloudera Data Engineering, IBM Data Engineering, or AWS Data Analytics.
- Architectural review of, and recommendations on, migration/transformation solutions.
- Experience working with banking data models.
- Knowledge of the "Meghdoot" cloud platform.
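Workflow automation with Apache Airflow, listed above, centers on DAGs of dependent tasks. Below is a minimal, hypothetical DAG (Airflow 2.4+ API assumed) with an ingest step followed by a validation step; the DAG id and callables are stand-ins.

```python
# Hypothetical sketch: a minimal Airflow DAG orchestrating a daily
# ingest-then-validate workflow. Assumes Airflow 2.4+; the DAG id,
# schedule, and task callables are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest():
    print("pull batch from source into the data lake")


def validate():
    print("run data-quality checks on the new partition")


with DAG(
    dag_id="daily_ingest_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    validate_task = PythonOperator(task_id="validate", python_callable=validate)

    # Validation runs only after ingestion succeeds.
    ingest_task >> validate_task
```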

Posted 1 month ago

Apply

5 - 10 years

5 - 9 Lacs

Hyderabad

Work from Office


Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: PySpark
Good to have skills: Amazon Web Services (AWS)
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will be involved in designing, building, and configuring applications to meet business process and application requirements. Your typical day will revolve around creating innovative solutions to address various business needs and ensuring seamless application functionality.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform; responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Lead the development and implementation of complex applications.
- Conduct code reviews and provide technical guidance to team members.
- Stay updated with the latest technologies and trends in application development.

Professional & Technical Skills:
- Must have: proficiency in PySpark.
- Strong understanding of distributed computing and big data processing.
- Experience in building scalable and high-performance applications.
- Knowledge of cloud platforms such as AWS or Azure.
- Hands-on experience in data processing and analysis.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.

Qualification: 15 years of full-time education
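A hedged sketch of the PySpark-on-AWS work this listing implies: reading raw JSON from S3, aggregating, and writing curated Parquet back. Bucket names are placeholders, and the cluster is assumed to have s3a credentials configured.

```python
# Hypothetical sketch: a PySpark job reading from and writing to S3.
# Bucket names are placeholders; assumes S3 (s3a) credentials and the
# Hadoop AWS connector are configured on the cluster.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-etl-sketch").getOrCreate()

raw = spark.read.json("s3a://example-raw-bucket/events/")

# Derive a date column and count events per day.
daily = (
    raw.withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date")
    .count()
)

daily.write.mode("overwrite").parquet("s3a://example-curated-bucket/daily_counts/")
```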

Posted 1 month ago

Apply

3 - 8 years

10 - 14 Lacs

Chennai

Work from Office


Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Apache Spark
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for overseeing the entire application development process and ensuring its successful implementation.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Lead the design, development, and implementation of applications.
- Collaborate with cross-functional teams to gather and analyze requirements.
- Ensure the application meets quality standards and is delivered on time.
- Provide technical guidance and mentorship to junior team members.
- Stay updated with the latest industry trends and technologies.
- Identify and resolve any issues or bottlenecks in the application development process.

Professional & Technical Skills:
- Must have: proficiency in Apache Spark.
- Strong understanding of distributed computing and parallel processing.
- Experience with big data processing frameworks like Hadoop or Apache Kafka.
- Hands-on experience with programming languages like Java or Scala.
- Knowledge of database systems and SQL.
- Good to have: experience with cloud platforms like AWS or Azure.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Apache Spark.
- This position is based at our Chennai office.
- 15 years of full-time education is required.

Qualification: 15 years of full-time education
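Distributed computing and parallel processing in Spark, which this listing emphasizes, are easiest to see in the classic word count: a narrow map stage followed by a shuffle-backed aggregation. The input path in this sketch is illustrative.

```python
# Hypothetical sketch: the classic distributed word count, showing the
# map/shuffle/reduce flow behind Spark's parallel processing.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("wordcount-sketch").getOrCreate()

lines = spark.read.text("/data/docs")  # illustrative input path

counts = (
    lines.select(F.explode(F.split(F.col("value"), r"\s+")).alias("word"))
    .filter(F.col("word") != "")
    .groupBy("word")   # triggers a shuffle that distributes the work
    .count()
    .orderBy(F.desc("count"))
)

counts.show(20)
```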

Posted 1 month ago

Apply

7 - 10 years

15 - 20 Lacs

Mumbai

Work from Office


Position Overview: The Databricks Data Engineering Lead role is ideal for a highly skilled Databricks Data Engineer who will architect and lead the implementation of scalable, high-performance data pipelines and platforms using the Databricks Lakehouse ecosystem. The role involves managing a team of data engineers, establishing best practices, and collaborating with cross-functional stakeholders to unlock advanced analytics, AI/ML, and real-time decision-making capabilities.

Key Responsibilities:
- Lead the design and development of modern data pipelines, data lakes, and lakehouse architectures using Databricks and Apache Spark.
- Manage and mentor a team of data engineers, providing technical leadership and fostering a culture of excellence.
- Architect scalable ETL/ELT workflows to process structured and unstructured data from various sources (cloud, on-prem, streaming).
- Build and maintain Delta Lake tables and optimize performance for analytics, machine learning, and BI use cases.
- Collaborate with data scientists, analysts, and business teams to deliver high-quality, trusted, and timely data products.
- Ensure best practices in data quality, governance, lineage, and security, including the use of Unity Catalog and access controls.
- Integrate Databricks with cloud platforms (AWS, Azure, or GCP) and data tools (Snowflake, Kafka, Tableau, Power BI, etc.).
- Implement CI/CD pipelines for data workflows using tools such as GitHub, Azure DevOps, or Jenkins.
- Stay current with Databricks innovations and provide recommendations on platform strategy and architecture improvements.

Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Experience: 7+ years of experience in data engineering, including 3+ years working with Databricks and Apache Spark; proven leadership experience in managing and mentoring data engineering teams.
- Skills: Proficiency in PySpark and SQL; experience with Delta Lake, Databricks Workflows, and MLflow; strong understanding of data modeling, distributed computing, and performance tuning; familiarity with one or more major cloud platforms (Azure, AWS, GCP) and cloud-native services; experience implementing data governance and security in large-scale environments; experience with real-time data processing using Structured Streaming or Kafka; knowledge of data privacy, security frameworks, and compliance standards (e.g., PCI DSS, GDPR); exposure to machine learning pipelines, notebooks, and MLOps practices.
- Certifications: Databricks Certified Data Engineer or equivalent certification.
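The Delta Lake work described above commonly revolves around idempotent upserts. This hedged sketch uses the delta-spark API to create a table on first load and merge subsequent batches; paths and the join key are assumptions.

```python
# Hypothetical sketch: writing a Delta Lake table and running a merge
# (upsert). Assumes the delta-spark package is available; paths and
# the customer_id join key are illustrative.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-sketch").getOrCreate()

updates = spark.read.parquet("/data/staging/customers")

if not DeltaTable.isDeltaTable(spark, "/lake/customers"):
    # Initial load: create the Delta table.
    updates.write.format("delta").save("/lake/customers")
else:
    target = DeltaTable.forPath(spark, "/lake/customers")
    # Upsert: update matching customer rows, insert the rest.
    (
        target.alias("t")
        .merge(updates.alias("s"), "t.customer_id = s.customer_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )
```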

Posted 1 month ago

Apply

10 - 17 years

50 - 100 Lacs

Bengaluru

Work from Office


Squarepoint is a global investment management firm that runs a diversified portfolio of systematic and quantitative strategies across financial markets, seeking to achieve high-quality, uncorrelated returns for our clients. We have deep expertise in trading, technology, and operations, and attribute our success to rigorous scientific research. As a technology- and data-driven firm, we design and build our own cutting-edge systems, from high-performance trading platforms to large-scale data analysis and compute farms. With offices around the globe, we emphasize true global collaboration by aligning our investment, technology, and operations teams functionally around the world. Building on our quantitative research platform and process-driven approach, Squarepoint also runs discretionary strategies to augment our systematic approach and monetize opportunities that may not be suitable for a systematic strategy.

Position Overview: We are seeking an experienced and passionate Software Developer to join our growing team. In this role, you will play a key part in designing, building, and maintaining Squarepoint's internal productivity tools, frameworks, and platforms that power our business. You will have the opportunity to work with cutting-edge technologies and make a direct impact on the efficiency and productivity of both investment and technology teams within Squarepoint.

Responsibilities:
- Design, develop, and maintain high-quality, scalable, and performant software solutions.
- Contribute to the development of company-wide productivity tools, frameworks, and platforms that streamline operations across the organization.
- Work collaboratively with other developers and stakeholders to gather requirements, design solutions, and implement features.
- Write clean, well-documented, and testable code.
- Participate in code reviews and contribute to improving code quality and development processes.
- Troubleshoot and resolve technical issues in a timely and efficient manner.
- Stay up to date with the latest technologies and industry best practices.

Requirements:
- 10+ years of professional software development experience.
- Strong proficiency in high-performance Python, with a deep understanding of its ecosystem and best practices.
- Prior or current experience with at least one JVM-based language such as Java, Kotlin, or Scala.
- Solid understanding of distributed systems principles and experience working with distributed architectures.
- Experience with containerization technologies (e.g., Docker, Kubernetes).
- Experience working in a Linux environment and using version control.
- Experience with CI/CD pipelines and automation tools.
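As a loose, hypothetical illustration of the "compute farm" workloads mentioned above, this sketch fans a CPU-bound toy computation across worker processes using only the Python standard library; the scenario function is a stand-in, not Squarepoint code.

```python
# Hypothetical sketch: fanning a CPU-bound computation out across
# worker processes, a miniature of compute-farm-style parallelism.
from concurrent.futures import ProcessPoolExecutor


def price_scenario(seed: int) -> float:
    """Stand-in for an expensive quantitative simulation."""
    acc = 0.0
    for i in range(1, 100_000):
        acc += (seed * i) % 7
    return acc


if __name__ == "__main__":
    scenarios = range(32)
    # Each scenario runs in its own process, sidestepping the GIL
    # for CPU-bound work.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(price_scenario, scenarios))
    print(f"ran {len(results)} scenarios, total={sum(results):.1f}")
```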

Posted 1 month ago

Apply

6 - 10 years

10 - 15 Lacs

Mumbai, Hyderabad, Bengaluru

Work from Office


Job Description

About the Role: We're looking for a Senior Full Stack Developer to join our dynamic and fast-paced development team within Oracle's Database Development Organization. This is a hands-on role where you will be responsible for designing, developing, and maintaining enterprise-grade applications and services. You'll work across the full technology stack, from front-end UI to backend services, and play a critical role in shaping product features and performance.

Tech Stack:
- Frontend: JavaScript, KnockoutJS, Oracle APEX, Oracle Analytics
- Backend: Java, Spring Boot, Microservices
- DevOps & Platform: Kubernetes, Jenkins
- Database: Oracle, SQL, PL/SQL
- Scripting: Python (preferred)

Key Responsibilities:
- Design, develop, and maintain scalable full stack applications using Java and JavaScript-based frameworks.
- Collaborate with product managers, UX designers, and other developers to translate requirements into technical solutions.
- Drive high standards in coding, testing, and quality assurance.
- Design and build microservices that integrate with Oracle's core platforms.
- Deploy and manage services in a Kubernetes environment using CI/CD pipelines (Jenkins).
- Analyze and optimize performance bottlenecks in both frontend and backend layers.
- Write clean, maintainable code with a strong focus on performance and reliability.
- Mentor junior developers and contribute to overall team growth and best practices.

Ideal Candidate Profile:
- 6+ years of experience in software engineering with a solid background in full stack development.
- Proficient in Java, Spring Boot, and building RESTful APIs.
- Hands-on experience with JavaScript frameworks, especially KnockoutJS, and familiarity with Oracle APEX and Analytics tools.
- Strong knowledge of Kubernetes, containerization, and CI/CD tools like Jenkins.
- Solid understanding of data structures, algorithms, and operating systems fundamentals.
- Expertise in SQL and PL/SQL with the ability to write optimized database queries.
- Experience with scripting languages like Python is a plus.
- A self-motivated problem solver with a collaborative spirit and willingness to learn new technologies.

Top 3 Must-Have Skills:
- Strong software engineering background with deep Java expertise.
- Experience designing, building, and deploying microservices in a production environment.
- Solid understanding of system design and distributed computing principles.

Career Level - IC3

About Us: Innovation starts with inclusion at Oracle. We are committed to creating a workplace where all kinds of people can be themselves and do their best work. It's when everyone's voice is heard and valued that we are inspired to go beyond what's been done before. That's why we need people with diverse backgrounds, beliefs, and abilities to help us create the future, and are proud to be an affirmative-action equal opportunity employer. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans status, age, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 1 month ago

Apply

12 - 22 years

40 - 75 Lacs

Hyderabad

Work from Office


Job Description: In HPE Hybrid Cloud, we lead the innovation agenda and technology roadmap for all of HPE. This includes managing the design, development, and product portfolio of our next-generation cloud platform, GreenLake. Working with customers, we help them reimagine their information technology needs to deliver a simple, consumable solution that helps them drive their business results. Join us to redefine what's next for you.

What you'll do:
- Design and develop testing/automation strategies.
- Build automation from the early phases of the product life cycle.
- Execute quality-improvement testing and activities.
- Ensure products meet customer expectations and demand.
- Work closely with the development team and internal and external stakeholders to improve existing products.
- Maintain standards for the reliability and performance of production systems.

What you need to bring:
- Master's/Bachelor's degree in computer science preferred.
- 12+ years of experience in system testing and automation.
- Strong Java/Python programming skills.
- Thorough knowledge of the Linux/UNIX environment.
- Good debugging and problem-solving skills.
- Expertise in file systems, networking, or distributed storage/compute systems.
- Communication skills and the ability to work independently.
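For the testing and automation strategies this role covers, a small pytest-style sketch is shown below. The replication helper it exercises is invented purely to illustrate distributed-storage-flavored checks.

```python
# Hypothetical sketch: a pytest-style automation check exercising a toy
# replication helper. The helper is invented for illustration only.
import pytest


def replicate(blocks: list[bytes], factor: int) -> list[list[bytes]]:
    """Toy stand-in for a distributed-storage replication routine."""
    if factor < 1:
        raise ValueError("replication factor must be >= 1")
    return [list(blocks) for _ in range(factor)]


def test_replicas_match_source():
    blocks = [b"a", b"b", b"c"]
    replicas = replicate(blocks, factor=3)
    assert len(replicas) == 3
    assert all(r == blocks for r in replicas)


def test_invalid_factor_rejected():
    with pytest.raises(ValueError):
        replicate([b"a"], factor=0)
```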

Posted 1 month ago

Apply

6 - 11 years

15 - 19 Lacs

Bengaluru

Work from Office


Job Description: As a Principal Development Engineer, you will be responsible for designing, building, and operating applications, components, and services ranging from identity & access management, cloud services, distributed computing, microservices, and storage replication to highly efficient data planes, serving life-science customers and advancing patient care. You will have the opportunity to work on both architecturally broad and deep software systems engineering problems. You will own development of new components and features, from initial concept through design, implementation, test, and operation. Your work will be used by some of the biggest companies in the world, impacting millions of patients in our goal to achieve better health outcomes for everyone.

Responsibilities include:
- Work with cross-functional team members from Architecture, Product Management, QA, Support & Services, and other central teams to architect, design, and implement software and solutions.
- Define and develop software for tasks associated with developing, designing, and debugging software applications.
- Collaborate with the global development and QA teams to define and meet project milestones.
- Implement high-quality code and review code written by your peers.
- Write test automation for your code.
- Share responsibility with other team members for deploying new code to production.
- Work with the team to operate services that you or your peers have developed.

Qualifications:
- BS or MS degree in Computer Science, Computer Engineering, or an equivalent degree.
- 7+ years of experience in the design and implementation of complex software systems.
- Proven experience with a major object-oriented programming language such as Java or C++.
- Understanding of data structures and design patterns.
- Experience with RESTful web services or cloud platforms such as OCI, AWS, Azure, or Google Cloud.
- Experience working with Docker, Kafka, and ZooKeeper.
- Aptitude for problem solving.
- In-depth knowledge of and/or experience with identity and access management concepts and tools.
- Experience with massively scalable systems is a plus.
- Familiarity with networking concepts like firewalls, VPNs, and DNS is a plus.
- Experience working with healthcare systems or medical data is a plus.

Career Level - IC4
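Since the listing names identity and access management concepts, here is a toy, hedged sketch of a role-based access control (RBAC) check in plain Python; the roles, users, and permissions are invented for illustration.

```python
# Hypothetical sketch: a toy role-based access control (RBAC) check.
# Roles, users, and permissions are invented for illustration.
ROLE_PERMISSIONS = {
    "admin": {"read", "write", "delete"},
    "analyst": {"read"},
    "operator": {"read", "write"},
}

USER_ROLES = {
    "alice": {"admin"},
    "bob": {"analyst", "operator"},
}


def is_authorized(user: str, permission: str) -> bool:
    """A user is authorized if any of their roles grants the permission."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )


assert is_authorized("alice", "delete")
assert is_authorized("bob", "write")
assert not is_authorized("bob", "delete")
```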

Posted 1 month ago

Apply