3.0 - 6.0 years
4 - 7 Lacs
Bengaluru
Work from Office
We are looking for a Senior HPC Engineer to join our IT Infrastructure Engineering & Application (IE&A) group. You will play a critical role in managing high-performance compute environments, automating solutions, and supporting engineering teams across Arm. If you are passionate about HPC systems and want to work on global infrastructure projects, this is the role for you!

Roles & Responsibilities:
- Administer and support IBM Spectrum LSF clusters
- Develop and maintain automation scripts using Bash, Shell, Python, or Perl
- Work with public cloud platforms such as AWS, GCP, or Azure
- Handle ticket-based support and proactively resolve infrastructure issues
- Collaborate on key engineering projects to enhance system resilience and security

Mandatory Key Skills:
1. IBM Spectrum LSF Administration
2. Linux (RedHat) System Administration
3. Bash/Shell/Python/Perl Scripting
4. Public Cloud Platform (AWS/GCP/Azure)
5. Automation & DevOps Tools (Terraform/Ansible/CI/CD)
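The LSF administration and scripting duties above often meet in small automation helpers. As a minimal sketch (assuming the standard LSF `bsub` CLI; the queue name, resource values, and job name are illustrative, not site defaults), a Python function can assemble a submission command rather than hand-typing flags:

```python
import shlex

def build_bsub_command(script_path, queue="normal", cores=4, mem_mb=8192,
                       job_name="hpc-job"):
    """Assemble an LSF bsub submission command as an argument list.

    The rusage string follows standard LSF resource-requirement syntax;
    queue and job names here are placeholders for illustration.
    """
    return [
        "bsub",
        "-q", queue,                    # target queue
        "-n", str(cores),               # job slots requested
        "-R", f"rusage[mem={mem_mb}]",  # memory reservation
        "-J", job_name,                 # job name shown by bjobs
        "-o", f"{job_name}.%J.out",     # stdout file (%J = LSF job ID)
        script_path,
    ]

cmd = build_bsub_command("run_sim.sh", queue="short", cores=8)
print(shlex.join(cmd))  # shell-quoted for logging or dry runs
```

Building the command as a list (and only joining it for display) avoids shell-quoting bugs when the script is eventually handed to `subprocess.run`.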
Posted 16 hours ago
8.0 - 13.0 years
40 - 80 Lacs
Bengaluru, Delhi / NCR, Mumbai (All Areas)
Work from Office
Graduate with 8+ years in product/solution management as a Product Manager / Offering Manager in Cloud, IaaS, HPC, or related high-tech domains.
- Hands-on understanding of technologies such as NVIDIA GPU stacks, containerized HPC (Singularity, Docker), scheduling systems (SLURM, PBS), and Lustre/GPFS
- Familiarity with as-a-Service constructs, subscription models, and TCO discussions
- Product lifecycle management
- Project and cross-functional stakeholder management
- Strong articulation, documentation, and influencing ability
- Able to interact across sales, delivery, product, and finance
Suitable candidates may forward their updated profiles in strict confidence to hr33@hectorandstreak.com or call 9699224920.
Posted 2 days ago
10.0 - 20.0 years
40 - 80 Lacs
Bengaluru, Delhi / NCR, Mumbai (All Areas)
Work from Office
Bachelor's degree in Computer Science with 10+ years of experience with HPC environments.
- Experience in HPC architecture and design, with a proven track record of delivering complex HPC solutions
- Experience in designing and implementing HPC solutions on public cloud, private cloud, and on-premises infrastructure
- Knowledge of HPC technologies, including MPI, OpenMP, InfiniBand, GPFS, Lustre, and other file systems; cluster management tools such as Slurm, Torque, or LSF; and scheduling software such as PBS Pro
- Excellent communication skills, including the ability to communicate technical concepts to both technical and non-technical audiences
- Experience with virtualization and containerization technologies such as Docker, Kubernetes, and Singularity
- Strong understanding of networking technologies and protocols, including TCP/IP, InfiniBand, and RDMA
- Familiarity with one or more programming languages such as C, C++, Fortran, Python, or Java
- Experience working in a multi-vendor, multi-cloud environment
- Strong problem-solving skills and the ability to work under pressure in a fast-paced environment
Suitable candidates may forward their updated profiles in strict confidence to hr33@hectorandstreak.com or call 9699224920.
Posted 2 days ago
4.0 - 8.0 years
6 - 8 Lacs
Hyderabad, Telangana, India
On-site
Let's change the world! In this vital role, you'll be responsible for deploying, maintaining, and supporting High-Performance Computing (HPC) infrastructure across a multi-cloud environment. We're looking for hands-on engineering expertise, with deep technical knowledge of HPC technology and industry best practices.

Roles & Responsibilities
- Implement and manage cloud-based infrastructure that supports HPC environments for data science (e.g., AI/ML workflows, image analysis)
- Collaborate with data scientists and ML engineers to deploy scalable machine learning models into production
- Ensure the security, scalability, and reliability of HPC systems in the cloud
- Optimize cloud resources for cost-effective and efficient use
- Stay ahead of the curve with the latest in cloud services and industry-standard processes
- Provide technical leadership and guidance in cloud and HPC systems management
- Develop and maintain CI/CD pipelines for deploying resources to multi-cloud environments
- Monitor and troubleshoot cluster operations/applications and cloud environments
- Document system design and operational procedures

Must-Have Skills
- Expertise in Linux/Unix system administration (RHEL, CentOS, Ubuntu, etc.)
- Proficiency with job scheduling and resource management tools (SLURM, PBS, LSF, etc.)
- Good understanding of parallel computing, MPI, OpenMP, and GPU acceleration (CUDA, ROCm)
- Knowledge of storage architectures and distributed file systems (Lustre, GPFS, Ceph)
- Experience with containerization technologies (Singularity, Docker) and cloud-based HPC solutions
- Expertise in scripting languages (Python, Bash) and container orchestration (Docker, Kubernetes)
- Familiarity with automation tools (Ansible, Puppet, Chef) for system provisioning and maintenance
- Understanding of networking protocols, high-speed interconnects, and security best practices
- Demonstrable experience in cloud computing (AWS, Azure, GCP) and cloud architecture
- Experience with infrastructure-as-code (IaC) tools such as Terraform or CloudFormation, and Git

What We Expect of You
We are all different, yet we all use our unique contributions to serve patients. We're looking for a professional with expert knowledge of large Linux environments, networking, storage, and cloud-related technologies. You'll also need expertise in root-cause analysis and issue resolution while working with a team and stakeholders. Top-level communication and documentation skills are required. Expertise in Python, Bash, and YAML is expected.

Good-to-Have Skills
- Experience with Kubernetes (EKS) and service mesh architectures
- Knowledge of AWS Lambda and event-driven architectures
- Familiarity with AWS CDK, Ansible, or Packer for cloud automation
- Exposure to multi-cloud environments (Azure, GCP)

Basic Qualifications
- Bachelor's degree in Computer Science, IT, or a related field with 6-8 years of hands-on HPC administration or related experience

Professional Certifications (Preferred)
- Red Hat Certified Engineer (RHCE) or Linux Professional Institute Certification (LPIC)
- AWS Certified Solutions Architect - Associate or Professional

Soft Skills
- Strong analytical and problem-solving skills
- Ability to work effectively with global, virtual teams
- Effective communication and collaboration with cross-functional teams
- Ability to work in a fast-paced, cloud-first environment
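Day-to-day work with the schedulers named above (SLURM, PBS, LSF) revolves around batch submission scripts. As a minimal sketch, a Python helper can render a SLURM batch script; the partition name, resource counts, and time limit are illustrative placeholders, since real values depend on each cluster's configuration:

```python
def render_sbatch_script(job_name, command, partition="compute",
                         nodes=1, ntasks=8, time_limit="01:00:00"):
    """Render a minimal SLURM batch script as a string.

    Uses standard #SBATCH directive syntax; partition and resource
    values here are placeholders, not cluster defaults.
    """
    lines = [
        "#!/bin/bash",
        f"#SBATCH --job-name={job_name}",
        f"#SBATCH --partition={partition}",
        f"#SBATCH --nodes={nodes}",
        f"#SBATCH --ntasks={ntasks}",
        f"#SBATCH --time={time_limit}",
        f"#SBATCH --output={job_name}.%j.log",  # %j expands to the job ID
        "",
        command,  # the actual workload, typically launched via srun
    ]
    return "\n".join(lines)

script = render_sbatch_script("train-model", "srun python train.py")
print(script)
```

Templating scripts this way keeps resource requests reviewable in code review and makes them easy to vary per workload, which is the usual first step toward the CI/CD-driven cluster automation the posting describes.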
Posted 4 days ago
4.0 - 12.0 years
4 - 12 Lacs
Hyderabad, Telangana, India
On-site
This role is all about the design, integration, and management of High-Performance Computing (HPC) systems. This includes both hardware and software, seamlessly integrated into our organization's network infrastructure. You'll be responsible for all activities related to supporting our business and platforms, from system administration to incorporating new technologies in a constantly evolving landscape. Your ultimate goal is to ensure all parts of the system work together to meet our organization's needs.

Roles & Responsibilities
- Implement and manage cloud-based infrastructure that supports HPC environments for data science (e.g., AI/ML workflows, image analysis)
- Collaborate with data scientists and ML engineers to deploy scalable machine learning models into production
- Ensure the security, scalability, and reliability of HPC systems in the cloud
- Optimize cloud resources for cost-effective and efficient use
- Keep abreast of the latest in cloud services and industry-standard processes
- Provide technical leadership and guidance in cloud and HPC systems management
- Develop and maintain CI/CD pipelines for deploying resources to multi-cloud environments
- Monitor and troubleshoot cluster operations/applications and cloud environments
- Document system design and operational procedures

What We Expect of You
We are all different, yet we all use our unique contributions to serve patients. The proactive and technically adept professional we seek has these qualifications.

Basic Qualifications
- Master's degree with 4-6 years of experience in Computer Science, IT, or a related field with hands-on HPC administration; OR Bachelor's degree with 6-8 years of such experience; OR Diploma with 10-12 years of such experience
- Demonstrable experience in cloud computing (preferably AWS) and cloud architecture
- Experience with containerization technologies (Singularity, Docker) and cloud-based HPC solutions
- Experience with infrastructure-as-code (IaC) tools such as Terraform, CloudFormation, Packer, Ansible, and Git
- Expertise in scripting (Python or Bash) and Linux/Unix system administration (preferably Red Hat or Ubuntu)
- Proficiency with job scheduling and resource management tools (SLURM, PBS, LSF, etc.)
- Knowledge of storage architectures and distributed file systems (Lustre, GPFS, Ceph)
- Understanding of networking architecture and security best practices

Preferred Qualifications
- Experience supporting research in healthcare life sciences
- Experience with Kubernetes (EKS) and service mesh architectures
- Knowledge of AWS Lambda and event-driven architectures
- Exposure to multi-cloud environments (Azure, GCP)
- Familiarity with machine learning frameworks (TensorFlow, PyTorch) and data pipelines
- Certifications in cloud architecture (AWS Certified Solutions Architect, Google Cloud Professional Cloud Architect, etc.)
- Experience in an Agile development environment
- Prior work with distributed computing and big data technologies (Hadoop, Spark)

Professional Certifications (Preferred)
- Red Hat Certified Engineer (RHCE) or Linux Professional Institute Certification (LPIC)
- AWS Certified Solutions Architect - Associate or Professional

Soft Skills
- Strong analytical and problem-solving skills
- Ability to work effectively with global, virtual teams
- Effective communication and collaboration with cross-functional teams
- Ability to work in a fast-paced, cloud-first environment
Posted 4 days ago
6.0 - 8.0 years
6 - 8 Lacs
Hyderabad, Telangana, India
On-site
In this vital role you will be responsible for deploying, maintaining, and supporting HPC infrastructure in a multi-cloud environment. This is hands-on engineering that requires deep technical expertise in HPC technology and standard methodologies.

Roles & Responsibilities
- Implement and manage cloud-based infrastructure that supports HPC environments for data science (e.g., AI/ML workflows, image analysis)
- Collaborate with data scientists and ML engineers to deploy scalable machine learning models into production
- Ensure the security, scalability, and reliability of HPC systems in the cloud
- Optimize cloud resources for cost-effective and efficient use
- Stay ahead of the latest in cloud services and industry-standard processes
- Provide technical leadership and guidance in cloud and HPC systems management
- Develop and maintain CI/CD pipelines for deploying resources to multi-cloud environments
- Monitor and troubleshoot cluster operations/applications and cloud environments
- Document system design and operational procedures

Must-Have Skills:
- Expertise in Linux/Unix system administration (RHEL, CentOS, Ubuntu, etc.)
- Proficiency with job scheduling and resource management tools (SLURM, PBS, LSF, etc.)
- Good understanding of parallel computing, MPI, OpenMP, and GPU acceleration (CUDA, ROCm)
- Knowledge of storage architectures and distributed file systems (Lustre, GPFS, Ceph)
- Experience with containerization technologies (Singularity, Docker) and cloud-based HPC solutions
- Expertise in scripting languages (Python, Bash) and container orchestration (Docker, Kubernetes)
- Familiarity with automation tools (Ansible, Puppet, Chef) for system provisioning and maintenance
- Understanding of networking protocols, high-speed interconnects, and security best practices
- Demonstrable experience in cloud computing (AWS, Azure, GCP) and cloud architecture
- Experience with infrastructure-as-code (IaC) tools such as Terraform or CloudFormation, and Git

What we expect of you
We are all different, yet we all use our unique contributions to serve patients. Expert knowledge of large Linux environments, networking, storage, and cloud-related technologies is required, along with expertise in root-cause analysis and remediation while working with a team and stakeholders. Top-level communication and documentation skills are required. Expertise in Python, Bash, and YAML is expected.

Good-to-Have Skills:
- Experience with Kubernetes (EKS) and service mesh architectures
- Knowledge of AWS Lambda and event-driven architectures
- Familiarity with AWS CDK, Ansible, or Packer for cloud automation
- Exposure to multi-cloud environments (Azure, GCP)

Basic Qualifications:
- Bachelor's degree in Computer Science, IT, or a related field with 6-8 years of hands-on HPC administration or related experience

Additional Skills:
- Experience supporting research in healthcare life sciences
- Deep, extensive experience with High-Performance Computing (HPC) and cluster management
- Familiarity with machine learning frameworks (TensorFlow, PyTorch) and data pipelines
- Certifications in cloud architecture (AWS Certified Solutions Architect, Google Cloud Professional Cloud Architect, etc.)
- Experience in an Agile development environment
- Prior work with distributed computing and big data technologies (Hadoop, Spark)

Professional Certifications (Preferred):
- Red Hat Certified Engineer (RHCE) or Linux Professional Institute Certification (LPIC)
- AWS Certified Solutions Architect - Associate or Professional

Soft Skills:
- Strong analytical and problem-solving skills
- Ability to work effectively with global, virtual teams
- Effective communication and collaboration with cross-functional teams
- Ability to work in a fast-paced, cloud-first environment
Posted 4 days ago
3.0 - 12.0 years
3 - 12 Lacs
Hyderabad, Telangana, India
On-site
What you will do
Let's do this. Let's change the world. In this vital role, you will primarily focus on analyzing scientific requirements from Global Research and translating them into efficient and effective information systems solutions. As a domain expert, the prospective Business Analyst will collaborate with cross-functional teams to identify data product enhancement opportunities, perform data analysis, solve issues, and support system implementation and maintenance. The role also involves developing the data product launch and user adoption strategy for Amgen Research Foundational Data Systems. Your expertise in business process analysis and technology will contribute to the successful delivery of IT solutions that drive operational efficiency and meet business objectives.

- Collaborate with geographically dispersed teams, including those in the US, EU, and other international locations
- Partner with and ensure alignment of the Amgen India DTI site leadership, and follow global standards and practices
- Foster a culture of collaboration, innovation, and continuous improvement
- Function as a Scientific Business Analyst, providing domain expertise for Research Data and Analytics within a Scaled Agile Framework (SAFe) product team
- Serve as Agile team scrum master or project manager as needed
- Serve as a liaison between global DTI functional areas and global research scientists, prioritizing their needs and expectations
- Create functional analytics dashboards and fit-for-purpose applications for quantitative research, scientific analysis, and business intelligence (Databricks, Spotfire, Tableau, Dash, Streamlit, RShiny)
- Manage a suite of custom internal platforms, commercial off-the-shelf (COTS) software, and systems integrations
- Translate complex scientific and technological needs into clear, actionable requirements for development teams
- Develop and maintain release deliverables that clearly outline the planned features, enhancements, timelines, and milestones
- Identify and manage risks associated with the systems, including technological risks, scientific validation, and user acceptance
- Develop documentation, communication plans, and training plans for end users
- Ensure scientific data operations are scoped into building Research-wide Artificial Intelligence/Machine Learning capabilities
- Ensure operational excellence, cybersecurity, and compliance

What we expect of you
We are all different, yet we all use our unique contributions to serve patients. This role requires expertise in biopharma scientific domains as well as informatics solution delivery. Additionally, extensive collaboration with global teams is required to ensure seamless integration and operational excellence. The ideal candidate will have a solid background in the end-to-end software development lifecycle and be a Scaled Agile practitioner, coupled with change management and transformation experience.
This role demands the ability to deliver against key organizational strategic initiatives, develop a collaborative environment, and deliver high-quality results in a matrixed organizational structure.

Basic Qualifications/Skills:
- Doctorate degree; OR Master's degree and 4 to 6 years of Life Science / Biotechnology / Pharmacology / Information Systems experience; OR Bachelor's degree and 6 to 8 years of such experience; OR Diploma and 10 to 12 years of such experience
- Excellent problem-solving skills and a passion for solving complex challenges in drug discovery with technology and data
- Superb communication skills and experience creating impactful slide decks with data
- Collaborative spirit and effective communication skills to work seamlessly in a multi-functional team
- Familiarity with data analytics and scientific computing platforms such as Databricks, Dash, Streamlit, RShiny, Spotfire, and Tableau, and related programming languages like SQL, Python, and R

Preferred Qualifications/Skills:
- BS, MS, or PhD in Bioinformatics, Computational Biology, Computational Chemistry, Life Sciences, Computer Science, or Engineering
- 3+ years of experience in implementing and supporting biopharma scientific research data analytics
- Demonstrated expertise in a scientific domain area and related technology needs
- Understanding of semantics and FAIR (Findability, Accessibility, Interoperability, and Reuse) data concepts
- Understanding of scientific data strategy, data governance, and data infrastructure
- Experience with cloud (e.g., AWS) and on-premises compute infrastructure
- Familiarity with advanced analytics, AI/ML, and scientific computing infrastructure, such as High-Performance Compute (HPC) environments and clusters (e.g., SLURM, Kubernetes)
- Experience with scientific and technical team collaborations, ensuring seamless coordination across teams and driving the successful delivery of technical projects
- Ability to deliver features meeting research user demands using Agile methodology
- An ongoing commitment to learning and staying at the forefront of AI/ML advancements

We understand that to successfully sustain and grow as a global enterprise and deliver for patients we must ensure a diverse and inclusive work environment.

Professional Certifications:
- SAFe for Teams certification (preferred)
- SAFe Scrum Master or similar (preferred)

Soft Skills:
- Strong transformation and change management experience
- Exceptional collaboration and communication skills
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented with a focus on achieving team goals
- Strong presentation and public speaking skills

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
Posted 4 days ago
4.0 - 6.0 years
4 - 6 Lacs
Hyderabad / Secunderabad, Telangana, Telangana, India
On-site
The role is responsible for the design, integration, and management of high-performance computing (HPC) systems that encompass both hardware and software components within the organization's network infrastructure. This individual will be responsible for all activities related to handling and supporting the business and platforms, including system administration, as well as incorporating new technologies amid a sophisticated and constantly evolving technology landscape. This role involves ensuring that all parts of a system work together seamlessly to meet the organization's requirements.

Roles & Responsibilities:
- Implement and manage cloud-based infrastructure that supports HPC environments for data science (e.g., AI/ML workflows, image analysis)
- Collaborate with data scientists and ML engineers to deploy scalable machine learning models into production
- Ensure the security, scalability, and reliability of HPC systems in the cloud
- Optimize cloud resources for cost-effective and efficient use
- Keep abreast of the latest in cloud services and industry-standard processes
- Provide technical leadership and guidance in cloud and HPC systems management
- Develop and maintain CI/CD pipelines for deploying resources to multi-cloud environments
- Monitor and troubleshoot cluster operations/applications and cloud environments
- Document system design and operational procedures

Basic Qualifications:
- Master's degree with 4-6 years of experience in Computer Science, IT, or a related field with hands-on HPC administration; OR Bachelor's degree with 6-8 years of such experience; OR Diploma with 10-12 years of such experience
- Demonstrable experience in cloud computing (preferably AWS) and cloud architecture
- Experience with containerization technologies (Singularity, Docker) and cloud-based HPC solutions
- Experience with infrastructure-as-code (IaC) tools such as Terraform, CloudFormation, Packer, Ansible, and Git
- Expertise in scripting (Python or Bash) and Linux/Unix system administration (preferably Red Hat or Ubuntu)
- Proficiency with job scheduling and resource management tools (SLURM, PBS, LSF, etc.)
- Knowledge of storage architectures and distributed file systems (Lustre, GPFS, Ceph)
- Understanding of networking architecture and security best practices

Preferred Qualifications:
- Experience supporting research in healthcare life sciences
- Experience with Kubernetes (EKS) and service mesh architectures
- Knowledge of AWS Lambda and event-driven architectures
- Exposure to multi-cloud environments (Azure, GCP)
- Familiarity with machine learning frameworks (TensorFlow, PyTorch) and data pipelines
- Certifications in cloud architecture (AWS Certified Solutions Architect, Google Cloud Professional Cloud Architect, etc.)
- Experience in an Agile development environment
- Prior work with distributed computing and big data technologies (Hadoop, Spark)

Professional Certifications:
- Red Hat Certified Engineer (RHCE) or Linux Professional Institute Certification (LPIC)
- AWS Certified Solutions Architect - Associate or Professional

Soft Skills:
- Strong analytical and problem-solving skills
- Ability to work effectively with global, virtual teams
- Effective communication and collaboration with cross-functional teams
- Ability to work in a fast-paced, cloud-first environment
Posted 2 weeks ago
5.0 - 8.0 years
5 - 8 Lacs
Chennai, Tamil Nadu, India
On-site
Key Responsibilities:
- Data Pipeline Development: Assist in the design and implementation of data pipelines to extract, transform, and load (ETL) data from various sources into data warehouses or databases
- Data Quality Assurance: Monitor and ensure the quality and integrity of data throughout the data lifecycle, identifying and resolving any data discrepancies or issues
- Collaboration & Analysis: Work closely with data analysts, data scientists, and other stakeholders to understand data requirements and deliver solutions that meet business needs, as well as perform analyses aligned to the anchor domain
- Documentation: Maintain clear and comprehensive documentation of data processes, pipeline architectures, and data models for reference and training purposes
- Performance Optimization: Help optimize data processing workflows and improve the efficiency of existing data pipelines
- Support Data Infrastructure: Assist in the maintenance and monitoring of data infrastructure, ensuring systems are running smoothly and efficiently
- Learning and Development: Stay updated on industry trends and best practices in data engineering, actively seeking opportunities to learn and grow in the field

Qualifications
- Education: Bachelor's degree in Computer Science, Data Science, Information Technology, or a related field preferred; relevant coursework or certifications in data engineering or programming is a plus
- Technical Skills: Familiarity with programming languages such as Python or JavaScript; knowledge of SQL and experience with databases (e.g., Snowflake, MySQL, or PostgreSQL) is preferred
- Data Tools: Exposure to data processing frameworks and scheduling tools (e.g., PBS/Torque, Slurm, or Airflow) is a plus
- Analytical Skills: Strong analytical and problem-solving skills, with keen attention to detail
- Communication Skills: Excellent verbal and written communication skills, with the ability to convey technical information clearly to non-technical stakeholders
- Team Player: Ability to work collaboratively in a team environment and contribute to group projects
- Adaptability: Willingness to learn new technologies and adapt to changing priorities in a fast-paced environment
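The extract-transform-load pattern named in the responsibilities above can be sketched minimally in Python. This is an illustrative toy, not a production pipeline: the `sales` table, its columns, and the in-memory SQLite "warehouse" are all stand-ins chosen for the example.

```python
import sqlite3

def extract(rows):
    """Extract step: yield raw records from a source system
    (here an in-memory list stands in for the source)."""
    yield from rows

def transform(records):
    """Transform step: normalize names, cast amounts, drop incomplete rows."""
    for name, amount in records:
        if name and amount is not None:
            yield name.strip().lower(), float(amount)

def load(records, conn):
    """Load step: write cleaned rows into the warehouse table."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (name TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)", records)
    conn.commit()

raw = [(" Alice ", "10.5"), ("Bob", None), ("Carol", "7")]
conn = sqlite3.connect(":memory:")
load(transform(extract(raw)), conn)  # the incomplete "Bob" row is dropped
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM sales").fetchone()
print(total)  # → (2, 17.5)
```

Keeping each stage a generator means rows stream through without materializing intermediate lists, which is the same shape the posting's workflow tools (e.g., Airflow tasks) impose at larger scale.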
Posted 2 weeks ago
4.0 - 9.0 years
4 - 9 Lacs
Hyderabad / Secunderabad, Telangana, Telangana, India
On-site
Roles & Responsibilities:
- Collaborate with geographically dispersed teams, including those in the US, EU, and other international locations
- Partner with and ensure alignment of the Amgen India DTI site leadership, and follow global standards and practices
- Foster a culture of collaboration, innovation, and continuous improvement
- Function as a Scientific Business Analyst, providing domain expertise for Research Data and Analytics within a Scaled Agile Framework (SAFe) product team
- Serve as Agile team scrum master or project manager as needed
- Serve as a liaison between global DTI functional areas and global research scientists, prioritizing their needs and expectations
- Create functional analytics dashboards and fit-for-purpose applications for quantitative research, scientific analysis, and business intelligence (Databricks, Spotfire, Tableau, Dash, Streamlit, RShiny)
- Manage a suite of custom internal platforms, commercial off-the-shelf (COTS) software, and systems integrations
- Translate complex scientific and technological needs into clear, actionable requirements for development teams
- Develop and maintain release deliverables that clearly outline the planned features, enhancements, timelines, and milestones
- Identify and manage risks associated with the systems, including technological risks, scientific validation, and user acceptance
- Develop documentation, communication plans, and training plans for end users
- Ensure scientific data operations are scoped into building Research-wide Artificial Intelligence/Machine Learning capabilities
- Ensure operational excellence, cybersecurity, and compliance

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications:
- Doctorate degree; OR Master's degree and 4 to 6 years of Life Science/Biotechnology/Pharmacology/Information Systems experience; OR Bachelor's degree and 6 to 8 years of such experience; OR Diploma and 10 to 12 years of such experience

Preferred Qualifications:
- BS, MS, or PhD in Bioinformatics, Computational Biology, Computational Chemistry, Life Sciences, Computer Science, or Engineering
- 3+ years of experience in implementing and supporting biopharma scientific research data analytics

Functional Skills:
Must-Have Skills:
- Excellent problem-solving skills and a passion for tackling complex challenges in drug discovery with technology and data
- Excellent communication skills and experience creating impactful slide decks with data
- Collaborative spirit and effective communication skills to work seamlessly in a cross-functional team
- Familiarity with data analytics and scientific computing platforms such as Databricks, Dash, Streamlit, RShiny, Spotfire, and Tableau, and related programming languages like SQL, Python, and R

Good-to-Have Skills:
- Demonstrated expertise in a scientific domain area and related technology needs
- Understanding of semantics and FAIR (Findability, Accessibility, Interoperability, and Reuse) data concepts
- Understanding of scientific data strategy, data governance, and data infrastructure
- Experience with cloud (e.g., AWS) and on-premises compute infrastructure
- Familiarity with advanced analytics, AI/ML, and scientific computing infrastructure, such as High-Performance Compute (HPC) environments and clusters (e.g., SLURM, Kubernetes)
- Experience with scientific and technical team collaborations, ensuring seamless coordination across teams and driving the successful delivery of technical projects
- Ability to deliver features meeting research user demands using Agile methodology
- An ongoing commitment to learning and staying at the forefront of AI/ML advancements

We understand that to successfully sustain and grow as a global enterprise and deliver for patients we must ensure a diverse and inclusive work environment.

Professional Certifications
- SAFe for Teams certification (preferred)
- SAFe Scrum Master or similar (preferred)

Soft Skills:
- Strong transformation and change management experience
- Exceptional collaboration and communication skills
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented with a focus on achieving team goals
- Strong presentation and public speaking skills
Posted 3 weeks ago
4.0 - 6.0 years
10 - 12 Lacs
Hyderabad
Work from Office
Seeking a Senior HPC Administrator to manage and optimize high-performance computing systems. Responsibilities include cluster management, performance tuning, and user support. Requires 5+ years' experience with HPC, Linux, and job schedulers.
Required Candidate Profile: Notice period of immediate to 30 days maximum.
Posted 1 month ago
2.0 - 4.0 years
3 - 6 Lacs
Mumbai, Hyderabad, Bengaluru
Work from Office
Hiring an HPC Administrator to manage and support high-performance computing systems. Responsibilities include cluster setup, maintenance, monitoring, and user support. Requires 3+ years' experience with HPC environments, Linux, and schedulers.
Required Candidate Profile: Notice period of immediate to 30 days maximum.
Posted 1 month ago
5 - 10 years
15 - 30 Lacs
Bengaluru
Work from Office
Design and manage HPC infrastructure for geophysics, simulation, and ML/AI using Azure and Linux. Optimize compute environments and support job schedulers, file systems, and parallel processing workflows.
Required Candidate Profile: Experienced HPC engineer with 5-10 years in Linux, Azure, job schedulers, and supporting scientific workloads in a large-scale enterprise environment.
Posted 1 month ago