3.0 years
8 - 30 Lacs
Bhubaneswar, Odisha, India
On-site
Industry & Sector: A fast-scaling provider of analytics and data engineering services within the enterprise software and digital transformation sector in India seeks an onsite PySpark Engineer to build and optimize high-volume data pipelines on modern big-data platforms.

Role & Responsibilities
- Design, develop, and maintain PySpark-based batch and streaming pipelines for data ingestion, cleansing, transformation, and aggregation.
- Optimize Spark jobs for performance and cost, tuning partitions, caching strategies, and join execution plans (a tuning sketch follows this listing).
- Integrate diverse data sources (RDBMS, NoSQL, cloud storage, and REST APIs) into unified, consumable datasets for analytics and reporting teams.
- Implement robust data quality, error handling, and lineage tracking using Spark SQL, Delta Lake, and metadata tools.
- Collaborate with data architects and BI teams to translate analytical requirements into scalable data models.
- Follow Agile delivery practices, write unit and integration tests, and automate deployments through Git-driven CI/CD pipelines.

Skills & Qualifications

Must-Have
- 3+ years of hands-on PySpark development in production environments.
- Deep knowledge of Spark SQL, DataFrames, RDD optimizations, and performance tuning.
- Proficiency in Python 3, object-oriented design, and writing reusable modules.
- Experience with the Hadoop ecosystem, Hive/Impala, and cloud object storage such as S3, ADLS, or GCS.
- Strong SQL skills and an understanding of star/snowflake schema modeling.

Preferred
- Exposure to Delta Lake, Apache Airflow, or Kafka for orchestration and streaming.
- Experience deploying on Databricks or EMR and configuring autoscaling clusters.
- Knowledge of Docker or Kubernetes for containerized data workloads.

Benefits & Culture Highlights
- Hands-on work with modern open-source tech stacks and leading cloud platforms.
- Mentorship from senior data engineers and architects, fostering rapid skill growth.
- Performance-based bonuses, skill-development stipends, and a collaborative, innovation-driven environment.

Skills: SQL, Hadoop ecosystem, PySpark, Scala, Python 3, performance tuning, problem solving, Apache Airflow, Hive, EMR, Kubernetes, Agile, Impala, DataFrames, Delta Lake, RDD optimizations, object-oriented design, Spark, data modeling, Databricks, Spark SQL, Docker, ETL, Kafka
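The tuning bullet above (partitions, caching, join execution plans) maps to a few standard PySpark moves. A minimal sketch, assuming a large fact table and a small dimension table; all paths, keys, and partition counts are illustrative:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("join-tuning-sketch").getOrCreate()

# Illustrative inputs; replace with real datasets.
fact = spark.read.parquet("s3://example-bucket/fact_orders/")
dim = spark.read.parquet("s3://example-bucket/dim_customers/")

# Repartition the fact table on the join key so shuffle work is balanced.
fact = fact.repartition(200, "customer_id")

# Broadcast the small dimension table to avoid a shuffle-heavy sort-merge join.
joined = fact.join(F.broadcast(dim), "customer_id")

# Cache only when the result feeds several downstream actions.
joined.cache()
joined.groupBy("region").agg(F.sum("amount").alias("revenue")).show()
```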
Posted 1 week ago
3.0 years
8 - 30 Lacs
Bengaluru, Karnataka, India
On-site
Industry & Sector: A fast-scaling provider of analytics and data engineering services within the enterprise software and digital transformation sector in India seeks an onsite PySpark Engineer to build and optimize high-volume data pipelines on modern big-data platforms.

Role & Responsibilities
- Design, develop, and maintain PySpark-based batch and streaming pipelines for data ingestion, cleansing, transformation, and aggregation (a streaming sketch follows this listing).
- Optimize Spark jobs for performance and cost, tuning partitions, caching strategies, and join execution plans.
- Integrate diverse data sources (RDBMS, NoSQL, cloud storage, and REST APIs) into unified, consumable datasets for analytics and reporting teams.
- Implement robust data quality, error handling, and lineage tracking using Spark SQL, Delta Lake, and metadata tools.
- Collaborate with data architects and BI teams to translate analytical requirements into scalable data models.
- Follow Agile delivery practices, write unit and integration tests, and automate deployments through Git-driven CI/CD pipelines.

Skills & Qualifications

Must-Have
- 3+ years of hands-on PySpark development in production environments.
- Deep knowledge of Spark SQL, DataFrames, RDD optimizations, and performance tuning.
- Proficiency in Python 3, object-oriented design, and writing reusable modules.
- Experience with the Hadoop ecosystem, Hive/Impala, and cloud object storage such as S3, ADLS, or GCS.
- Strong SQL skills and an understanding of star/snowflake schema modeling.

Preferred
- Exposure to Delta Lake, Apache Airflow, or Kafka for orchestration and streaming.
- Experience deploying on Databricks or EMR and configuring autoscaling clusters.
- Knowledge of Docker or Kubernetes for containerized data workloads.

Benefits & Culture Highlights
- Hands-on work with modern open-source tech stacks and leading cloud platforms.
- Mentorship from senior data engineers and architects, fostering rapid skill growth.
- Performance-based bonuses, skill-development stipends, and a collaborative, innovation-driven environment.

Skills: SQL, Hadoop ecosystem, PySpark, Scala, Python 3, performance tuning, problem solving, Apache Airflow, Hive, EMR, Kubernetes, Agile, Impala, DataFrames, Delta Lake, RDD optimizations, object-oriented design, Spark, data modeling, Databricks, Spark SQL, Docker, ETL, Kafka
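The "streaming pipelines" duty above can be sketched with Structured Streaming. This assumes the spark-sql-kafka and delta-spark packages are on the classpath; brokers, topic, and paths are placeholders:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream-ingest-sketch").getOrCreate()

# Read a Kafka topic as an unbounded DataFrame (placeholder broker/topic).
events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "orders")
          .load())

# Kafka values arrive as bytes; cast and stamp ingestion time.
parsed = (events.selectExpr("CAST(value AS STRING) AS raw")
          .withColumn("ingested_at", F.current_timestamp()))

# A checkpoint location makes the stream restartable without duplicates.
(parsed.writeStream.format("delta")
 .option("checkpointLocation", "s3://example-bucket/checkpoints/orders/")
 .start("s3://example-bucket/bronze/orders/")
 .awaitTermination())
```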
Posted 1 week ago
3.0 years
8 - 30 Lacs
Hyderabad, Telangana, India
On-site
Industry & Sector: A fast-scaling provider of analytics and data engineering services within the enterprise software and digital transformation sector in India seeks an onsite PySpark Engineer to build and optimize high-volume data pipelines on modern big-data platforms.

Role & Responsibilities
- Design, develop, and maintain PySpark-based batch and streaming pipelines for data ingestion, cleansing, transformation, and aggregation.
- Optimize Spark jobs for performance and cost, tuning partitions, caching strategies, and join execution plans.
- Integrate diverse data sources (RDBMS, NoSQL, cloud storage, and REST APIs) into unified, consumable datasets for analytics and reporting teams.
- Implement robust data quality, error handling, and lineage tracking using Spark SQL, Delta Lake, and metadata tools (an upsert-with-checks sketch follows this listing).
- Collaborate with data architects and BI teams to translate analytical requirements into scalable data models.
- Follow Agile delivery practices, write unit and integration tests, and automate deployments through Git-driven CI/CD pipelines.

Skills & Qualifications

Must-Have
- 3+ years of hands-on PySpark development in production environments.
- Deep knowledge of Spark SQL, DataFrames, RDD optimizations, and performance tuning.
- Proficiency in Python 3, object-oriented design, and writing reusable modules.
- Experience with the Hadoop ecosystem, Hive/Impala, and cloud object storage such as S3, ADLS, or GCS.
- Strong SQL skills and an understanding of star/snowflake schema modeling.

Preferred
- Exposure to Delta Lake, Apache Airflow, or Kafka for orchestration and streaming.
- Experience deploying on Databricks or EMR and configuring autoscaling clusters.
- Knowledge of Docker or Kubernetes for containerized data workloads.

Benefits & Culture Highlights
- Hands-on work with modern open-source tech stacks and leading cloud platforms.
- Mentorship from senior data engineers and architects, fostering rapid skill growth.
- Performance-based bonuses, skill-development stipends, and a collaborative, innovation-driven environment.

Skills: SQL, Hadoop ecosystem, PySpark, Scala, Python 3, performance tuning, problem solving, Apache Airflow, Hive, EMR, Kubernetes, Agile, Impala, DataFrames, Delta Lake, RDD optimizations, object-oriented design, Spark, data modeling, Databricks, Spark SQL, Docker, ETL, Kafka
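The data-quality and Delta Lake bullet above suggests an idempotent upsert guarded by simple checks. A sketch assuming the delta-spark package; table paths and the key column are illustrative:

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("delta-upsert-sketch").getOrCreate()

updates = spark.read.parquet("s3://example-bucket/staging/customers/")

# Minimal quality gate: fail fast on null or duplicated business keys.
assert updates.filter("customer_id IS NULL").count() == 0, "null keys in batch"
assert updates.count() == updates.dropDuplicates(["customer_id"]).count(), "duplicate keys"

# Idempotent upsert into the curated table; reruns won't double-insert.
target = DeltaTable.forPath(spark, "s3://example-bucket/silver/customers/")
(target.alias("t")
 .merge(updates.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```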
Posted 1 week ago
3.0 years
8 - 30 Lacs
Kochi, Kerala, India
On-site
Industry & Sector: A fast-scaling provider of analytics and data engineering services within the enterprise software and digital transformation sector in India seeks an onsite PySpark Engineer to build and optimize high-volume data pipelines on modern big-data platforms.

Role & Responsibilities
- Design, develop, and maintain PySpark-based batch and streaming pipelines for data ingestion, cleansing, transformation, and aggregation.
- Optimize Spark jobs for performance and cost, tuning partitions, caching strategies, and join execution plans.
- Integrate diverse data sources (RDBMS, NoSQL, cloud storage, and REST APIs) into unified, consumable datasets for analytics and reporting teams.
- Implement robust data quality, error handling, and lineage tracking using Spark SQL, Delta Lake, and metadata tools.
- Collaborate with data architects and BI teams to translate analytical requirements into scalable data models.
- Follow Agile delivery practices, write unit and integration tests, and automate deployments through Git-driven CI/CD pipelines.

Skills & Qualifications

Must-Have
- 3+ years of hands-on PySpark development in production environments.
- Deep knowledge of Spark SQL, DataFrames, RDD optimizations, and performance tuning.
- Proficiency in Python 3, object-oriented design, and writing reusable modules.
- Experience with the Hadoop ecosystem, Hive/Impala, and cloud object storage such as S3, ADLS, or GCS.
- Strong SQL skills and an understanding of star/snowflake schema modeling (a star-schema query sketch follows this listing).

Preferred
- Exposure to Delta Lake, Apache Airflow, or Kafka for orchestration and streaming.
- Experience deploying on Databricks or EMR and configuring autoscaling clusters.
- Knowledge of Docker or Kubernetes for containerized data workloads.

Benefits & Culture Highlights
- Hands-on work with modern open-source tech stacks and leading cloud platforms.
- Mentorship from senior data engineers and architects, fostering rapid skill growth.
- Performance-based bonuses, skill-development stipends, and a collaborative, innovation-driven environment.

Skills: SQL, Hadoop ecosystem, PySpark, Scala, Python 3, performance tuning, problem solving, Apache Airflow, Hive, EMR, Kubernetes, Agile, Impala, DataFrames, Delta Lake, RDD optimizations, object-oriented design, Spark, data modeling, Databricks, Spark SQL, Docker, ETL, Kafka
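The star/snowflake modeling requirement above boils down to fact-to-dimension joins. A small Spark SQL sketch with illustrative table names and keys:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star-schema-sketch").getOrCreate()

spark.read.parquet("s3://example-bucket/fact_sales/").createOrReplaceTempView("fact_sales")
spark.read.parquet("s3://example-bucket/dim_date/").createOrReplaceTempView("dim_date")
spark.read.parquet("s3://example-bucket/dim_product/").createOrReplaceTempView("dim_product")

# A classic star-schema rollup: one fact table joined to two dimensions.
spark.sql("""
    SELECT d.year, p.category, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_date d    ON f.date_key = d.date_key
    JOIN dim_product p ON f.product_key = p.product_key
    GROUP BY d.year, p.category
""").show()
```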
Posted 1 week ago
3.0 years
8 - 30 Lacs
Pune, Maharashtra, India
On-site
Industry & Sector: A fast-scaling provider of analytics and data engineering services within the enterprise software and digital transformation sector in India seeks an onsite PySpark Engineer to build and optimize high-volume data pipelines on modern big-data platforms.

Role & Responsibilities
- Design, develop, and maintain PySpark-based batch and streaming pipelines for data ingestion, cleansing, transformation, and aggregation.
- Optimize Spark jobs for performance and cost, tuning partitions, caching strategies, and join execution plans.
- Integrate diverse data sources (RDBMS, NoSQL, cloud storage, and REST APIs) into unified, consumable datasets for analytics and reporting teams.
- Implement robust data quality, error handling, and lineage tracking using Spark SQL, Delta Lake, and metadata tools.
- Collaborate with data architects and BI teams to translate analytical requirements into scalable data models.
- Follow Agile delivery practices, write unit and integration tests, and automate deployments through Git-driven CI/CD pipelines (a pytest sketch follows this listing).

Skills & Qualifications

Must-Have
- 3+ years of hands-on PySpark development in production environments.
- Deep knowledge of Spark SQL, DataFrames, RDD optimizations, and performance tuning.
- Proficiency in Python 3, object-oriented design, and writing reusable modules.
- Experience with the Hadoop ecosystem, Hive/Impala, and cloud object storage such as S3, ADLS, or GCS.
- Strong SQL skills and an understanding of star/snowflake schema modeling.

Preferred
- Exposure to Delta Lake, Apache Airflow, or Kafka for orchestration and streaming.
- Experience deploying on Databricks or EMR and configuring autoscaling clusters.
- Knowledge of Docker or Kubernetes for containerized data workloads.

Benefits & Culture Highlights
- Hands-on work with modern open-source tech stacks and leading cloud platforms.
- Mentorship from senior data engineers and architects, fostering rapid skill growth.
- Performance-based bonuses, skill-development stipends, and a collaborative, innovation-driven environment.

Skills: SQL, Hadoop ecosystem, PySpark, Scala, Python 3, performance tuning, problem solving, Apache Airflow, Hive, EMR, Kubernetes, Agile, Impala, DataFrames, Delta Lake, RDD optimizations, object-oriented design, Spark, data modeling, Databricks, Spark SQL, Docker, ETL, Kafka
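The "write unit and integration tests" bullet above is easy to make concrete with pytest. A sketch with a hypothetical transformation under test; a local[2] session keeps CI runs fast:

```python
import pytest
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.master("local[2]").appName("tests").getOrCreate()

def dedupe_latest(df):
    # Hypothetical transformation under test: keep the newest row per key.
    w = Window.partitionBy("id").orderBy(F.col("updated_at").desc())
    return df.withColumn("rn", F.row_number().over(w)).filter("rn = 1").drop("rn")

def test_dedupe_latest_keeps_newest(spark):
    df = spark.createDataFrame(
        [(1, "old", 1), (1, "new", 2), (2, "only", 1)],
        ["id", "value", "updated_at"])
    out = {r.id: r.value for r in dedupe_latest(df).collect()}
    assert out == {1: "new", 2: "only"}
```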
Posted 1 week ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Description

AWS Cloud Infrastructure:
- Design, deploy, and manage scalable, secure, and highly available systems on AWS.
- Optimize cloud costs, enforce tagging, and implement security best practices (IAM, VPC, GuardDuty, etc.); a tag-audit sketch follows this listing.
- Automate infrastructure provisioning using Terraform or the AWS CDK.
- Ensure backup, disaster recovery, and high availability (HA) strategies are in place.

Kubernetes (EKS preferred):
- Manage and scale Kubernetes clusters (preferably Amazon EKS).
- Implement CI/CD pipelines with GitOps (e.g., ArgoCD or Flux) or traditional tools (e.g., Jenkins, GitLab).
- Enforce RBAC policies, namespace isolation, and pod security policies.
- Monitor cluster health and optimize pod scheduling, autoscaling, and resource limits/requests.

Monitoring and Observability (Datadog):
- Build and maintain Datadog dashboards for real-time visibility across systems and services.
- Set up alerting policies, SLOs, SLIs, and incident response workflows.
- Integrate Datadog with AWS, Kubernetes, and applications for full-stack observability.
- Conduct post-incident reviews using Datadog analytics to reduce MTTR.

Automation and DevOps:
- Automate manual processes (e.g., server setup, patching, scaling) using Python, Bash, or Ansible.
- Maintain and improve CI/CD pipelines (Jenkins) for faster, more reliable deployments.
- Drive Infrastructure-as-Code (IaC) practices using Terraform to manage cloud resources.
- Promote GitOps and version-controlled deployments.

Linux Systems Administration:
- Administer Linux servers (Ubuntu, RHEL, Amazon Linux) for stability and performance.
- Harden OS security, configure SELinux and firewalls, and ensure timely patching.
- Troubleshoot system-level issues: disk, memory, network, and processes.
- Optimize system performance using tools like top, htop, iotop, and netstat.

(ref:hirist.tech)
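The "enforce tagging" duty above is a natural fit for a small boto3 audit. A sketch assuming configured AWS credentials; the required tag keys and region are an illustrative policy, not the employer's:

```python
import boto3

REQUIRED_TAGS = {"owner", "cost-center"}  # illustrative tag policy

ec2 = boto3.client("ec2", region_name="ap-south-1")

# Page through all instances and flag any missing mandatory tags.
for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for inst in reservation["Instances"]:
            tags = {t["Key"] for t in inst.get("Tags", [])}
            missing = REQUIRED_TAGS - tags
            if missing:
                print(f"{inst['InstanceId']} missing tags: {sorted(missing)}")
```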
Posted 1 week ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are seeking skilled and dedicated Lead AWS Cloud & DevOps Engineers to join our teams across multiple locations in India. In this role, you will contribute to critical client applications and internal projects, serving as an AWS Systems Engineer specializing in Infrastructure as Code, Containers, or DevOps solutions.

Responsibilities
- Provide fault tolerance, high availability, scalability, and security on AWS infrastructure and platforms
- Build and implement CI/CD pipelines with automated systems for testing and deployment
- Facilitate production deployments using strategies such as Blue-Green and Canary (a weighted-target-group sketch follows this listing)
- Automate AWS infrastructure and platform provisioning using Infrastructure as Code solutions
- Configure systems using management tools for automation
- Optimize AWS network architecture, including VPCs, subnets, routers, and transit gateways
- Troubleshoot and monitor cloud performance with observability services like CloudWatch and VPC Flow Logs
- Establish security frameworks within AWS environments using IAM roles, WAFs, and CloudTrail
- Develop scripts in Linux shell, Python, or PowerShell to streamline operational tasks

Requirements
- Minimum 8 years of experience, including 5 years in cloud and DevOps roles
- Production experience in Linux/Windows system engineering
- Proficiency with AWS compute services: EC2, Lambda, Auto Scaling, and load balancers
- Competency in AWS storage services: S3, EFS, EBS, and archival options like Glacier
- Expertise in AWS security services: IAM, KMS encryption, and monitoring tools like AWS Config
- Familiarity with AWS networking services: VPC setups, VPN usage, and endpoints
- Hands-on experience with observability tools such as CloudWatch Alarms, ECS/EKS monitoring, and VPC Flow Logs
- Skills in automation with orchestration tools like Terraform or AWS CloudFormation
- Expertise in container orchestration using Docker and platforms such as Kubernetes on EKS/ECS
- Flexibility to implement various deployment strategies, including in-place modifications and blue-green approaches
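Blue-Green and Canary releases on AWS are often driven by weighted forwarding between two ALB target groups. A boto3 sketch; every ARN is a placeholder, and health verification between steps is left out:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="ap-south-1")

def shift_traffic(listener_arn, blue_tg, green_tg, green_weight):
    """One canary step: route green_weight% of traffic to the new stack."""
    elbv2.modify_listener(
        ListenerArn=listener_arn,
        DefaultActions=[{
            "Type": "forward",
            "ForwardConfig": {"TargetGroups": [
                {"TargetGroupArn": blue_tg, "Weight": 100 - green_weight},
                {"TargetGroupArn": green_tg, "Weight": green_weight},
            ]},
        }],
    )

# Ramp 10% -> 50% -> 100%; in practice, verify health metrics between steps.
for pct in (10, 50, 100):
    shift_traffic("arn:aws:elasticloadbalancing:region:acct:listener/placeholder",
                  "arn:aws:elasticloadbalancing:region:acct:targetgroup/blue/ph",
                  "arn:aws:elasticloadbalancing:region:acct:targetgroup/green/ph",
                  pct)
```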
Posted 1 week ago
8.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
We are seeking skilled and dedicated Lead AWS Cloud & DevOps Engineers to join our teams across multiple locations in India. In this role, you will contribute to critical client applications and internal projects, serving as an AWS Systems Engineer specializing in Infrastructure as Code, Containers, or DevOps solutions.

Responsibilities
- Provide fault tolerance, high availability, scalability, and security on AWS infrastructure and platforms
- Build and implement CI/CD pipelines with automated systems for testing and deployment
- Facilitate production deployments using strategies such as Blue-Green and Canary
- Automate AWS infrastructure and platform provisioning using Infrastructure as Code solutions
- Configure systems using management tools for automation
- Optimize AWS network architecture, including VPCs, subnets, routers, and transit gateways
- Troubleshoot and monitor cloud performance with observability services like CloudWatch and VPC Flow Logs (a CloudWatch alarm sketch follows this listing)
- Establish security frameworks within AWS environments using IAM roles, WAFs, and CloudTrail
- Develop scripts in Linux shell, Python, or PowerShell to streamline operational tasks

Requirements
- Minimum 8 years of experience, including 5 years in cloud and DevOps roles
- Production experience in Linux/Windows system engineering
- Proficiency with AWS compute services: EC2, Lambda, Auto Scaling, and load balancers
- Competency in AWS storage services: S3, EFS, EBS, and archival options like Glacier
- Expertise in AWS security services: IAM, KMS encryption, and monitoring tools like AWS Config
- Familiarity with AWS networking services: VPC setups, VPN usage, and endpoints
- Hands-on experience with observability tools such as CloudWatch Alarms, ECS/EKS monitoring, and VPC Flow Logs
- Skills in automation with orchestration tools like Terraform or AWS CloudFormation
- Expertise in container orchestration using Docker and platforms such as Kubernetes on EKS/ECS
- Flexibility to implement various deployment strategies, including in-place modifications and blue-green approaches
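The CloudWatch item in the requirements can be made concrete with one alarm definition. A boto3 sketch; the instance ID, SNS topic, and thresholds are placeholders:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

# Alarm when average CPU on one instance stays above 80% for two 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-example",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],  # placeholder
)
```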
Posted 1 week ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are seeking skilled and dedicated Lead AWS Cloud & DevOps Engineers to join our teams across multiple locations in India. In this role, you will contribute to critical client applications and internal projects, serving as an AWS Systems Engineer specializing in Infrastructure as Code, Containers, or DevOps solutions.

Responsibilities
- Provide fault tolerance, high availability, scalability, and security on AWS infrastructure and platforms
- Build and implement CI/CD pipelines with automated systems for testing and deployment
- Facilitate production deployments using strategies such as Blue-Green and Canary
- Automate AWS infrastructure and platform provisioning using Infrastructure as Code solutions
- Configure systems using management tools for automation
- Optimize AWS network architecture, including VPCs, subnets, routers, and transit gateways
- Troubleshoot and monitor cloud performance with observability services like CloudWatch and VPC Flow Logs (a Flow Logs sketch follows this listing)
- Establish security frameworks within AWS environments using IAM roles, WAFs, and CloudTrail
- Develop scripts in Linux shell, Python, or PowerShell to streamline operational tasks

Requirements
- Minimum 8 years of experience, including 5 years in cloud and DevOps roles
- Production experience in Linux/Windows system engineering
- Proficiency with AWS compute services: EC2, Lambda, Auto Scaling, and load balancers
- Competency in AWS storage services: S3, EFS, EBS, and archival options like Glacier
- Expertise in AWS security services: IAM, KMS encryption, and monitoring tools like AWS Config
- Familiarity with AWS networking services: VPC setups, VPN usage, and endpoints
- Hands-on experience with observability tools such as CloudWatch Alarms, ECS/EKS monitoring, and VPC Flow Logs
- Skills in automation with orchestration tools like Terraform or AWS CloudFormation
- Expertise in container orchestration using Docker and platforms such as Kubernetes on EKS/ECS
- Flexibility to implement various deployment strategies, including in-place modifications and blue-green approaches
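The VPC Flow Logs item above takes one API call to enable. A boto3 sketch; the VPC ID, log group, and IAM role ARN are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

# Capture accepted and rejected traffic metadata for a whole VPC.
ec2.create_flow_logs(
    ResourceIds=["vpc-0abc123def4567890"],        # placeholder VPC ID
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="cloud-watch-logs",
    LogGroupName="/vpc/flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
)
```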
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description

Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.

In this role, you will:
- Anchor performance engineering and testing efforts across banking engineering disciplines and provide ongoing input into the overall process improvement of the Performance Engineering discipline within Digital/Channels Transformation
- Build performance assurance procedures with the latest feasible tools and techniques, and establish a performance test automation process to improve testing productivity
- Support multiple cloud (AWS) migration and production initiatives using a wide range of tools and utilities; identify performance-related issues in applications and systems, and present your findings to other teams in the organization to ensure system reliability
- Represent testing at Scrum meetings and all other key project meetings, and provide a single point of accountability and escalation for testing within the scrum teams
- Advise on needed infrastructure and performance engineering and testing guidelines, and be responsible for performance risk assessment of various platform features
- Work cross-functionally with software product, development, and support teams, handling tasks that accelerate testing delivery and improve application quality at HSBC
- Work across all global activities and support the Performance Engineering team in ensuring any testing-related dependencies/touchpoints are in place
- Act as a Performance Engineering SME, with exposure to a broader set of problems: customer experience, migrations, new cloud initiatives, platform performance improvements, and environment optimization
- Establish effective working relationships across other areas of HSBC, e.g. Business Product Owner, Digital Delivery Team, Transformation, and IT
- Provide recommendations to the Product Owner and/or other project stakeholders on product readiness to go live

Requirements

To be successful in this role, you should meet the following requirements:
- Performance engineering, testing, and tuning of cloud-hosted digital platforms (e.g. AWS)
- Working knowledge (preferably with an AWS Solutions Architect certification) of cloud platforms like AWS and key AWS services, plus DevOps tools like CloudFormation and Terraform
- Performance engineering and testing of web apps (Linux); performance testing and tuning of web-based applications
- Performance engineering toolsets such as JMeter, LoadRunner, Micro Focus Performance Center, BrowserStack, Taurus, and Lighthouse, plus monitoring/logging tools (such as AppDynamics, New Relic, Splunk, Datadog)
- Windows/UNIX/Linux/web/database/network performance monitors to diagnose performance issues, along with JVM tuning and heap analysis skills
- Docker, Kubernetes, and cloud-native development and container orchestration frameworks: Kubernetes clusters, pods and nodes, vertical/horizontal pod autoscaling concepts, high availability
- Performance testing and engineering activity planning, estimating, designing, executing, and analysing output from performance tests (a percentile-summary sketch follows this listing)
- Working in an agile environment, a "DevOps" team, or a similar multi-skilled team in a technically demanding function
- Jenkins and CI/CD pipelines, including pipeline scripting
- Programming and scripting skills in Java, Shell, Scala, Groovy, and Python, plus knowledge of security mechanisms such as OAuth
- Tools like GitHub, Jira and Confluence
- Assisting resiliency and production support teams with performance-incident root cause analysis
- Ability to prioritize work effectively and deliver within agreed service levels in a diverse and ever-changing environment
- High levels of judgment and decision making, able to rationalize and present the background and reasoning for the direction taken
- Strong stakeholder management and excellent communication skills
- Extensive knowledge of risk management and mitigation
- Strong analytical and problem-solving skills

You’ll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSBC Software Development India
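"Analysing output from performance tests" usually starts with latency percentiles. A stdlib-only sketch; the numbers are synthetic, and real input would come from JMeter or LoadRunner exports:

```python
import statistics

def summarize(latencies_ms):
    """Reduce one test run to the usual SLO-style summary statistics."""
    ordered = sorted(latencies_ms)
    cuts = statistics.quantiles(ordered, n=100)  # 99 percentile cut points
    return {
        "count": len(ordered),
        "mean_ms": round(statistics.fmean(ordered), 2),
        "p50_ms": cuts[49],
        "p95_ms": cuts[94],
        "p99_ms": cuts[98],
        "max_ms": ordered[-1],
    }

# Synthetic sample; note how a few slow outliers separate p99 from the mean.
print(summarize([120, 135, 128, 140, 300, 125, 131, 129, 850, 127] * 20))
```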
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description

Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Senior Software Engineer.

In this role, you will:
- Anchor performance engineering and testing efforts across banking engineering disciplines and provide ongoing input into the overall process improvement of the Performance Engineering discipline within Digital/Channels Transformation
- Build performance assurance procedures with the latest feasible tools and techniques, and establish a performance test automation process to improve testing productivity
- Support multiple cloud (AWS) migration and production initiatives using a wide range of tools and utilities; identify performance-related issues in applications and systems, and present your findings to other teams to ensure system reliability
- Represent testing at Scrum meetings and all other key project meetings, and provide a single point of accountability and escalation for testing within the scrum teams
- Advise on needed infrastructure and performance engineering and testing guidelines, and be responsible for performance risk assessment of various platform features
- Work cross-functionally with software product, development, and support teams to accelerate testing delivery and improve application quality at HSBC
- Work across all global activities and support the Performance Engineering team in ensuring any testing-related dependencies/touchpoints are in place
- Act as a Performance Engineering SME, with exposure to a broader set of problems: customer experience, migrations, new cloud initiatives, platform performance improvements, and environment optimization
- Establish effective working relationships across other areas of HSBC, e.g. Business Product Owner, Digital Delivery Team, Transformation, and IT
- Provide recommendations to the Product Owner and/or other project stakeholders on product readiness to go live
- Take strong ownership of and accountability for quality deliverables and performance-engineering activities within the agile development lifecycle
- Design and implement solutions to evaluate and improve the performance and scalability of web apps and platform-level applications
- Represent Performance Engineering across the project and be accountable for defining, shaping, and agreeing the testing schedules in the context of the whole project schedule
- Be accountable for the successful launch of scalable platform products and engineering initiatives, aligned with cloud deployments
- Resolve performance-testing impediments together with the scrum teams/pods
- Provide technical expertise in performance requirements analysis, design, effort estimation, testing, and delivery of scalable solutions
- Engage with senior stakeholders, building effective relationships, trust, and understanding through the management of testing and the related risks
- Participate in design and architectural reviews of the engineering ecosystem to voice performance and scalability concerns
- Contribute actively to evolving the overall Digital/Channels Transformation performance engineering and test strategy
- Develop tools and processes to performance-test software applications, using industry-standard tools to automate simulation of expected user workloads and identify performance bottlenecks with monitoring tools (a toy load-burst harness follows this listing)
- Execute and analyze test results and establish reliable statistical models for response time, throughput, network utilization, and other application performance metrics
- Build relationships and successful teams located in other geographies and deliver maximum productivity
- Identify bottlenecks in the hardware and software platform, application code stack, and network, and document reliable predictions of potential bottlenecks as computing platforms and workloads change
- Participate in the design and evaluation of new tools, frameworks, and techniques to enhance system performance, scalability, and stability
- Bring a product-focused mindset and perform detailed root cause analysis of test failures and of performance and scalability issues; identify gaps, issues, or other areas of concern, and proactively define, propose, and enact process and workflow improvements to mitigate them
- Collaborate with development, SRE, and production support teams in evaluating performance issues and solutions for the infrastructure of the entire HSBC digital stack

Requirements

To be successful in this role, you should meet the following requirements:
- Performance engineering, testing, and tuning of cloud-hosted digital platforms (e.g. AWS)
- Working knowledge (preferably with an AWS Solutions Architect certification) of cloud platforms like AWS and key AWS services, plus DevOps tools like CloudFormation and Terraform
- Performance engineering and testing of web apps (Linux); performance testing and tuning of web-based applications
- Performance engineering toolsets such as JMeter, LoadRunner, Micro Focus Performance Center, BrowserStack, Taurus, and Lighthouse, plus monitoring/logging tools (such as AppDynamics, New Relic, Splunk, Datadog)
- Windows/UNIX/Linux/web/database/network performance monitors to diagnose performance issues, along with JVM tuning and heap analysis skills
- Docker, Kubernetes, and cloud-native development and container orchestration frameworks: Kubernetes clusters, pods and nodes, vertical/horizontal pod autoscaling concepts, high availability
- Performance testing and engineering activity planning, estimating, designing, executing, and analyzing output from performance tests
- Working in an agile environment, a "DevOps" team, or a similar multi-skilled team in a technically demanding function
- Jenkins and CI/CD pipelines, including pipeline scripting
- Programming and scripting skills in Java, Shell, Scala, Groovy, and Python, plus knowledge of security mechanisms such as OAuth
- Tools like GitHub, Jira & Confluence
- Assisting resiliency and production support teams with performance-incident root cause analysis
- Ability to prioritize work effectively and deliver within agreed service levels in a diverse and ever-changing environment
- High levels of judgment and decision making, able to rationalize and present the background and reasoning for the direction taken
- Strong stakeholder management and excellent communication skills
- Extensive knowledge of risk management and mitigation
- Strong analytical and problem-solving skills

You’ll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSBC Software Development India
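"Automate simulation of expected user workloads" is the job of the JMeter/LoadRunner tools this listing names; purely for intuition, here is a toy concurrent-burst harness, assuming aiohttp is installed and with a placeholder URL:

```python
import asyncio
import time

import aiohttp  # assumed installed; any async HTTP client works

async def timed_get(session, url):
    start = time.perf_counter()
    async with session.get(url) as resp:
        await resp.read()
    return resp.status, (time.perf_counter() - start) * 1000

async def burst(url, concurrency=50):
    # One burst of concurrent requests; report success count and worst latency.
    async with aiohttp.ClientSession() as session:
        results = await asyncio.gather(*(timed_get(session, url)
                                         for _ in range(concurrency)))
    ok = sum(1 for status, _ in results if status == 200)
    print(f"{ok}/{concurrency} OK, worst {max(ms for _, ms in results):.0f} ms")

asyncio.run(burst("https://example.com/health"))  # placeholder URL
```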
Posted 1 week ago
8.0 years
6 - 8 Lacs
Hyderābād
On-site
EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential.

We are seeking skilled and dedicated Lead AWS Cloud & DevOps Engineers to join our teams across multiple locations in India. In this role, you will contribute to critical client applications and internal projects, serving as an AWS Systems Engineer specializing in Infrastructure as Code, Containers, or DevOps solutions.

Responsibilities
- Provide fault tolerance, high availability, scalability, and security on AWS infrastructure and platforms
- Build and implement CI/CD pipelines with automated systems for testing and deployment
- Facilitate production deployments using strategies such as Blue-Green and Canary
- Automate AWS infrastructure and platform provisioning using Infrastructure as Code solutions
- Configure systems using management tools for automation
- Optimize AWS network architecture, including VPCs, subnets, routers, and transit gateways
- Troubleshoot and monitor cloud performance with observability services like CloudWatch and VPC Flow Logs
- Establish security frameworks within AWS environments using IAM roles, WAFs, and CloudTrail
- Develop scripts in Linux shell, Python, or PowerShell to streamline operational tasks

Requirements
- Minimum 8 years of experience, including 5 years in cloud and DevOps roles
- Production experience in Linux/Windows system engineering
- Proficiency with AWS compute services: EC2, Lambda, Auto Scaling, and load balancers
- Competency in AWS storage services: S3, EFS, EBS, and archival options like Glacier (an S3-to-Glacier lifecycle sketch follows this listing)
- Expertise in AWS security services: IAM, KMS encryption, and monitoring tools like AWS Config
- Familiarity with AWS networking services: VPC setups, VPN usage, and endpoints
- Hands-on experience with observability tools such as CloudWatch Alarms, ECS/EKS monitoring, and VPC Flow Logs
- Skills in automation with orchestration tools like Terraform or AWS CloudFormation
- Expertise in container orchestration using Docker and platforms such as Kubernetes on EKS/ECS
- Flexibility to implement various deployment strategies, including in-place modifications and blue-green approaches

We offer
- Opportunity to work on technical challenges with impact across geographies
- Vast opportunities for self-development: online university, global knowledge sharing, and learning through external certifications
- Opportunity to share your ideas on international platforms
- Sponsored Tech Talks & Hackathons
- Unlimited access to LinkedIn Learning solutions
- Possibility to relocate to any EPAM office for short- and long-term projects
- Focused individual development
- Benefit package: health benefits, retirement benefits, paid time off, flexible benefits
- Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)
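The Glacier archival requirement above is typically expressed as an S3 lifecycle rule. A boto3 sketch; the bucket name, prefix, and day counts are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Transition objects to Glacier after 90 days, expire them after 3 years.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",              # placeholder bucket
    LifecycleConfiguration={"Rules": [{
        "ID": "archive-then-expire",
        "Status": "Enabled",
        "Filter": {"Prefix": "logs/"},
        "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        "Expiration": {"Days": 1095},
    }]},
)
```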
Posted 1 week ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
Job Description

Key Roles and Responsibilities:
- Thorough understanding of OCI cloud concepts, environment, and services
- Good hands-on experience with OCI architecture and design
- Implementation of OCI IaaS and PaaS services
- Conduct business process analysis/design, needs assessments, and cost-benefit analysis related to the impact on the business
- Understand business needs and translate them into system requirements and an architecture aligned with the scope of the solution

Technical Skills:
- Hands-on administration skills in the OCI cloud environment
- Good understanding of OCI cloud, network operations, and private and hybrid cloud administration
- Good understanding of IaaS, PaaS, SaaS, and cloud design
- Expertise in designing and planning cloud environments in an enterprise setting, including application dependencies, client presentation mechanisms, network connectivity, and overall virtualization strategies
- Good understanding of virtualization management and configuration
- Knowledge of autoscaling concepts (scale up and scale down) for VMs, VM upgrades, and configuring availability domains/fault domains; building a technical and security infrastructure in OCI cloud for selected apps/workloads
- Understanding of OCI services: VCN, subnets, route tables, Dynamic Routing Gateway, Service Gateway, security lists, NSGs, Load Balancer, storage buckets, logging, auditing, monitoring, provisioning, security services (Cloud Guard, Network Firewall), and IAM
- Ability to promptly diagnose and remedy cloud-related problems and failures
- Hands-on experience with OCI backend infrastructure, troubleshooting, and root cause analysis
- Manage server build, commission, and decommission processes
- Logging and monitoring for IaaS/PaaS resources
- FastConnect setup and traffic-flow experience, with working knowledge of IPSEC/VPN tunneling
- Knowledge of VCN peering and managing the Dynamic Routing Gateway and security lists on OCI
- Implement and maintain all OCI infrastructure and services: VMs, OCI Functions, monitoring, notifications
- Experienced in deploying OCI VMs and managing cloud workloads through OS Management Hub
- Experienced in running DR drills on a regular basis, as needed or requested

Mandatory Skills (Must Have):
- OCI Certification: Oracle Cloud Infrastructure Architect, skills at least L2 or L2+

Good to Have:
- Knowledge of other clouds: AWS/Azure
- Knowledge of Infrastructure as Code (IaC) tools like Terraform
- Knowledge of tools such as ServiceNow, BMC Helix, Ansible, Jenkins, Splunk
- Cloud automation using Python and PowerShell scripts (an OCI Python SDK sketch follows this listing)
- Knowledge of DevOps and Kubernetes

Behavioral Skills (Must Have):
- Good communication skills, both written and oral
- Ability to lead a team of junior architects
- Eagerness to learn new cloud services and technologies
- Team collaboration
- Creative thinking in implementing new solutions

(ref:hirist.tech)
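The "cloud automation using Python" item above maps to the OCI Python SDK. A minimal sketch assuming a valid ~/.oci/config profile; the compartment OCID is a placeholder:

```python
import oci

# Loads the DEFAULT profile from ~/.oci/config.
config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

# List instances in one compartment and show their lifecycle state.
compartment_id = "ocid1.compartment.oc1..exampleplaceholder"
for inst in compute.list_instances(compartment_id=compartment_id).data:
    print(inst.display_name, inst.lifecycle_state)
```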
Posted 1 week ago
0 years
0 Lacs
India
On-site
*Who you are*

You’re the person whose fingertips know the difference between spinning up a GPU cluster and spinning down a stale inference node. You love the “infrastructure behind the magic” of LLMs. You've built CI/CD pipelines that automatically version models, log inference metrics, and alert on drift. You’ve containerized GenAI services in Docker, deployed them on Kubernetes clusters (AKS or EKS), and used Terraform or ARM templates to manage infra-as-code. You monitor cloud costs like a hawk, optimize GPU workloads, and sometimes sacrifice cost for performance, but never vice versa. You’re fluent in Python and Bash, can script tests for REST endpoints, and build automated feedback loops for model retraining. You’re comfortable working in Azure (OpenAI, Azure ML, Azure DevOps Pipelines) but cloud-agnostic enough to cover AWS or GCP if needed. You read MLOps/LLMOps blog posts or arXiv summaries on the weekend and implement improvements on Monday. You think of yourself as a self-driven engineer: no playbooks, no spoon-feeding, just solid automation, reliability, and a hunger to scale GenAI from prototype to production.

---

*What you will actually do*

- Architect and build deployment platforms for internal LLM services, starting from containerizing models and building CI/CD pipelines for inference microservices.
- Write IaC (Terraform or ARM) to spin up clusters, endpoints, GPUs, storage, and logging infrastructure.
- Integrate Azure OpenAI and Azure ML endpoints, pushing models via pipelines, versioning them, and enabling automatic retraining triggers.
- Build monitoring and observability around latency, cost, error rates, drift, and prompt-health metrics.
- Optimize deployments (autoscaling, spot/GPU nodes, invalidation policies) to balance cost and performance.
- Set up automated QA pipelines that validate model outputs (e.g., semantic similarity, hallucination detection) before merging; a sketch of such a gate follows this listing.
- Collaborate with ML, backend, and frontend teams to package components into release-ready backend services.
- Manage alerts and rollbacks on failure, and ensure 99% uptime.
- Create reusable tooling (CI templates, deployment scripts, infra modules) to make future projects plug-and-play.

---

*Skills and knowledge*

- Strong scripting skills in Python and Bash for automation and pipelines
- Fluency in Docker and Kubernetes (especially AKS), containerizing LLM workloads
- Infrastructure-as-code expertise: Terraform (Azure provider) or ARM templates
- Experience with Azure DevOps or GitHub Actions for CI/CD of models and services
- Knowledge of Azure OpenAI, Azure ML, or equivalent cloud LLM endpoints
- Familiarity with monitoring setups: Azure Monitor, Prometheus/Grafana; tracking latency, errors, drift, and costs
- Cost-optimization tactics: spot nodes, autoscaling, GPU utilization tracking
- Basic LLM understanding: inference latency/cost, deployment patterns, model versioning
- Ability to build lightweight QA checks or integrate with QA pipelines
- Cloud-agnostic awareness: experience with AWS or GCP backup systems
- Comfort establishing production-grade Ops pipelines and automating deployments end-to-end
- Self-starter mentality: no playbooks required; able to pick up new tools and drive infrastructure independently
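The semantic-similarity QA gate mentioned above can be sketched without committing to any vendor API. Here `embed` is a hypothetical callable (e.g., a wrapper around an embeddings deployment) and the threshold is a tunable assumption:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def passes_semantic_gate(embed, candidate, reference, threshold=0.85):
    """Return False when the model output drifts too far from the reference.

    `embed` is hypothetical: any callable mapping text to a vector, such as
    a wrapper around an Azure OpenAI embeddings deployment.
    """
    return cosine(embed(candidate), embed(reference)) >= threshold

# Usage sketch, wired into the CI step that runs before merge:
# if not passes_semantic_gate(embed_fn, model_output, golden_answer):
#     raise SystemExit("semantic QA gate failed; blocking deploy")
```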
Posted 1 week ago
2.0 - 31.0 years
2 - 4 Lacs
Amrapali Dream Valley, Greater Noida
On-site
Key Responsibilities:
- Diagnose and fix performance bottlenecks across backend services, WebSocket connections, and API response times.
- Investigate issues related to high memory usage, CPU spikes, and slow query execution.
- Debug and optimize database queries (PostgreSQL) and ORM (Prisma) performance.
- Implement and fine-tune connection pooling strategies for PostgreSQL and Redis (a pooling sketch follows this listing).
- Configure and maintain Kafka brokers, producers, and consumers to ensure high throughput.
- Monitor and debug WebSocket issues such as connection drops, latency, and reconnection strategies.
- Optimize Redis usage and troubleshoot memory leaks or blocking commands.
- Set up or maintain Prometheus + Grafana for service and infrastructure monitoring.
- Work on containerized infrastructure using Docker and Kubernetes, including load balancing and scaling services.
- Collaborate with developers to fix memory leaks, inefficient queries, and slow endpoints.
- Maintain high availability and fault tolerance across all backend components.

🧠 Requirements:

Technical Skills:
- Strong proficiency in Node.js and TypeScript.
- Deep knowledge of Prisma ORM and PostgreSQL optimization.
- Hands-on experience with Redis (pub/sub, caching, memory tuning).
- Solid understanding of WebSocket performance and reconnection handling.
- Experience working with Kafka (event streaming, partitions, consumer groups).
- Familiarity with Docker, the container lifecycle, and multi-service orchestration.
- Experience with Kubernetes (deployments, pods, autoscaling, resource limits).
- Familiarity with connection pooling strategies for databases and services.
- Comfortable with performance monitoring tools like Prometheus, Grafana, UptimeRobot, etc.

Soft Skills:
- Excellent debugging and analytical skills.
- Able to work independently and solve complex issues.
- Strong communication and documentation habits.

✅ Preferred Qualifications:
- 3+ years of experience in backend development.
- Experience with CI/CD pipelines and production deployments.
- Prior work with large-scale distributed systems is a plus.
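The pooling bullet above is about bounding connections under load. The role itself is Node.js/TypeScript; purely to keep this document's sketches in one language, the same idea is shown in Python with asyncpg and redis-py 5.x (both assumed installed; DSNs and pool sizes are illustrative):

```python
import asyncio

import asyncpg                  # assumed installed
import redis.asyncio as redis   # redis-py 5.x asyncio client

async def main():
    # Bounded pools prevent connection storms during traffic spikes; size
    # them against Postgres max_connections and Redis maxclients.
    pg = await asyncpg.create_pool("postgresql://app:secret@db/appdb",
                                   min_size=5, max_size=20)
    cache = redis.from_url("redis://cache:6379/0", max_connections=50)

    async with pg.acquire() as conn:  # borrow a connection, auto-return it
        row = await conn.fetchrow("SELECT 1 AS ok")
        print(row["ok"])
    await cache.set("healthcheck", "ok", ex=30)

    await cache.aclose()
    await pg.close()

asyncio.run(main())
```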
Posted 1 week ago
5.0 - 12.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
We are seeking a talented Lead Software Engineer with expertise in AWS and Java to join our dynamic team. This role involves working on critical application modernization projects, transforming legacy systems into cloud-native solutions, and driving innovation in security, observability, and governance. You'll collaborate with self-governing engineering teams to deliver high-impact, scalable software solutions. We are looking for candidates with strong expertise in cloud-native development, AWS, microservices architecture, and Java/J2EE, plus hands-on experience implementing CI/CD pipelines.

Responsibilities
- Lead end-to-end development in Java and AWS services, ensuring high-quality deliverables
- Design, develop, and implement REST APIs using AWS Lambda/API Gateway, JBoss, or Spring Boot (a minimal handler sketch follows this listing)
- Use the AWS Java SDK to interact with various AWS services effectively
- Drive deployment automation through the AWS Java CDK, CloudFormation, or Terraform
- Architect containerized applications and manage orchestration via Kubernetes on AWS EKS or AWS ECS
- Apply advanced microservices concepts and adhere to best practices during development
- Build, test, and debug code while addressing technical setbacks effectively
- Expose application functionality via APIs using Lambda and Spring Boot
- Manage data formats (JSON, YAML) and handle diverse data types (strings, numbers, arrays)
- Implement robust unit test cases with JUnit or equivalent testing frameworks
- Oversee source code management on platforms like GitLab, GitHub, or Bitbucket
- Ensure efficient application builds using Maven or Gradle
- Coordinate development requirements, schedules, and other dependencies with multiple stakeholders

Requirements
- 5 to 12 years of experience in Java development and AWS services
- Expertise in AWS services including Lambda, SQS, SNS, DynamoDB, Step Functions, and API Gateway
- Proficiency with Docker and managing container orchestration through Kubernetes on AWS EKS or ECS
- Strong understanding of AWS core services such as EC2, VPC, RDS, EBS, and EFS
- Competency in deployment tools like the AWS CDK, Terraform, or CloudFormation
- Knowledge of NoSQL databases, storage solutions, Amazon ElastiCache, and DynamoDB
- Understanding of AWS orchestration tools for automation and data processing
- Ability to handle production workloads, automate tasks, and manage logs effectively
- Experience writing scalable applications employing microservices principles

Nice to have
- Proficiency with AWS core services such as Auto Scaling, load balancers, Route 53, and IAM
- Scripting skills in Linux shell, Python, or Windows PowerShell, or experience with Ansible/Chef/Puppet
- Experience with build automation tools like Jenkins, AWS CodeBuild/CodeDeploy, or GitLab CI
- Familiarity with collaboration tools like Jira and Confluence
- Knowledge of deployment strategies, including in-place, Blue-Green, and Canary deployment
- Demonstrated experience with ELK (Elasticsearch, Logstash, Kibana) stack development
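The Lambda/API Gateway bullet follows a fixed proxy-integration contract. The role is Java-centric; this Python sketch only illustrates the event/response shape, which is the same across runtimes:

```python
import json

def handler(event, context):
    """Minimal API Gateway proxy-integration handler.

    `event` follows the API Gateway proxy format; the response must carry
    statusCode, headers, and a string body.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```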
Posted 1 week ago
5.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are seeking a talented Lead Software Engineer with expertise in AWS and Java to join our dynamic team. This role involves working on critical application modernization projects, transforming legacy systems into cloud-native solutions, and driving innovation in security, observability, and governance. You'll collaborate with self-governing engineering teams to deliver high-impact, scalable software solutions. We are looking for candidates with strong expertise in cloud-native development, AWS, microservices architecture, and Java/J2EE, plus hands-on experience implementing CI/CD pipelines.

Responsibilities
- Lead end-to-end development in Java and AWS services, ensuring high-quality deliverables
- Design, develop, and implement REST APIs using AWS Lambda/API Gateway, JBoss, or Spring Boot
- Use the AWS Java SDK to interact with various AWS services effectively
- Drive deployment automation through the AWS Java CDK, CloudFormation, or Terraform
- Architect containerized applications and manage orchestration via Kubernetes on AWS EKS or AWS ECS
- Apply advanced microservices concepts and adhere to best practices during development
- Build, test, and debug code while addressing technical setbacks effectively
- Expose application functionality via APIs using Lambda and Spring Boot
- Manage data formats (JSON, YAML) and handle diverse data types (strings, numbers, arrays)
- Implement robust unit test cases with JUnit or equivalent testing frameworks
- Oversee source code management on platforms like GitLab, GitHub, or Bitbucket
- Ensure efficient application builds using Maven or Gradle
- Coordinate development requirements, schedules, and other dependencies with multiple stakeholders

Requirements
- 5 to 12 years of experience in Java development and AWS services
- Expertise in AWS services including Lambda, SQS, SNS, DynamoDB, Step Functions, and API Gateway (a DynamoDB read/write sketch follows this listing)
- Proficiency with Docker and managing container orchestration through Kubernetes on AWS EKS or ECS
- Strong understanding of AWS core services such as EC2, VPC, RDS, EBS, and EFS
- Competency in deployment tools like the AWS CDK, Terraform, or CloudFormation
- Knowledge of NoSQL databases, storage solutions, Amazon ElastiCache, and DynamoDB
- Understanding of AWS orchestration tools for automation and data processing
- Ability to handle production workloads, automate tasks, and manage logs effectively
- Experience writing scalable applications employing microservices principles

Nice to have
- Proficiency with AWS core services such as Auto Scaling, load balancers, Route 53, and IAM
- Scripting skills in Linux shell, Python, or Windows PowerShell, or experience with Ansible/Chef/Puppet
- Experience with build automation tools like Jenkins, AWS CodeBuild/CodeDeploy, or GitLab CI
- Familiarity with collaboration tools like Jira and Confluence
- Knowledge of deployment strategies, including in-place, Blue-Green, and Canary deployment
- Demonstrated experience with ELK (Elasticsearch, Logstash, Kibana) stack development
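Among the AWS services this listing names, DynamoDB has the smallest useful example. A boto3 sketch; the table (with partition key order_id) and region are placeholders:

```python
import boto3

dynamodb = boto3.resource("dynamodb", region_name="ap-south-1")
table = dynamodb.Table("orders")  # placeholder table, partition key: order_id

# Write one item, then read it back by key.
table.put_item(Item={"order_id": "o-1001", "status": "NEW", "amount": 499})
resp = table.get_item(Key={"order_id": "o-1001"})
print(resp.get("Item"))
```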
Posted 1 week ago
5.0 - 12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are seeking a talented Lead Software Engineer with expertise in AWS and Java to join our dynamic team. This role involves working on critical application modernization projects, transforming legacy systems into cloud-native solutions, and driving innovation in security, observability, and governance. You'll collaborate with self-governing engineering teams to deliver high-impact, scalable software solutions. We are looking for candidates with strong expertise in cloud-native development, AWS, microservices architecture, and Java/J2EE, plus hands-on experience implementing CI/CD pipelines.

Responsibilities
- Lead end-to-end development in Java and AWS services, ensuring high-quality deliverables
- Design, develop, and implement REST APIs using AWS Lambda/API Gateway, JBoss, or Spring Boot
- Use the AWS Java SDK to interact with various AWS services effectively
- Drive deployment automation through the AWS Java CDK, CloudFormation, or Terraform
- Architect containerized applications and manage orchestration via Kubernetes on AWS EKS or AWS ECS
- Apply advanced microservices concepts and adhere to best practices during development
- Build, test, and debug code while addressing technical setbacks effectively
- Expose application functionality via APIs using Lambda and Spring Boot
- Manage data formats (JSON, YAML) and handle diverse data types (strings, numbers, arrays)
- Implement robust unit test cases with JUnit or equivalent testing frameworks
- Oversee source code management on platforms like GitLab, GitHub, or Bitbucket
- Ensure efficient application builds using Maven or Gradle
- Coordinate development requirements, schedules, and other dependencies with multiple stakeholders

Requirements
- 5 to 12 years of experience in Java development and AWS services
- Expertise in AWS services including Lambda, SQS, SNS, DynamoDB, Step Functions, and API Gateway (an SQS producer/consumer sketch follows this listing)
- Proficiency with Docker and managing container orchestration through Kubernetes on AWS EKS or ECS
- Strong understanding of AWS core services such as EC2, VPC, RDS, EBS, and EFS
- Competency in deployment tools like the AWS CDK, Terraform, or CloudFormation
- Knowledge of NoSQL databases, storage solutions, Amazon ElastiCache, and DynamoDB
- Understanding of AWS orchestration tools for automation and data processing
- Ability to handle production workloads, automate tasks, and manage logs effectively
- Experience writing scalable applications employing microservices principles

Nice to have
- Proficiency with AWS core services such as Auto Scaling, load balancers, Route 53, and IAM
- Scripting skills in Linux shell, Python, or Windows PowerShell, or experience with Ansible/Chef/Puppet
- Experience with build automation tools like Jenkins, AWS CodeBuild/CodeDeploy, or GitLab CI
- Familiarity with collaboration tools like Jira and Confluence
- Knowledge of deployment strategies, including in-place, Blue-Green, and Canary deployment
- Demonstrated experience with ELK (Elasticsearch, Logstash, Kibana) stack development
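The SQS requirement in this listing reduces to a produce/consume/acknowledge loop. A boto3 sketch; the queue name and region are placeholders:

```python
import boto3

sqs = boto3.client("sqs", region_name="ap-south-1")
queue_url = sqs.get_queue_url(QueueName="order-events")["QueueUrl"]  # placeholder

# Producer: publish one message.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": "o-1001"}')

# Consumer: long-poll, process, then delete to acknowledge.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=5,
                               WaitTimeSeconds=10).get("Messages", [])
for msg in messages:
    print("processing", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```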
Posted 1 week ago
4.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Fynd is India’s largest omnichannel platform and a multi-platform tech company specialising in retail technology and products in AI, ML, big data, image editing, and the learning space. It provides a unified platform for businesses to seamlessly manage online and offline sales, store operations, inventory, and customer engagement. Serving over 2,300 brands, Fynd is at the forefront of retail technology, transforming customer experiences and business processes across various industries.

We are looking for a Senior Fullstack JavaScript Developer responsible for the client side of our service. Your primary focus will be to implement a complete user interface in the form of a web app, with a focus on performance. Your primary duties will include creating modules and components and coupling them together into a functional app. You will work in a team with the back-end developers and communicate with the API using standard methods. A thorough understanding of all of the components of our platform and infrastructure is required.

What will you do at Fynd?
- Build scalable and loosely coupled services to extend our platform
- Build bulletproof API integrations with third-party APIs for various use cases
- Evolve our infrastructure and add a few more nines to our overall availability
- Have full autonomy and own your code: decide on the technologies and tools to deliver, as well as operate large-scale applications on AWS
- Give back to the open-source community through contributions to code and blog posts
- This is a startup, so everything can change as we experiment with more product improvements

Some Specific Requirements
- At least 4 years of development experience
- Prior experience developing and working on consumer-facing web/app products
- Hands-on experience in JavaScript; exceptions can be made if you’re really good at any other language with experience in building web/app-based tech products
- Expertise in Node.js and experience in at least one of the following frameworks: Express.js, Koa.js, Socket.io (http://socket.io/)
- Good knowledge of async programming using callbacks, promises, and async/await
- Hands-on experience with frontend codebases using HTML, CSS, and AJAX
- Working knowledge of MongoDB, Redis, MySQL
- Good understanding of data structures, algorithms, and operating systems
- Experience with AWS services: EC2, ELB, Auto Scaling, CloudFront, S3 (a scaling-policy sketch follows this listing)
- Experience with the frontend stack (HTML, CSS) would be an added advantage
- You might not have experience with all the tools that we use, but you can learn those given guidance and resources
- Experience in Vue.js would be a plus

What do we offer?

Growth
Growth knows no bounds, as we foster an environment that encourages creativity, embraces challenges, and cultivates a culture of continuous expansion. We are looking at new product lines, international markets and brilliant people to grow even further. We teach, groom and nurture our people to become leaders. You get to grow with a company that is growing exponentially.

Flex University
We help you upskill by organising in-house courses on important subjects.
Learning Wallet: you can also do an external course to upskill and grow; we reimburse it for you.

Culture
Community and team-building activities.
We host weekly, quarterly and annual events/parties.

Wellness
Mediclaim policy for you + parents + spouse + kids.
Experienced therapist for better mental health, improved productivity and work-life balance.

We work from the office 5 days a week to promote collaboration and teamwork.
Join us to make an impact in an engaging, in-person environment!
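For illustration only (not part of the posting): the callbacks-vs-async/await distinction called out above is language-agnostic. Below is a minimal sketch of the async/await style, written in Python's asyncio to keep this page's examples in one language even though the role itself is Node.js; all function names and the simulated network call are hypothetical.

```python
import asyncio

async def fetch_order(order_id: int) -> dict:
    # Stand-in for a real network call (e.g., a third-party API request).
    await asyncio.sleep(0.1)
    return {"id": order_id, "status": "confirmed"}

async def main() -> None:
    # async/await keeps concurrent I/O readable: the three calls below
    # run concurrently, with no nested-callback pyramid.
    orders = await asyncio.gather(*(fetch_order(i) for i in range(3)))
    print(orders)

if __name__ == "__main__":
    asyncio.run(main())
```

The equivalent callback style would nest a completion handler per request; async/await flattens that into sequential-looking code, which is the readability win the posting is alluding to.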
Posted 2 weeks ago
4.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Fynd is India's largest omnichannel platform and a multi-platform tech company specializing in retail technology and products in AI, ML, big data, image editing, and the learning space. It provides a unified platform for businesses to seamlessly manage online and offline sales, store operations, inventory, and customer engagement. Serving over 2,300 brands, Fynd is at the forefront of retail technology, transforming customer experiences and business processes across various industries.
We're looking for a Senior Engineer to join our Commerce Team. The Commerce Engineering Team forms the backbone of our core business. We build and iterate over our core platform, which handles everything from onboarding a seller to serving finished products to end customers across different channels, with customisation and configuration. Our team consists of generalist engineers who work on building REST APIs, internal tools, and infrastructure.
What will you do at Fynd?
Build scalable and loosely coupled services to extend our platform
Build bulletproof API integrations with third-party APIs for various use cases
Evolve our infrastructure and add a few more nines to our overall availability
Have full autonomy and own your code, deciding on the technologies and tools to deliver as well as operate large-scale applications on AWS
Give back to the open-source community through contributions on code and blog posts
This is a startup, so everything can change as we experiment with more product improvements
Some Specific Requirements
At least 4 years of development experience
Prior experience developing and working on consumer-facing web/app products
Solid experience in Python with experience in building web/app-based tech products
Experience in at least one of the following frameworks - Sanic, Django, Flask, Falcon, web2py, Twisted, Tornado (a minimal Flask sketch follows this posting)
Working knowledge of MySQL, MongoDB, Redis, Aerospike
Good understanding of Data Structures, Algorithms, and Operating Systems
You've worked with core AWS services in the past and have experience with EC2, ELB, AutoScaling, CloudFront, S3, Elasticache
Understanding of Kafka, Docker, Kubernetes
Knowledge of Solr and Elasticsearch
Attention to detail
You can dabble in frontend codebases using HTML, CSS, and JavaScript
You love doing things efficiently. At Fynd, the work you do will have a disproportionate impact on the business. We believe in systems and processes that let us scale our impact to be larger than ourselves
You might not have experience with all the tools that we use, but you can learn those given the guidance and resources
What do we offer?
Growth
Growth knows no bounds, as we foster an environment that encourages creativity, embraces challenges, and cultivates a culture of continuous expansion. We are looking at new product lines, international markets and brilliant people to grow even further. We teach, groom and nurture our people to become leaders. You get to grow with a company that is growing exponentially.
Flex University
We help you upskill by organising in-house courses on important subjects
Learning Wallet: You can also do an external course to upskill and grow; we reimburse it for you.
Culture
Community and team building activities
Host weekly, quarterly and annual events/parties.
Wellness
Mediclaim policy for you + parents + spouse + kids
Experienced therapist for better mental health, improved productivity & work-life balance
We work from the office 5 days a week to promote collaboration and teamwork.
Join us to make an impact in an engaging, in-person environment!
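For illustration only: a minimal endpoint in Flask, one of the frameworks the posting lists. The route, payload, and in-memory store are hypothetical stand-ins, not Fynd APIs.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical catalog; a real platform would back this with MySQL/MongoDB.
PRODUCTS = {1: {"id": 1, "name": "T-shirt", "in_stock": True}}

@app.route("/products/<int:product_id>", methods=["GET"])
def get_product(product_id: int):
    product = PRODUCTS.get(product_id)
    if product is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(product)

if __name__ == "__main__":
    app.run(port=8000)
```

Running the script and requesting GET /products/1 returns the JSON record with a 200; any other id returns a 404, the shape of error handling the role's REST API work implies.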
Posted 2 weeks ago
3.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Fynd is India's largest omnichannel platform and a multi-platform tech company specialising in retail technology and products in AI, ML, big data, image editing, and the learning space. It provides a unified platform for businesses to seamlessly manage online and offline sales, store operations, inventory, and customer engagement. Serving over 2,300 brands, Fynd is at the forefront of retail technology, transforming customer experiences and business processes across various industries.
We are looking for a Fullstack JavaScript Developer responsible for the client side of our service. Your primary focus will be to implement a complete user interface in the form of a web app, with a focus on performance. Your primary duties will include creating modules and components and coupling them together into a functional app. You will work in a team with the back-end developers and communicate with the API using standard methods. A thorough understanding of all of the components of our platform and infrastructure is required.
What will you do at Fynd?
Build scalable and loosely coupled services to extend our platform
Build bulletproof API integrations with third-party APIs for various use cases
Evolve our infrastructure and add a few more nines to our overall availability
Have full autonomy and own your code, deciding on the technologies and tools to deliver as well as operate large-scale applications on AWS
Give back to the open-source community through contributions on code and blog posts
This is a startup, so everything can change as we experiment with more product improvements
Some Specific Requirements
At least 3 years of development experience
Prior experience developing and working on consumer-facing web/app products
Hands-on experience in JavaScript; exceptions can be made if you're really good at any other language with experience in building web/app-based tech products
Expertise in Node.js and experience in at least one of the following frameworks - Express.js, Koa.js, Socket.io (http://socket.io/)
Good knowledge of async programming using Callbacks, Promises, and Async/Await
Hands-on experience with frontend codebases using HTML, CSS, and AJAX
Working knowledge of MongoDB, Redis, MySQL
Good understanding of Data Structures, Algorithms, and Operating Systems
You've worked with AWS services in the past and have experience with EC2, ELB, AutoScaling, CloudFront, S3
Experience with the frontend stack (HTML, CSS) would be an added advantage
You might not have experience with all the tools that we use, but you can learn those given the guidance and resources
Experience in Vue.js would be a plus
What do we offer?
Growth
Growth knows no bounds, as we foster an environment that encourages creativity, embraces challenges, and cultivates a culture of continuous expansion. We are looking at new product lines, international markets and brilliant people to grow even further. We teach, groom and nurture our people to become leaders. You get to grow with a company that is growing exponentially.
Flex University
We help you upskill by organising in-house courses on important subjects
Learning Wallet: You can also do an external course to upskill and grow; we reimburse it for you.
Culture
Community and team building activities
Host weekly, quarterly and annual events/parties.
Wellness
Mediclaim policy for you + parents + spouse + kids
Experienced therapist for better mental health, improved productivity & work-life balance
We work from the office 5 days a week to promote collaboration and teamwork.
Join us to make an impact in an engaging, in-person environment!
Posted 2 weeks ago
2.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Fynd is India's largest omnichannel platform and a multi-platform tech company specializing in retail technology and products in AI, ML, big data, image editing, and the learning space. It provides a unified platform for businesses to seamlessly manage online and offline sales, store operations, inventory, and customer engagement. Serving over 2,300 brands, Fynd is at the forefront of retail technology, transforming customer experiences and business processes across various industries.
We're looking for an Engineer/Senior Engineer to join our Commerce Team. The Commerce Engineering Team forms the backbone of our core business. We build and iterate over our core platform, which handles everything from onboarding a seller to serving finished products to end customers across different channels, with customisation and configuration. Our team consists of generalist engineers who work on building REST APIs, internal tools, and infrastructure.
What will you do at Fynd?
Build scalable and loosely coupled services to extend our platform
Build bulletproof API integrations with third-party APIs for various use cases
Evolve our infrastructure and add a few more nines to our overall availability
Have full autonomy and own your code, deciding on the technologies and tools to deliver as well as operate large-scale applications on AWS
Give back to the open-source community through contributions on code and blog posts
This is a startup, so everything can change as we experiment with more product improvements
Some Specific Requirements
At least 2 years of development experience
Prior experience developing and working on consumer-facing web/app products
Solid experience in Python with experience in building web/app-based tech products
Experience in at least one of the following frameworks - Sanic, Django, Flask, Falcon, web2py, Twisted, Tornado
Working knowledge of MySQL, MongoDB, Redis, Aerospike
Good understanding of Data Structures, Algorithms, and Operating Systems
You've worked with core AWS services in the past and have experience with EC2, ELB, AutoScaling, CloudFront, S3, Elasticache
Understanding of Kafka, Docker, Kubernetes
Knowledge of Solr and Elasticsearch
Attention to detail
You can dabble in frontend codebases using HTML, CSS, and JavaScript
You love doing things efficiently. At Fynd, the work you do will have a disproportionate impact on the business. We believe in systems and processes that let us scale our impact to be larger than ourselves
You might not have experience with all the tools that we use, but you can learn those given the guidance and resources
What do we offer?
Growth
Growth knows no bounds, as we foster an environment that encourages creativity, embraces challenges, and cultivates a culture of continuous expansion. We are looking at new product lines, international markets and brilliant people to grow even further. We teach, groom and nurture our people to become leaders. You get to grow with a company that is growing exponentially.
Flex University
We help you upskill by organising in-house courses on important subjects
Learning Wallet: You can also do an external course to upskill and grow; we reimburse it for you.
Culture
Community and team building activities
Host weekly, quarterly and annual events/parties.
Wellness
Mediclaim policy for you + parents + spouse + kids
Experienced therapist for better mental health, improved productivity & work-life balance
We work from the office 5 days a week to promote collaboration and teamwork.
Join us to make an impact in an engaging, in-person environment!
Posted 2 weeks ago
1.0 years
0 Lacs
Kochi, Kerala, India
On-site
A proactive and detail-oriented DevOps Engineer with 1 year of hands-on experience in cloud infrastructure automation, container orchestration, and CI/CD implementation. Strong practical knowledge of Linux environments, cloud-native technologies, and network security. Adept at leveraging tools like Jenkins, Ansible, and Docker to streamline deployment workflows and ensure system reliability.
Key Skills & Experience
Kubernetes (EKS): Experience deploying, managing, and troubleshooting applications on AWS Elastic Kubernetes Service (EKS). Familiar with Helm, autoscaling, and monitoring within Kubernetes environments.
Docker: Proficient in creating, managing, and optimizing Docker containers. Skilled in writing custom Dockerfiles and troubleshooting container issues.
Linux & Shell Scripting: Strong expertise in Linux system administration and daily usage of shell commands for automation, monitoring, and system diagnostics.
Network Security: Hands-on experience in configuring and managing security groups, firewalls, and IAM policies, and ensuring secure communication between services in AWS and Kubernetes environments.
Ansible: Experience in writing and executing playbooks for configuration management and automated provisioning of infrastructure.
Jenkins: Skilled in designing and managing Jenkins pipelines for CI/CD workflows, integrating with Git, Docker, and AWS services.
AWS Cloud
EKS: Core competency in managing containerized workloads.
EC2 & S3: Provisioning, securing, and managing compute and storage resources.
Security Groups & IAM: Implementing secure access policies and managing service-to-service communication.
Lambda: Working knowledge of setting up serverless functions for event-driven automation (a minimal handler sketch follows this posting).
Mandatory Hands-On Experience
Linux systems and advanced shell command usage
Jenkins pipeline configuration and maintenance
AWS services including EKS, EC2, S3, and Security Groups
Network security concepts and practical enforcement
A quick learner and effective problem solver, passionate about automation, scalability, and secure DevOps practices.
ON-SITE: KOCHI-INFOPARK (IMMEDIATE JOINER)
Maximum CTC: ₹3 LPA (Three Lakhs per Annum)
Experience Required: Up to 2 years (candidates with more than 2 years of experience need not apply)
SEND YOUR RESUME TO: hrteam@touchworldtechnology.com
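As a hedged illustration of the Lambda item above: a minimal Python handler for the standard S3 ObjectCreated notification event. The S3 trigger is an assumption for the example; the posting does not name an event source.

```python
import json

def lambda_handler(event: dict, context) -> dict:
    # AWS invokes this entry point with the triggering event.
    # The Records/s3/bucket/object shape below is the standard
    # S3 "ObjectCreated" notification structure.
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object: s3://{bucket}/{key}")  # goes to CloudWatch Logs
    return {"statusCode": 200, "body": json.dumps({"processed": len(records)})}
```

Wiring the bucket notification to the function (via the console, Terraform, or SAM) is what makes this "event-driven": no server runs between uploads.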
Posted 2 weeks ago
5.0 years
0 Lacs
Ahmedabad
On-site
Join our vibrant team at Zymr as a Senior DevOps CI/CD Engineer and become a driving force behind the exciting world of continuous integration and deployment automation. We're a dynamic group dedicated to building a high-quality product while maintaining exceptional speed and efficiency. This is a fantastic opportunity to be part of our rapidly growing team.
Job Title: Sr. DevOps Engineer
Location: Ahmedabad/Pune
Experience: 5+ Years
Educational Qualification: UG: BS/MS in Computer Science, or other engineering/technical degree
Responsibilities:
Deployments to Development, Staging, and Production
Take charge of managing deployments to each environment with ease:
Skillfully utilize Github protocols to identify and resolve root causes of merge conflicts and version mismatches.
Deploy hotfixes promptly by leveraging deployment automation and scripts.
Provide guidance and approval for Ruby on Rails (Ruby) scripting performed by junior engineers, ensuring smooth code deployment across various development environments.
Review and approve CI/CD scripting pull requests from engineers, offering valuable feedback to enhance code quality.
Ensure the smooth operation of each environment on a daily basis, promptly addressing any issues that arise:
Leverage Datadog monitoring to maintain a remarkable uptime of 99.999% for each development environment.
Develop strategic plans for Bash and Ruby scripting to automate health checks and enable auto-healing mechanisms in the event of errors (the core loop is sketched after this posting).
Implement effective auto-scaling strategies to handle higher-than-usual traffic on these development environments.
Evaluate historical loads and implement autoscaling mechanisms to provide additional resources and computing power, optimizing workload performance.
Collaborate with DevOps to plan capacity and monitoring using Datadog.
Analyze developer workflows in close collaboration with team leads and attend squad standup meetings, providing valuable suggestions for improvement.
Harness the power of Ruby and Bash to create tools that enhance engineers' development workflow.
Script infrastructure using Terraform to facilitate infrastructure creation.
Leverage CI/CD to add security scanning to code pipelines.
Develop Bash and Ruby scripts to automate code deployment while incorporating robust security checks for vulnerabilities.
Enhance our CI/CD pipeline by building canary stages with CircleCI, Github Actions, YAML and Bash scripting.
Integrate stress testing mechanisms using Ruby on Rails, Python, and Bash scripting into the pipeline's stages.
Look for ways to reduce engineering toil and replace manual processes with automation!
Nice to have: Github and AWS tooling (however, the pipeline runs outside of AWS), Rails (other scripting languages okay). Note: Terraform is required.
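For a sense of scale, the health-check/auto-heal loop described in the responsibilities is usually only a few dozen lines in any language. Below is a sketch of the core logic, in Python rather than the posting's Bash/Ruby; the endpoint and restart command are placeholders, not Zymr's actual setup.

```python
import subprocess
import time
import urllib.request

HEALTH_URL = "http://localhost:8080/health"   # placeholder endpoint
RESTART_CMD = ["systemctl", "restart", "myapp"]  # placeholder service

def healthy() -> bool:
    # A 200 from the health endpoint within 5 seconds counts as healthy.
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, DNS failure, non-2xx
        return False

while True:
    if not healthy():
        # Auto-heal: restart the service, then give it time to come up
        # before probing again.
        subprocess.run(RESTART_CMD, check=False)
        time.sleep(30)
    time.sleep(10)
```

In production the same loop would also emit a Datadog event on each restart so the 99.999% uptime target stays observable, not just enforced.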
Posted 2 weeks ago
5.0 years
0 Lacs
Surat, Gujarat, India
On-site
Position: Technical Lead
Location: Surat, Gujarat (Onsite)
✅ Key Responsibilities
🚀 Architecture & System Design
· Define scalable, secure, and modular architectures.
· Implement high-availability patterns (circuit breakers, autoscaling, load balancing); a minimal circuit-breaker sketch follows this posting.
· Enforce OWASP best practices, role-based access, and GDPR/PIPL compliance.
💻 Full-Stack Development
· Oversee React Native & React.js codebases; mentor on state management (Redux/MobX).
· Architect backend services with Node.js/Express; manage real-time layers (WebSocket, Socket.io).
· Integrate third-party SDKs (streaming, ads, offerwalls, blockchain).
📈 DevOps & Reliability
· Own CI/CD pipelines and Infrastructure-as-Code (Terraform/Kubernetes).
· Drive observability (Grafana, Prometheus, ELK); implement SLOs and alerts.
· Conduct load testing, capacity planning, and performance optimization.
👥 Team Leadership & Delivery
· Mentor 5–10 engineers; lead sprint planning, code reviews, and Agile ceremonies.
· Collaborate with cross-functional teams to translate roadmaps into deliverables.
· Ensure on-time feature delivery and manage risk logs.
🔍 Innovation & Continuous Improvement
· Evaluate emerging tech (e.g., Layer-2 blockchain, edge computing).
· Improve development velocity through tooling (linters, static analysis) and process optimization.
📌 What You'll Need
· 5+ years in full-stack development, 2+ years in a lead role
· Proficient in: React.js, React Native, Node.js, Express, AWS, Kubernetes
· Strong grasp of database systems (PostgreSQL, Redis, MongoDB)
· Excellent communication and problem-solving skills
· Startup or gaming experience a bonus
🎯 Bonus Skills
· Blockchain (Solidity, smart contracts), streaming protocols (RTMP/HLS)
· Experience with analytics tools (Redshift, Metabase, Looker)
· Prior exposure to monetization SDKs (PubScale, AdX)
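Of the high-availability patterns named above, the circuit breaker is the easiest to sketch compactly. A minimal, illustrative Python version follows; the thresholds, names, and half-open behaviour are assumptions for the example, not requirements from the posting.

```python
import time
from typing import Optional

class CircuitBreaker:
    """Opens after `max_failures` consecutive errors, rejecting calls
    for `reset_after` seconds before allowing a trial call through."""

    def __init__(self, max_failures: int = 5, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at: Optional[float] = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                # Fail fast instead of piling load onto an unhealthy service.
                raise RuntimeError("circuit open: downstream unhealthy")
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

The point of the pattern is the fail-fast branch: while the circuit is open, callers get an immediate error instead of timing out, which keeps one failing dependency from exhausting threads across the system.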
Posted 2 weeks ago