
37960 Scripting Jobs - Page 24

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 - 12.0 years

0 Lacs

India

Remote

Designation: Sr. Lead - Cloud Security
Experience: 8-12 years
Location: Remote (India)

Essential skills: Cloud security frameworks; strong scripting skills with PowerShell and experience managing Linux systems; solid understanding of version control tools, particularly Git; experience with cloud platforms, including AWS, Azure and GCP; problem-solving and troubleshooting skills.

Desired skills: Good communication skills; experience with Docker and container orchestration tools; knowledge of microservices architecture and related best practices.

Summary: The resource must exhibit strong troubleshooting and problem-solving skills along with knowledge of cloud architecture, security features, and cloud platforms such as AWS. The resource must be well versed in incident management and must have information security auditing experience.

Roles & Responsibilities:
Security Integration in DevOps Pipelines:
● Embed security tools and practices in CI/CD pipelines to detect and mitigate vulnerabilities.
● Implement static and dynamic code analysis, vulnerability scanning, and container security checks.
Infrastructure Security:
● Design and implement secure infrastructure leveraging cloud services and Infrastructure as Code (IaC).
● Ensure configuration management for servers and cloud environments meets security standards.
Automation and Monitoring:
● Automate security testing and monitoring processes to maintain compliance and reduce manual intervention.
● Develop and maintain monitoring systems to detect anomalies and security breaches.
Collaboration and Training:
● Collaborate with cross-functional teams to address security concerns during software development and deployment.
● Provide training and awareness on secure coding practices and DevSecOps tools.
Incident Management:
● Respond to security incidents, conduct root cause analysis, and implement preventive measures.
● Maintain and test incident response plans.
Compliance and Governance:
● Ensure systems adhere to regulatory requirements and industry best practices.
● Conduct periodic security audits and assessments to maintain compliance.
● Consider dependencies, relationships, and integration points to ensure proper solution integration with other systems when applicable.
● Take responsibility for compliance with applicable industry standards, corporate policies and procedures.
● Maintain a high level of client satisfaction.
● Leverage knowledge and experience of technical implementation related to IT Infrastructure Library (ITIL) processes, workflow customization, ticketing, process automation, report development, dashboard creation, and system configurations.

Essential Experience:
● Solid experience in software development and operations, with a focus on security.
● Strong knowledge of DevOps principles and practices, including CI/CD pipelines, version control systems, and automated testing frameworks.
● Proficiency in scripting and automation using languages such as Python, Ruby, or PowerShell.
● Familiarity with cloud platforms and services (e.g., AWS, Azure, GCP) and their security considerations.
● Experience with containerization technologies (e.g., Docker, Kubernetes) and associated security practices.
● Knowledge of security frameworks and standards (e.g., OWASP, NIST, ISO 27001) and their application in software development.
● Understanding of secure coding practices and common vulnerabilities (e.g., OWASP Top 10) and their mitigation techniques.
● Strong analytical and problem-solving skills, with the ability to identify and address security risks and incidents effectively.

Desired Experience:
● Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams and stakeholders.
● Knowledge of microservices architecture and related best practices.

Certifications, if any: AWS Security, CEH, ISO 27001
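To make the pipeline security-gate idea above concrete, here is a minimal Python sketch of a CI step that fails the build when a vulnerability scan reports findings at or above a severity threshold. The JSON layout, severity names, and file name are illustrative assumptions rather than the output format of any specific scanner.

```python
"""Minimal CI security gate: fail the build if a scan report contains
findings at or above a chosen severity. The report layout is a hypothetical
example, not tied to any particular scanning tool."""
import json
import sys

SEVERITY_ORDER = {"LOW": 0, "MEDIUM": 1, "HIGH": 2, "CRITICAL": 3}

def count_blocking_findings(report_path: str, threshold: str = "HIGH") -> int:
    with open(report_path) as fh:
        report = json.load(fh)
    limit = SEVERITY_ORDER[threshold]
    # Count findings whose severity meets or exceeds the threshold.
    return sum(
        1
        for finding in report.get("findings", [])
        if SEVERITY_ORDER.get(finding.get("severity", "LOW"), 0) >= limit
    )

if __name__ == "__main__":
    blocking = count_blocking_findings(sys.argv[1] if len(sys.argv) > 1 else "scan.json")
    if blocking:
        print(f"{blocking} high/critical findings - failing the pipeline")
        sys.exit(1)  # non-zero exit code marks the CI stage as failed
    print("No blocking findings")
```

A step like this would typically run right after the scanner stage of the CI/CD pipeline, so the exit code alone decides whether the deployment proceeds.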

Posted 5 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description Summary
We are seeking a highly skilled and innovative AI Engineer with expertise in both traditional Artificial Intelligence and emerging Generative AI technologies. In this role, you will be responsible for designing, developing, and deploying intelligent systems that leverage machine learning, deep learning, and generative models to solve complex problems. You will work across the AI lifecycle—from data engineering and model development to deployment and monitoring—while also exploring GenAI applications, Agentic AI and developing agentic platforms. The ideal candidate combines strong technical acumen with a passion for experimentation, rapid prototyping, and delivering scalable AI solutions in real-world environments. GE HealthCare is a leading global medical technology and digital solutions innovator. Our purpose is to create a world where healthcare has no limits. Unlock your ambition, turn ideas into world-changing realities, and join an organization where every voice makes a difference, and every difference builds a healthier world.

Job Description

Roles And Responsibilities
In this role, you will: Develop and fine-tune Generative AI models (e.g., LLMs, diffusion models). Design and implement machine learning models for classification, regression, clustering, and recommendation tasks. Build and maintain scalable AI pipelines for data ingestion, training, evaluation, and deployment. Collaborate with cross-functional teams to understand business needs and translate them into AI solutions. Ensure model performance, fairness, and explainability through rigorous testing and validation. Deploy models to production using MLOps tools and monitor their performance over time. Stay current with the latest research and trends in AI/ML and GenAI and evaluate their applicability to business problems. Document models, experiments, and workflows for reproducibility and knowledge sharing.

Technical Skill Set
Cloud & Infrastructure (AWS): Amazon SageMaker – model training, tuning, deployment, and MLOps. Amazon Bedrock – serverless GenAI model access and orchestration. SageMaker JumpStart – pre-trained models and GenAI templates. Prompt engineering and fine-tuning of LLMs using SageMaker or Bedrock.
Programming & Scripting: Python – primary language for AI/ML development, data processing, and automation.

Education Qualification
Bachelor's degree in engineering with a minimum of four years of experience in relevant technologies.

Desired Characteristics – Technical Expertise
GenAI Platforms & Models: Familiarity with LLMs such as Claude (Anthropic), LLaMA (Meta), Gemini (Google), Mistral, and Falcon. Experience with APIs: Amazon Bedrock. Understanding of model types: encoder-decoder, decoder-only, diffusion models. Design, develop, and deploy agent-based AI systems that exhibit autonomous decision-making. Integrate Generative AI (LLMs, diffusion models) into real-world applications.
Prompt Engineering & Fine-Tuning: Prompt design for zero-shot, few-shot, and chain-of-thought reasoning. Fine-tuning and parameter-efficient tuning (LoRA, PEFT). Retrieval-Augmented Generation (RAG) design and implementation.
System Integration & Architecture: Event-driven and serverless architectures (e.g., AWS Lambda, EventBridge).
Development Frameworks: LangChain, LlamaIndex.
Vector databases: FAISS, Pinecone, Weaviate, Amazon OpenSearch. LangGraph, LangChain.
Cloud & DevOps: AWS (Bedrock, SageMaker, Lambda, S3), Azure (OpenAI, Functions), GCP (Vertex AI). CI/CD pipelines for GenAI workflows.
Security & Compliance: Data privacy and governance (GDPR, HIPAA). Model safety: content filtering, moderation, hallucination control.
Monitoring & Optimization: Model performance tracking (latency, cost, accuracy). Logging and observability (CloudWatch, Prometheus, Grafana). Cost optimization strategies for GenAI inference.
Collaboration & Business Alignment: Working with product, legal, and compliance teams. Translating business requirements into GenAI use cases. Creating PoCs and scaling to production.

Business Acumen
Demonstrates the initiative to explore alternate technologies and approaches to solving problems. Skilled in breaking down problems, documenting problem statements and estimating efforts. Has the ability to analyze the impact of technology choices. Skilled in negotiation to align stakeholders and communicate a single synthesized perspective to the scrum team. Balances value propositions for competing stakeholders. Demonstrates knowledge of the competitive environment. Demonstrates knowledge of technologies in the market to help make buy-vs-build recommendations, scope MVPs, and drive market timing decisions.

Leadership
Influences through others; builds direct and "behind the scenes" support for ideas. Pre-emptively sees downstream consequences and effectively tailors influencing strategy to support a positive outcome. Able to verbalize what is behind decisions and downstream implications. Continuously reflects on successes and failures to improve performance and decision-making. Understands when change is needed. Participates in technical strategy planning.

Personal Attributes
Able to effectively direct and mentor others in critical thinking skills. Proactively engages with cross-functional teams to resolve issues and design solutions using critical thinking and analysis skills and best practices. Finds important patterns in seemingly unrelated information. Influences and energizes others toward the common vision and goal. Maintains excitement for a process and drives toward new ways of meeting the goal even when odds and setbacks render one path impassable. Innovates and integrates new processes and/or technology to significantly add value to GE HealthCare. Identifies how the cost of change weighs against the benefits and advises accordingly. Proactively learns new solutions and processes to address seemingly unanswerable problems.

Inclusion and Diversity
GE HealthCare is an Equal Opportunity Employer where inclusion matters. Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law. We expect all employees to live and breathe our behaviors: to act with humility and build trust; lead with transparency; deliver with focus, and drive ownership – always with unyielding integrity. Our total rewards are designed to unlock your ambition by giving you the boost and flexibility you need to turn your ideas into world-changing realities. Our salary and benefits are everything you'd expect from an organization with global strength and scale, and you'll be surrounded by career opportunities in a culture that fosters care, collaboration and support.

Additional Information
Relocation Assistance Provided: No
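Since the posting emphasizes Retrieval-Augmented Generation design, here is a minimal, self-contained Python sketch of the retrieval step only. The embed() function is a deliberately crude placeholder (hashed character counts); a real system would obtain embeddings from Bedrock, SageMaker, or an embedding library, and store them in FAISS or OpenSearch rather than a Python list.

```python
"""Toy RAG retrieval step: rank documents by cosine similarity to a query,
then build a context-grounded prompt. embed() is a placeholder, not a real model."""
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: bucket character counts into a fixed-size vector.
    vec = np.zeros(64)
    for ch in text.lower():
        vec[ord(ch) % 64] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

def top_k(query: str, documents: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    # Vectors are unit-normalised, so the dot product is cosine similarity.
    ranked = sorted(documents, key=lambda d: float(np.dot(q, embed(d))), reverse=True)
    return ranked[:k]

docs = [
    "Amazon SageMaker covers model training, tuning, and deployment.",
    "Amazon Bedrock offers serverless access to foundation models.",
    "FAISS and Amazon OpenSearch can act as vector stores.",
]
question = "Which service gives serverless access to foundation models?"
context = "\n".join(top_k(question, docs))
print(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
```

The assembled prompt would then be sent to whichever LLM the team has chosen; only the retrieval and prompt-construction pattern is shown here.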

Posted 5 days ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Konovo is a global healthcare intelligence company on a mission to transform research through technology, enabling faster, better, connected insights. Konovo provides healthcare organizations with access to over 2 million healthcare professionals—the largest network of its kind globally. With a workforce of over 200 employees across 5 countries: India, Bosnia and Herzegovina, the United Kingdom, Mexico, and the United States, we collaborate to support some of the most prominent names in healthcare. Our customers include over 300 global pharmaceutical companies, medical device manufacturers, research agencies, and consultancy firms. We are expanding our hybrid Bengaluru team to support our transition from a services-based model toward a scalable product- and platform-driven organization. As a DevOps Engineer you will support the deployment, automation, and maintenance of our software development process and cloud infrastructure on AWS. In this role you will get hands-on experience collaborating with a global, cross-functional team working to improve healthcare outcomes through market research. We are an established but fast-growing business – powered by innovation, data, and technology. Konovo's capabilities are delivered through our cloud-based platform, enabling customers to collect data from healthcare professionals and transform it into actionable insights using cutting-edge AI combined with proven market research tools and techniques. As a DevOps Engineer, you will learn new tools, improve existing systems, and grow your expertise in cloud operations and DevOps practices.

What You'll Do: Infrastructure automation using Infrastructure as Code tools. Support and improve CI/CD pipelines for application deployment. Work closely with engineering teams to streamline and automate development workflows. Monitor infrastructure performance and help troubleshoot issues. Contribute to team documentation, knowledge sharing, and process improvements.

What We're Looking For: 3+ years of experience in a DevOps or similar technical role. Familiarity with AWS or another cloud provider. Exposure to CI/CD tools such as GitHub Actions, Jenkins, or GitLab CI. Some experience with scripting languages (e.g., Bash, Python) for automation. Willingness to learn and adapt in a collaborative team environment.

Nice to Have (Not Required): Exposure to Infrastructure as Code (e.g., CDK, CloudFormation). Experience with containerization technologies (e.g., Docker, ECS). Awareness of cloud security and monitoring concepts. Database management & query optimization experience.

Why Konovo? Lead high-impact projects that shape the future of healthcare technology. Be part of a mission-driven company that is transforming healthcare decision-making. Join a fast-growing global team with career advancement opportunities. Thrive in a hybrid work environment that values collaboration and flexibility. Make a real-world impact by helping healthcare organizations innovate faster. This is just the beginning of what we can achieve together. Join us at Konovo and help shape the future of healthcare technology! Apply now to be part of our journey.
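As a flavour of the day-to-day cloud automation this role describes, the sketch below uses boto3 to flag EC2 instances that are missing an "owner" tag. The tag policy and the choice of EC2 are illustrative assumptions, and AWS credentials are expected to come from the environment.

```python
"""Example automation task: report EC2 instances missing an 'owner' tag.
The required tag key is an illustrative policy, not a Konovo convention."""
import boto3

def untagged_instances(required_tag: str = "owner") -> list[str]:
    ec2 = boto3.client("ec2")
    missing = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if required_tag not in tags:
                    missing.append(instance["InstanceId"])
    return missing

if __name__ == "__main__":
    for instance_id in untagged_instances():
        print(f"Missing 'owner' tag: {instance_id}")
```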

Posted 5 days ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

We are seeking a highly skilled and experienced DevOps Engineer with 5+ years of hands-on experience in automating, optimizing, and supporting mission-critical deployments in cloud environments. The ideal candidate will have a strong background in CI/CD pipelines, infrastructure as code, containerization, and cloud platforms such as AWS, Azure, or GCP. Key Responsibilities: Design, implement, and maintain scalable CI/CD pipelines. Manage and monitor cloud infrastructure (AWS, Azure, or GCP). Design and implement scalable and secure cloud infrastructure solutions. Manage and optimize cloud resources to ensure high availability and performance. Monitor cloud environments and implement proactive measures for reliability and cost-efficiency. Collaborate with development and operations teams to support cloud-native applications. Ensure compliance with security standards and implement the best practices in cloud security. Troubleshoot and resolve issues related to cloud infrastructure and services. Implement and maintain container orchestration platforms (e.g., Kubernetes, Docker Swarm). Collaborate with development and QA teams to streamline deployment processes. Ensure system reliability, availability, and performance through monitoring and alerting tools (e.g., Prometheus, Grafana, ELK Stack). Maintain security best practices across infrastructure and deployments. Troubleshoot and resolve issues in development, test, and production environments. Required Skills & Qualifications: Bachelor’s degree in Computer Science, Engineering, or related field. 5+ years of experience in DevOps or related roles. Proficiency in scripting languages (e.g., Bash, Python, or Go). Strong experience with CI/CD tools (e.g., Jenkins, GitLab CI, CircleCI). Hands-on experience with containerization and orchestration (Docker, Kubernetes). Experience with infrastructure as code (Terraform, Ansible, or similar). Familiarity with monitoring and logging tools. Excellent problem-solving and communication skills. Preferred Qualifications: Certifications such as AWS Certified DevOps Engineer, CKA/CKAD, or similar. Cloud certifications such as AWS Certified Solutions Architect, Azure Solutions Architect Expert, or Google Cloud Professional Cloud Architect. Experience with serverless architectures and microservices. Familiarity with Agile/Scrum methodologies. What We Offer: Competitive salary and benefits. Opportunities for professional growth and development. A collaborative and innovative work environment.
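To illustrate the monitoring side of the role, here is a small Python probe that checks endpoint availability and latency. The URLs are placeholders, and in practice the results would be exported as metrics to tools such as Prometheus/Grafana rather than printed.

```python
"""Simple availability probe of the kind a monitoring/alerting setup might run.
Endpoints are placeholders; real targets would come from config or service discovery."""
import requests

ENDPOINTS = [
    "https://example.com/healthz",
    "https://example.org/status",
]

def probe(url: str, timeout: float = 5.0) -> dict:
    try:
        resp = requests.get(url, timeout=timeout)
        return {
            "url": url,
            "up": resp.status_code < 500,
            "status": resp.status_code,
            "latency_s": resp.elapsed.total_seconds(),
        }
    except requests.RequestException as exc:
        return {"url": url, "up": False, "error": str(exc)}

if __name__ == "__main__":
    for result in (probe(u) for u in ENDPOINTS):
        print(result)  # in practice, push these values to the monitoring stack
```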

Posted 5 days ago

Apply

9.0 - 17.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Join our Team

About this opportunity: The Data Scientist will support software developers, database architects, data analysts and AI/ML architects on various internal and external data science and AI/ML related initiatives and projects that run in cloud-based environments. You are self-directed and comfortable supporting the data analytics and AI/ML needs of multiple teams, systems and products. You will also be responsible for integrating the solutions created with the architecture used across the company and its customers.

What you will bring: 9-17 years of Telecom/IT experience in a data-related role with AI/ML and Data Science experience. At least 5 years' experience using the following software/tools: Python development for AI/ML and automation; data pipeline development in the Elastic (ELK) Stack, Hadoop, Spark, Kafka, etc.; relational SQL and NoSQL databases, for example Postgres and Cassandra; MLOps experience including model deployment and monitoring; AI/ML solutions for prediction, classification and Natural Language Processing; Generative AI technologies like Large Language Models, Agentic AI, RAG and other developing technologies. Proficiency in Python programming, with experience in Python libraries commonly used in data engineering and machine learning (e.g., pandas, numpy, scikit-learn). Hands-on experience with Kubernetes and containerization technologies (e.g., Docker). Proficiency in Linux and shell scripting.

What you will do: Work hand in hand with business representatives to design, architect and deliver AI/ML and/or Generative AI solutions with business ROI in perspective. Create and maintain optimal data pipeline architecture. Identify data-intensive and AI/ML use cases for different existing managed service accounts. Prepare technical presentations, design documents and demonstrations for customer presentations on data strategy and AI/ML solutions. Prepare design and solution documents for AI/ML solutions, including classic AI/ML and Generative AI based solutions. Assemble large, complex data sets that meet functional/non-functional business requirements. Design solutions that keep data separated and secure across national boundaries through multiple data centers and strategic customers/partners, keeping international security standards and organization and customer security requirements in mind. Bring working knowledge of Generative AI technologies, Large Language Models (LLMs), Agentic AI architecture and implementation tools.

Why join Ericsson? At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like. Encouraging a diverse and inclusive organization is core to our values at Ericsson, that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Bangalore
Req ID: 769881
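As a compact illustration of the pandas/numpy/scikit-learn toolchain the posting names, the sketch below trains a logistic-regression classifier on synthetic "service degradation" data. The feature names and labelling rule are invented purely for the example.

```python
"""Minimal classification workflow with pandas, numpy and scikit-learn
(synthetic data stands in for real telecom datasets)."""
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "latency_ms": rng.normal(50, 10, 1000),
    "drop_rate": rng.random(1000),
})
# Label is 1 when latency and drop rate are both elevated (toy rule).
df["degraded"] = ((df["latency_ms"] > 55) & (df["drop_rate"] > 0.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df[["latency_ms", "drop_rate"]], df["degraded"], test_size=0.2, random_state=0
)
model = LogisticRegression().fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```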

Posted 5 days ago

Apply

3.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Description
We are looking for a Front-End Developer who is motivated to combine the art of design with the art of programming. We need someone who can create complex components from scratch and will be able to modify core functionalities of existing components from any third-party libraries.

Responsibilities
Build reusable code and libraries for future use. Ensure the technical feasibility of UI/UX designs. Optimize the application for maximum speed and scalability. Work with a team of UX designers to ensure any UI design changes complement the overall user experience and journey. Collaborate with other team members and stakeholders. Build and implement top-notch user interfaces using JavaScript and the React framework. Write efficient JavaScript code while also using HTML and CSS. Provide tech support for clearing bottlenecks and obstacles.

Skills And Qualifications
Bachelor's/Master's degree in computer science, information technology, engineering, or a similar field. At least 3 years of experience working as a React developer. Basic understanding of server-side CSS pre-processing platforms, such as LESS and SASS. Proficient understanding of client-side scripting and JavaScript frameworks, including jQuery. Good understanding of React.js, Next.js and any design frameworks. Proficient understanding of cross-browser compatibility issues and ways to work around them. A strong portfolio that demonstrates a range of UI design techniques. Excellent knowledge of browser troubleshooting and debugging practices and techniques. Sense of ownership and pride in your performance and its impact on the company's success.

Mandatory Skills: UI, HTML, React, Next.js

Posted 5 days ago

Apply

3.0 - 5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About Us PlusWealth Capital Management LLP is a proprietary high-frequency trading firm, active in multiple markets including equities, options, and futures. We thrive on building cutting edge, data-driven, and tech-based trading algorithms. As a dynamic, machine-learning oriented trading platform, we embody the ethos of THINK. TECH. TRADE. If you share our vision, we’d love to have you onboard. Role Overview We are seeking an experienced DevOps Engineer to join our high-frequency trading (HFT) team. In this role, you will be instrumental in managing, optimizing, and scaling the infrastructure that underpins our high-performance trading systems. Your work will focus on large-scale data processing, robust CI/CD pipeline implementation, and automation to achieve seamless, low-latency performance. Collaborating closely with traders and developers, you will ensure our systems operate efficiently and can handle significant data loads with minimal downtime. Core Responsibilities Infrastructure Development: Design, configure, and maintain resilient infrastructure for high-performance trading and back testing systems. System Resilience: Develop and implement backup, disaster recovery, and failover protocols to guarantee continuous operations. CI/CD Pipeline Optimization: Enhance and automate CI/CD pipelines with real-time issue detection and resolution capabilities to support efficient deployments. Tool and Software Management: Manage the deployment, configuration, and updates of both open-source and proprietary applications, ensuring alignment with trading requirements. Troubleshooting and Documentation: Diagnose and resolve issues across hardware, software, and network systems, documenting processes for knowledge sharing and continuous improvement. Automation and Scripting: Automate key processes, leveraging Python, Bash, and Ansible for routine tasks, monitoring, and system health checks, and utilize monitoring tools like Grafana. Containerization and Orchestration : Implement Docker, Kubernetes, and other containerization/orchestration tools to optimize system scalability and reliability. Qualification Criteria Technical Skills and Requirements - Experience: 3-5 years in a DevOps role, ideally in financial service/ Capital Market/ Investment Management or similarly demanding environments. Linux Proficiency: Advanced expertise in Linux systems, including scripting skills (Bash, Python). Configuration Management: Hands-on experience with Ansible, Chef, Puppet, or similar tools for system automation and configuration. Containerization: Proficiency with Docker, Kubernetes, and related container orchestration tools. Version Control & CI/CD: Strong experience with Git, Jenkins, and Nexus; familiarity with Agile methodologies is a plus. Networking & Security: Deep understanding of networking protocols and cybersecurity best practices. Exposure to trading environments and low-latency systems, a distinct advantage. Benefits & Perks: Competitive compensation and performance-based bonuses. Flat organizational structure with high ownership and visibility. Medical insurance – we've got you covered. Catered meals/snacks for 5 working days in office. Generous paid time off policies. Pluswealth Capital Management is an equal opportunity employer
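For the routine health checks and monitoring hooks mentioned above, a minimal Python sketch using psutil might look like the following. The thresholds are illustrative, and in a trading environment the readings would normally feed Grafana or an alerting pipeline rather than stdout.

```python
"""Routine system health check of the kind that could feed dashboards
or an Ansible-driven alerting job. Thresholds are illustrative only."""
import psutil

THRESHOLDS = {"cpu": 85.0, "memory": 90.0, "disk": 90.0}

def health_snapshot() -> dict:
    return {
        "cpu": psutil.cpu_percent(interval=1),       # % CPU over a 1-second sample
        "memory": psutil.virtual_memory().percent,   # % RAM in use
        "disk": psutil.disk_usage("/").percent,      # % of root filesystem used
    }

if __name__ == "__main__":
    for metric, value in health_snapshot().items():
        status = "ALERT" if value >= THRESHOLDS[metric] else "ok"
        print(f"{metric}: {value:.1f}% [{status}]")
```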

Posted 5 days ago

Apply

10.0 - 15.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

About the company: With over 2.5 crore customers, over 5,000 distribution points and nearly 2,000 branches, IndusInd Bank is a universal bank with a widespread banking footprint across the country. IndusInd offers a wide array of products and services for individuals and corporates, including microfinance, personal loans, personal and commercial vehicle loans, credit cards and SME loans. Over the years, IndusInd has grown ceaselessly and dynamically, driven by the zeal to offer our customers banking services at par with the highest quality standards in the industry. IndusInd is a pioneer in digital-first solutions, bringing together the power of a next-gen digital product stack, customer excellence and the trust of an established bank.

Job Purpose: To work on implementing data modeling solutions; to design data flows and structures that reduce data redundancy and improve data movement among systems, defining data lineage; to work in the Azure Data Warehouse; to work with large-volume data integration.

Experience: With overall experience between 10 and 15 years, the applicant must have a minimum of 8 to 11 years of hands-on professional experience in data modeling for a large data warehouse with multiple sources.

Technical Skills: Expertise in core data modeling principles/methods, including conceptual, logical and physical data models. Ability to utilize BI tools like Power BI, Tableau, etc. to represent insights. Experience in translating/mapping relational data models into XML and schemas. Expert knowledge of metadata management and of relational and data modeling tools like ER Studio, Erwin or others. Hands-on relational, dimensional and/or analytical experience (using RDBMS, dimensional, NoSQL, ETL and data ingestion protocols). Very strong in SQL queries; expertise in performance tuning of SQL queries. Ability to analyse source systems and create source-to-target mappings. Ability to understand the business use case and create data models or joined data in the data warehouse. Preferred experience in the banking domain and experience in building data models/marts for various banking functions.
Good to have knowledge of:
- Azure PowerShell scripting or Python scripting for data transformation in ADF
- SSIS, SSAS, and BI tools like Power BI
- Azure PaaS components like Azure Data Factory, Azure Databricks, Azure Data Lake, Azure Synapse (DWH), PolyBase, ExpressRoute tunneling, etc.
- API integration

Responsibility: Understand the existing data model, existing data warehouse design and functional domain subject areas of data, documenting the same with the as-is architecture and the proposed one. Understand the existing ETL process and various sources, analyzing and documenting the best approach to design the logical data model where required. Work with the development team to implement the proposed data model into a physical data model and build data flows. Work with the development team to optimize the database structure with best-practice optimization methods. Analyze, document and implement re-use of data models for new initiatives. Interact with stakeholders, users and other IT teams to understand the ecosystem and analyze for solutions. Work on user requirements and create queries for building consumption views for users from the existing DW data. Train and lead a small team of data engineers.

Qualifications: Bachelor's in Computer Science or equivalent. Should have certifications in Data Modeling and Data Analysis. Good to have Azure Fundamentals and Azure Engineer certifications (AZ-900 or DP-200/201).

Behavioral Competencies: Should have excellent problem-solving and time management skills. Strong analytical thinking skills. The applicant should have excellent communication skills and be process-oriented with a flexible execution mindset. Strategic thinking with a research and development mindset. Clear and demonstrative communication. Efficiently identifies and solves issues. Identifies, tracks and escalates risks in a timely manner.

Selection Process: Interested candidates are mandatorily required to apply through the listing on Jigya. Only applications received through Jigya will be evaluated further. Shortlisted candidates may need to appear in an online assessment and/or a technical screening interview administered by Jigya on behalf of IndusInd Bank. Candidates selected after the screening rounds will be processed further by IndusInd Bank.

Posted 5 days ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Join us as we pursue our ground-breaking new vision to make machine data accessible, usable and valuable to everyone. We are a company filled with people who are passionate about our product and seek to deliver the best experience for our customers. At Splunk, we're committed to our work, customers, having fun and, most importantly, to each other's success. Learn more about Splunk careers and how you can become a part of our journey!

About The Role
Be part of a full stack team whose mission is to build a user experience for the leading Observability platform built by Splunk. Splunk is the only company whose products have been named as leaders in both Security and Observability. You will contribute to the backend and frontend code to build and maintain the IT Service Intelligence product at Splunk. This position will need to be physically located in the Hyderabad, India area.

Responsibilities
Design and develop/implement features from scratch. Handle non-functional requirements like responsiveness, performance, high availability, etc. Regularly lead design and code reviews, and participate in architecture discussions. Work with Product Managers to refine the requirements. Collaborate with interaction designers and visual designers to build features with an intuitive user experience. Work with the Quality team to define the scope for testing. Help teams estimate software deliverables, often across multiple sprint timelines. Work with the Architect to refine the technical backlog and define technical debt. Take ownership and initiative to own and address issues promptly for our internal and external customers. Become proficient in Splunk's core technologies and processes as they apply to application development. Mentor junior engineers and constantly raise the bar on engineering practices. Help the team achieve productivity by improving on process.

Qualifications (Must-Have)
7+ years of relevant industry experience with a bachelor's degree, or 5+ years and a master's degree. Good knowledge of web standards and modern browsers, responsive design, and the full web technology stack. Experience in client-side scripting and JavaScript frameworks (React). A strong foundation in computer science, with strong competencies in operating systems, networks, data structures, algorithms, distributed systems, and software design. Excellent problem solving, collaboration, and communication skills, both verbal and written. A demonstrated capability for creative thinking, intellectual and entrepreneurial exploration. Experience developing, debugging, and performance tuning highly concurrent systems. Extensive knowledge and production programming experience in at least one of Java/C++/Python. Experience designing and developing REST-based services with well-defined contracts. Self-starter who is comfortable taking the lead on a task, collaborating with other engineers to design and implement features. Good knowledge of the agile software development process.

Nice-to-have skills
Strong understanding of one of the major Cloud technologies, e.g. AWS, Azure, or Google Cloud. Experience in working within a Continuous Delivery (CD) development model.

We value diversity at our company. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or any other applicable legally protected characteristics in the location in which the candidate is applying.
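Because the role involves designing REST-based services with well-defined contracts, here is a minimal Python/Flask sketch of such an endpoint. Flask and the in-memory data are purely illustrative and are not implied to be Splunk's stack; the point is the explicit response contract and error behaviour.

```python
"""Minimal REST endpoint with an explicit response contract (illustrative only)."""
from flask import Flask, jsonify

app = Flask(__name__)

# Toy in-memory store standing in for a service-health backend.
SERVICES = {"checkout": {"status": "healthy", "kpi": 0.99}}

@app.route("/api/v1/services/<name>", methods=["GET"])
def get_service(name: str):
    service = SERVICES.get(name)
    if service is None:
        return jsonify({"error": f"unknown service '{name}'"}), 404
    # Contract: {"name": str, "status": str, "kpi": float}
    return jsonify({"name": name, **service})

if __name__ == "__main__":
    app.run(port=8080)
```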

Posted 5 days ago

Apply

9.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Hiring for one of the leading automobile organisations.

Job Title: CloudOps Engineer
Location: Gurgaon
Experience: 6–9 years
Department: XOps / Cloud Infrastructure
Employment Type: Full-time

Job Summary: We are looking for an experienced CloudOps Engineer to join our XOps department to manage and optimize our AWS and Azure cloud infrastructure. The role will also require a good understanding of FinOps to help monitor and control cloud spending effectively.

Key Responsibilities: Provision, configure, and maintain cloud environments across AWS and Azure. Manage cloud resources and services to ensure high availability, performance, and scalability. Implement automation for infrastructure provisioning using Terraform, CloudFormation, or ARM templates. Monitor cloud usage and performance; optimize cost and resources. Implement FinOps best practices for cost tracking, budgeting, forecasting, and reporting. Collaborate with teams to recommend and enforce cloud cost-saving strategies. Work with security teams to ensure cloud compliance and secure configurations. Provide support for cloud incidents and troubleshooting.

Required Skills: In-depth knowledge and hands-on experience with AWS and Azure cloud services. Proficiency in Terraform, Bicep, or similar IaC tools. Experience with cloud monitoring tools like CloudWatch, Azure Monitor, Datadog, etc. Understanding of FinOps principles and experience using tools like CloudHealth, Cloudability, or AWS Cost Explorer. Familiarity with CI/CD pipelines and DevOps practices.

Preferred Qualifications: Certifications like AWS Certified Solutions Architect, Azure Administrator, or FinOps Certified Practitioner. Strong scripting skills (Python, Bash, PowerShell). Experience with multi-cloud environments and hybrid cloud strategies.
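As an example of the FinOps reporting this role calls for, the sketch below pulls a per-service monthly cost breakdown from the AWS Cost Explorer API via boto3. The date range is a placeholder, and the ce:GetCostAndUsage permission plus configured credentials are assumed.

```python
"""FinOps-style cost breakdown by service using the AWS Cost Explorer API.
Dates are placeholders; adjust to the billing period of interest."""
import boto3

ce = boto3.client("ce")
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-06-01", "End": "2024-07-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for period in response["ResultsByTime"]:
    for group in period["Groups"]:
        service = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        if amount > 0:
            print(f"{service}: ${amount:,.2f}")
```

The same breakdown could be grouped by cost-allocation tags instead of service to support the team-level budgeting and chargeback conversations mentioned above.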

Posted 5 days ago

Apply

0 years

0 Lacs

India

On-site

DXC Luxoft is a leading global software services company, delivering professional services across multiple business verticals such as finance, automotive, and digital transformation. The Automotive practice inside DXC Luxoft delivers software projects to a majority of the world's car manufacturers and suppliers for both personal and commercial vehicles. We are known for our automotive software expertise and for helping our customers with high-quality services and deliveries. We are currently growing our team for our Swedish customers and are looking for automotive scripting engineers with knowledge of build systems to join us. You will be part of our growing organization in Sweden and will help develop next-generation systems for the automotive industry in the era of connected, autonomous, electric vehicles.

Responsibilities: Support automated build, test, and deployment processes. Collaborate closely with development and testing teams and other stakeholders to ensure seamless workflows and aligned objectives. Automate testing procedures to improve software quality and accelerate feedback cycles. Continuously evaluate and improve tools, processes, and workflows to significantly reduce build times and improve test execution speed.

Mandatory Skills: Automotive domain expertise; Python; software testing; AUTOSAR testing experience.

Soft skills: Team player, practical and pragmatic by nature; great communication skills to successfully advise and support the organization.

Nice-to-Have Skills: Experience working with Jenkins, Gerrit, WSL (Windows Subsystem for Linux), Docker, Artifactory and Linux. Insight into all stages of software deployment, especially automated build systems. Prior involvement in safety-critical development and an understanding of its implications from a CI/CD perspective.

Languages: English: C1 Advanced
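To show the kind of build-and-test automation described above, here is a small Python wrapper that runs pipeline stages and reports their durations, the metric you would watch when trying to reduce build times. The make and pytest commands are placeholders for whatever the real build system invokes.

```python
"""Sketch of a build-and-test wrapper that reports stage durations.
Stage commands are placeholders; a Jenkins job could call this script directly."""
import subprocess
import time

STAGES = {
    "build": ["make", "all"],          # placeholder build command
    "unit-tests": ["pytest", "-q"],    # placeholder test command
}

def run_stage(name: str, cmd: list[str]) -> bool:
    start = time.monotonic()
    result = subprocess.run(cmd, capture_output=True, text=True)
    duration = time.monotonic() - start
    print(f"{name}: {'ok' if result.returncode == 0 else 'FAILED'} in {duration:.1f}s")
    return result.returncode == 0

if __name__ == "__main__":
    # all() short-circuits, so a failed build skips the test stage.
    if not all(run_stage(name, cmd) for name, cmd in STAGES.items()):
        raise SystemExit(1)  # propagate failure to the CI runner
```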

Posted 5 days ago

Apply

5.0 years

0 Lacs

India

On-site

QuartzBio Overview QuartzBio (www.quartz.bio ) is a Software-as-a-Service (SaaS) solutions provider to the life sciences industry. We deliver innovative, data enabling technologies (i.e., software) that provide biotech/pharma (R&D) teams with enterprise-level access to sample/biomarker data management solutions & analytics, information, insight & reporting capabilities. Our end-to-end (from sample collection to biomarker data) suite of solutions are focused on providing sponsors information (data with context) – we do this by connecting biospecimen, assay as well as clinical data sources in a secure and scalable cloud-based infrastructure, enabling seamless, automated data management workflows, key insight development, improved collaboration, and the ability to make faster, more informed decisions. Position Summary As we continue to expand our software engineering team, we are seeking a highly experienced Software Engineer. You will work with a team of software engineers to design, develop, test and maintain software applications. The successful candidate will have a strong understanding of software architecture, programming concepts and tools, and be able to work independently to solve complex technical problems. In your role as Senior DevOps Engineer You will lead the design and implementation of scalable infrastructure solutions. You’ll mentor junior engineers and drive automation and reliability across our AWS environments. Key Responsibilities Manage projects and initiatives with moderate complexity. Collaborate with cross-functional teams to design, develop, test, and maintain software applications. Create design specifications, test plans and automated test scripts for individual work scope. Develop software solutions that are scalable, maintainable, and secure. Analyze, maintain, and implement (including performance profiling) existing software applications and develop specifications from business requirements. Understand the purpose of new features and help communicate that purpose to team members. Write and debug software systems in accordance with software development standards, including the Application Development Lifecycle. Debug and troubleshoot complex software issues and provide timely solutions. Implement new software features and enhancements. Ensure adherence to software development best practices and processes. Write clean, legible, efficient, and well-documented code. Lead code reviews and provide constructive feedback to peers. Help to support the work of their peers by pair programming, reviewing code, and through mentorship. Mentor junior team members and provide guidance. Continuously improve technical skills and stay up to date with emerging technologies. Communicate effectively with team members and stakeholders. Contribute to strategic planning and decision-making. When performing duties as Senior DevOps Engineer Lead the development and maintenance of the Terraform IaC repository, ensuring modularity and scalability. Design and implement deployment strategies for microservices on Kubernetes (EKS) using Helm. Provision new applications and environments, ensuring consistency across dev, staging, and production. Optimize CI/CD pipelines in GitLab, integrating with Kubernetes and Docker workflows. Manage and monitor Kubernetes clusters, pods, and services. Collaborate with engineering teams to standardize development tools and deployment technologies. Mentor junior engineers and contribute to architectural decisions. 
Identify opportunities to streamline and automate IaC development processes. Other duties as assigned Qualifications Bachelor’s degree related field and a minimum of 5 years of relevant work experience in cloud/infrastructure technologies, information technology (IT) consulting/support, systems administration, network operations, software development/support, technology solutions. 2-4 years of experience working in a customer-facing role and leading projects. Excellent problem-solving and analytical skills. Strong written and verbal communication skills. Ability to articulate ideas and write clear and concise reports. Role qualifications: For Senior DevOps Engineer 5+ years of DevOps experience. Deep expertise in AWS services and Terraform. Strong scripting and automation skills. Experience with container orchestration (EKS, Kubernetes) and Helm Charts. Experience with CI/CD tools, specifically GitLab. Leadership Expectations Follows Company's Principles and code of ethics on a day-to-day basis. Shows appreciation for individual talents, differences, and abilities of fellow team members. Listens and responds with appropriate actions. Supports change initiatives and continuous process improvements. Any data provided as a part of this application will be stored in accordance with our Privacy Policy. For CA applicants, please also refer to our CA Privacy Notice. Precision Medicine Group is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, age, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status or other characteristics protected by law. If you are an individual with a disability and require a reasonable accommodation to complete any part of the application process or are limited in the ability or unable to access or use this online application process and need an alternative method for applying, you may contact Precision Medicine Group at QuestionForHR@precisionmedicinegrp.com. It has come to our attention that some individuals or organizations are reaching out to job seekers and posing as potential employers presenting enticing employment offers. We want to emphasize that these offers are not associated with our company and may be fraudulent in nature. Please note that our organization will not extend a job offer without prior communication with our recruiting team, hiring managers and a formal interview process.
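As a small example of the operational tooling a Senior DevOps Engineer might write around EKS, the sketch below lists clusters via boto3 and flags any below a minimum Kubernetes version. The version policy is an illustrative assumption, not a QuartzBio standard, and credentials with EKS read permissions are assumed.

```python
"""Small operational check: list EKS clusters and flag outdated Kubernetes versions."""
import boto3

MIN_VERSION = (1, 27)  # illustrative policy threshold

def cluster_report() -> None:
    eks = boto3.client("eks")
    for name in eks.list_clusters()["clusters"]:
        info = eks.describe_cluster(name=name)["cluster"]
        major, minor = (int(part) for part in info["version"].split(".")[:2])
        flag = "UPGRADE NEEDED" if (major, minor) < MIN_VERSION else "ok"
        print(f"{name}: v{info['version']} status={info['status']} [{flag}]")

if __name__ == "__main__":
    cluster_report()
```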

Posted 5 days ago

Apply

10.0 years

0 Lacs

India

Remote

Job Title: Cloud Architect
Location: India (remote)
Experience Required: 10+ years (with 5+ years in cloud architecture)

About the Role: We are seeking a highly experienced Cloud Architect to lead the design, implementation, and optimization of robust cloud infrastructure solutions for enterprise clients. This role plays a strategic part in defining cloud adoption strategies, architecting secure and scalable solutions, and ensuring high performance, cost-efficiency, and security across multi-cloud environments.

Key Responsibilities: Design and implement scalable, secure, and cost-effective cloud architectures tailored to enterprise requirements. Develop and manage cloud adoption strategies, including modernization and migration roadmaps. Evaluate and recommend appropriate cloud services and technologies to meet business needs. Implement and enforce cloud governance, security, and compliance best practices. Develop and oversee disaster recovery and business continuity plans. Optimize cloud resource usage and cost performance across platforms. Collaborate with cross-functional teams including development, security, and network engineering. Lead architecture reviews and propose technical enhancements. Mentor junior architects and cloud engineers to support team growth. Maintain clear and thorough documentation for cloud systems, designs, and processes. Stay current with emerging trends and technologies in cloud computing.

Required Skills & Qualifications: 10+ years of professional experience in IT, with at least 5 years focused on cloud architecture and implementation. Deep expertise in major cloud platforms: Azure (preferred), AWS, or Google Cloud Platform. Strong understanding of cloud networking, including virtual networks (VNets), load balancers, and application gateways. In-depth knowledge of cloud security, including RBAC, NSGs, encryption protocols, Private Links, Private Endpoints, Managed Identities, and Service Principals. Hands-on experience with Infrastructure as Code (IaC) tools such as Terraform, ARM templates, and Helm charts. Proficiency in scripting languages such as Python, PowerShell, or Bash. Strong experience with containerization and orchestration technologies (Docker, Kubernetes). Experience in cloud cost optimization and performance tuning. Relevant certifications such as: Microsoft Certified: Azure Solutions Architect Expert; AWS Certified Solutions Architect – Professional; Google Cloud Certified – Professional Cloud Architect.

Posted 5 days ago

Apply

7.0 years

0 Lacs

India

Remote

About Us MyRemoteTeam, Inc is a fast-growing distributed workforce enabler, helping companies scale with top global talent. We empower businesses by providing world-class software engineers, operations support, and infrastructure to help them grow faster and better. Job Title: Senior QA Automation Experience: 7+ Years Key Requirements: Proficient in Java Hands-on experience with Selenium Solid experience with Rest Assured and Postman for API testing Strong logic building and problem-solving abilities Excellent automation scripting skills for both UI and API testing
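The role itself calls for Java with Selenium and Rest Assured, but the shape of an API test is language-agnostic; the Python sketch below shows the same status-code, payload, and header assertions against the public httpbin.org echo service, purely as an illustration.

```python
"""Illustrative API test: status code, payload, and header assertions.
Uses the public httpbin.org echo service as a stand-in for a real API under test."""
import requests

def test_get_returns_expected_payload():
    resp = requests.get("https://httpbin.org/get", params={"q": "demo"}, timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    assert body["args"]["q"] == "demo"  # the service echoes query parameters back
    assert resp.headers["Content-Type"].startswith("application/json")

if __name__ == "__main__":
    test_get_returns_expected_payload()
    print("API contract checks passed")
```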

Posted 5 days ago

Apply

0 years

0 Lacs

India

On-site

Don't see exactly the role you're looking for? No problem! At Sumo Logic, we're always on the lookout for talented professionals to join our team. By submitting your application here, you are expressing interest in potential engineering roles that may become available in the future. Why Apply Now? At Sumo Logic, we believe the strongest teams are built before the hiring starts. If you're passionate about customer advocacy, problem-solving, and delivering world-class support—even if you're not actively job hunting—we’d love to connect. By submitting your profile, you’ll be among the first we reach out to for upcoming openings in our Customer Support, Customer Success, or Renewal Specialist teams. This is your opportunity to stay top-of-mind as we grow our customer experience organization in India. Let’s shape the future of customer-centric innovation—together. Join the Frontlines of Customer Success at Sumo Logic At Sumo Logic, our mission is to make the digital world faster, more reliable, and secure . Our AI-powered SaaS analytics platform empowers global organizations to monitor, secure, and optimize their cloud-native systems. And behind that platform is a team of passionate support specialists and customer champions dedicated to helping our customers succeed. Whether you're solving deep technical issues, managing renewal cycles, or proactively guiding customers toward value, you’ll play a critical role in building loyalty, trust, and long-term impact. Our Customer Support and Success teams are recognized as some of the most technically adept and customer-obsessed teams in the industry—delivering real results across Dev, Sec, and Ops functions. Areas of Focus We Regularly Hire For Roles Such As Customer Success Specialist / Renewal Specialist – Driving retention, managing renewals, and identifying expansion opportunities in a SaaS environment Technical Support Engineer / Senior Technical Support Engineer – Providing in-depth troubleshooting, incident resolution, and technical guidance on log analytics, observability, and cloud platforms What We Value Experience in SaaS, subscription-based platforms, or technical support roles Strong communication, empathy, and problem-solving skills Familiarity with Salesforce, Gainsight, Zuora, Clari, or similar CRM/CS tools Technical acumen across logging systems, cloud platforms (AWS/GCP/Azure), SIEM, scripting, or observability tools Comfort with night shifts (US hours) and working independently in a fast-paced environment Curiosity and eagerness to learn new tools and technologies Tools & Tech You Might Work With CRM & CS Platforms: Salesforce, Gainsight, Clari, Zuora Observability & Monitoring: Sumo Logic, Splunk, DataDog, Elastic Cloud Providers: AWS, GCP, Azure Scripting/Debugging: Python, Bash, SQL, PowerShell Systems & Networking: TCP/IP, syslog, Docker, Kubernetes Ready to Stay on Our Radar? Submit your application today to express interest in future opportunities. We’ll keep your profile handy and reach out when a role opens that matches your skills and aspirations. About Us Let’s transform the customer experience together. Sumo Logic, Inc., empowers the people who power modern, digital business. Sumo Logic enables customers to deliver reliable and secure cloud-native applications through its SaaS analytics platform. The Sumo Logic Continuous Intelligence Platform™ helps practitioners and developers ensure application reliability, secure, and protect against modern security threats, and gain insights into their cloud infrastructures. 
Customers worldwide rely on Sumo Logic to get powerful real-time analytics and insights across observability and security solutions for their cloud-native applications. For more information, visit www.sumologic.com. Sumo Logic Privacy Policy. Employees will be responsible for complying with applicable federal privacy laws and regulations, as well as organizational policies related to data protection. The expected annual base salary range is unavailable for this posting as your application will be considered for several types and levels of positions. Compensation varies based on a variety of factors which include (but aren’t limited to) such as role level, skills and competencies, qualifications, knowledge, location, and experience. In addition to base pay, certain roles are eligible to participate in our bonus or commission plans, as well as our benefits offerings, and equity awards.

Posted 5 days ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Introduction
In the IBM Chief Information Office, you will be part of a dynamic team driving the future of AI and data science in large-scale enterprise transformations. We offer a collaborative environment where your technical expertise will be valued, and your professional development will be supported. Join us to work on challenging projects, leverage the latest technologies, and make a tangible impact on leading organisations.

Your Role And Responsibilities
As a QA (Quality Assurance)/Test Developer you will design better ways to identify potential weak spots, inefficiencies, and issues within software systems. This position will work closely with development teams and other test engineers in the implementation and delivery of AI applications that meet rigorous quality standards, budgets, and timelines. The role requires good personal organization and the ability to work well with a distributed global team in a fast-paced and exciting environment. You will play a key role by delivering quality functions for development teams within a test-driven framework. Your scope will include test plan development, test case execution, automation testing, data creation, API validation and incorporating test automation in the CI/CD pipelines.

Your Primary Responsibilities Include
Script/Language Proficiency: Possess knowledge of a scripting language and an automation framework like Selenium. API Testing and Automation Familiarity: Hands-on experience in API testing and API automation. Agile Development Methodologies: Familiarity with agile development methodologies.

Preferred Education
Master's Degree

Required Technical And Professional Expertise
5+ years of experience. Testing Proficiency: Strong in functional/integration/regression testing concepts with UI, and experience in automation testing using a Java/Selenium framework. API Testing: Hands-on experience. Experience with tuning test cases and UI automation for testing. Excellent Problem-Solving Skills: Demonstrated experience in problem-solving, with the ability to tackle complex issues and find effective solutions. Collaborative Development: Work closely with development teams to identify potential weak spots, inefficiencies, and issues within software systems, fostering a collaborative approach to software quality. Strong knowledge of manual software testing concepts.

Preferred Technical And Professional Experience
Experience in Java, Python or Go. Familiarity with cloud deployments like IBM Cloud, AWS, Azure. Familiarity with Red Hat OpenShift or virtualisation. Exposure to AI/ML is a plus.

Posted 5 days ago

Apply

5.0 years

0 Lacs

India

On-site

Job Summary: Senior Engineer - Sports Platform Engineering Services
Contract Terms: Permanent

THE TEAM
You will be working as part of a cross-functional platform team to deliver infrastructure, platform and database services for the Sports ticketing product(s) in our international markets, partnering with product and software engineering teams to ensure alignment on achieving business goals for the international Sports product suite.

THE JOB
Ticketmaster Sport is part of Live Nation Entertainment, the world’s leading live entertainment company, comprised of global market leaders: Ticketmaster, Live Nation Concerts, LN Media and Artist Nation Management. You will consult on and help to implement solutions as part of a Product Delivery team, enabling teams to deliver software faster through the creation of tooling and automation, and providing operational support for a range of products, including the delivery of ticketing and associated solutions for major tournaments and high-profile sports clubs and events. Because our business is online 24/7, you will be required to work out of hours and provide on-call cover on a rota basis.

WHAT YOU WILL BE DOING
Supporting and maintaining a hybrid Windows and Linux infrastructure, ensuring stability, performance, and operational efficiency.
Partnering with product engineering teams to bring new features and platform components into production, contributing to a seamless deployment pipeline.
Automating recurring tasks, deployments, and testing workflows using infrastructure-as-code and scripting tools to improve consistency and speed.
Designing and implementing highly available, fault-tolerant systems that meet performance and scalability demands.
Planning, organizing, and clearly communicating project progress and outcomes to stakeholders and team members.
Driving continuous improvements in system architecture, security, and operational processes to meet PCI compliance and internal standards.
Diagnosing and resolving complex issues across the full technology stack, from infrastructure to application level.
Conducting regular infrastructure audits, identifying gaps, and maintaining a well-prioritized backlog of improvements and enhancements.

WHAT YOU NEED TO KNOW (or TECHNICAL SKILLS)
5+ years of solid hands-on experience working with public cloud platforms, particularly AWS, including infrastructure provisioning and cloud-native services.
Proficient in scripting languages such as PowerShell, Bash, and Python, with a strong focus on automation and operational efficiency.
Skilled in using configuration management tools like Ansible, Chef, or Octopus Deploy to streamline and standardize infrastructure deployments.
Strong knowledge of Windows and Linux server configuration and administration, with the ability to support hybrid environments.
Experience managing and maintaining Active Directory, including integration with cloud services and identity platforms.
Familiar with network storage technologies, such as NetApp and Amazon S3, including setup, management, and optimization.
Proven experience with virtualization platforms, including VMware, Hyper-V, and XEN, supporting scalable, resilient systems.
Practical experience provisioning infrastructure using Terraform or CloudFormation, following infrastructure-as-code principles.
Working knowledge of secrets and service discovery tools, such as Vault and Consul, ensuring secure and reliable platform operations.
Comfortable working with and integrating RESTful APIs, enabling automation, observability, and platform extensibility.

YOU (BEHAVIOURAL SKILLS)
Applies advanced troubleshooting skills to proactively resolve issues and minimize operational disruption.
Demonstrates strong analytical thinking and a solution-oriented mindset, regularly identifying opportunities for improvement and innovation.
Uses sound judgment to select appropriate methods, tools, and approaches for solving complex technical challenges.
Actively contributes to the design and architecture of systems, ensuring alignment with business goals and technical strategy.
Regularly reviews performance, security, and quality metrics, identifying trends and taking action to maintain operational excellence.
Embraces new ideas with an open and adaptive mindset, actively seeking opportunities to experiment, learn, and grow.
Shares and applies proven solutions and best practices from across teams to drive consistency and efficiency across the organization.
Consistently delivers work to a high standard, demonstrating ownership, precision, and a commitment to continuous improvement.

LIFE AT TICKETMASTER
We are proud to be a part of Live Nation Entertainment, the world’s largest live entertainment company. Our vision at Ticketmaster is to connect people around the world to the live events they love. As the world’s largest ticket marketplace and the leading global provider of enterprise tools and services for the live entertainment business, we are uniquely positioned to successfully deliver on that vision. We do it all with an intense passion for Live and an inspiring and diverse culture driven by accessible leaders, attentive managers, and enthusiastic teams. If you’re passionate about live entertainment like we are, and you want to work at a company dedicated to helping millions of fans experience it, we want to hear from you.

Our work is guided by our values:
Reliability - We understand that fans and clients rely on us to power their live event experiences, and we rely on each other to make it happen.
Teamwork - We believe individual achievement pales in comparison to the level of success that can be achieved by a team.
Integrity - We are committed to the highest moral and ethical standards on behalf of the countless partners and stakeholders we represent.
Belonging - We are committed to building a culture in which all people can be their authentic selves, have an equal voice and opportunities to thrive.

EQUAL OPPORTUNITIES
We are passionate and committed to our people and go beyond the rhetoric of diversity and inclusion. You will be working in an inclusive environment and be encouraged to bring your whole self to work. We will do all that we can to help you successfully balance your work and home life. As a growing business we will encourage you to develop your professional and personal aspirations, enjoy new experiences, and learn from the talented people you will be working with. It's talent that matters to us, and we encourage applications from people irrespective of their gender, race, sexual orientation, religion, age, disability status or caring responsibilities. #LI-AK
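As an illustration of the "automating recurring tasks using scripting tools" responsibility above, here is a hedged sketch of a small AWS audit script. It uses boto3 to flag EC2 instances missing required tags; the region and the tag policy are assumptions, not details from the posting.

```python
# Hypothetical sketch: the region and REQUIRED_TAGS policy are assumptions.
import boto3

REQUIRED_TAGS = {"Owner", "Environment", "CostCentre"}  # assumed tagging policy


def find_untagged_instances(region="eu-west-1"):
    """Return instance IDs missing any of the required tags."""
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    missing = []
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if not REQUIRED_TAGS.issubset(tags):
                    missing.append(instance["InstanceId"])
    return missing


if __name__ == "__main__":
    for instance_id in find_untagged_instances():
        print(f"Missing required tags: {instance_id}")
```

A script like this would usually be scheduled (for example from a cron job or CI task) and its output fed into the audit backlog mentioned above.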

Posted 5 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

About The Company
Tata Communications Redefines Connectivity with Innovation and Intelligence. Driving the next level of intelligence powered by Cloud, Mobility, Internet of Things, Collaboration, Security, Media services and Network services, we at Tata Communications are envisaging a New World of Communications.

Job Description
Responsible for different aspects of engineering activities to provide differentiated services and solutions. These may also include product evaluation, solution design and testing, and roll-out plans for existing and new services, including the design of tools needed for the operation of these new services and systems. This is an operational role, responsible for delivering results that have a direct impact on day-to-day engineering operations in the areas of MS-SQL database administration, patching and life cycle management.

Responsibilities
Install, configure, upgrade, and patch SQL Server instances (on-premises and/or cloud – Azure SQL): 2017, 2019, 2022.
Perform MS-SQL patching and installation of SQL Server Service Packs.
Upgrade databases across different MS-SQL versions.
Collect data, analyse and prepare capacity planning, sizing, and database growth projections.
Schedule DB backups to ensure restore/recovery without data loss across all environments.
Configure and support replication across multiple DB setups.
Provide a disaster recovery solution at the remote site for the production databases using Log Shipping.
Implement and maintain High Availability (HA) and Disaster Recovery (DR) solutions such as AlwaysOn Availability Groups and Replication.
Set up Database Maintenance Plans to reorganize indexes, re-index and update index statistics on the production databases.
Use System Monitor to find bottlenecks in CPU, disk I/O and memory, and improve database server performance.
Use SQL Server Profiler to monitor and record database activities of users and applications.
Use DBCC commands to troubleshoot issues related to database consistency.
Use the Index Tuning Wizard to redesign and create indexes for better performance.
Fine-tune stored procedures using the Execution Plan in T-SQL for better performance.
Streamline server/database-level security by creating Windows and SQL logins with the appropriate server/DB roles and object-level permissions.
Use Data Transformation Services (DTS)/SQL Server Integration Services (SSIS), an Extract, Transform, Load (ETL) tool of SQL Server, to populate data from various data sources, creating packages for different data loading operations for applications.
Implement triggers and stored procedures and enforce business rules via checks and constraints.
Review DB scripts from the development team before releasing them into production.
Well versed with Active/Passive HA setup.
Automation of L1/L2 MS-SQL DB tasks.
Performance tuning and monitoring, and database hardening.

Desired Skill Sets
PowerShell scripting, Python
Linux basics
MySQL, PostgreSQL, Oracle (knowledge of other DBs: good to have)
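To make the "Automation of L1/L2 MS-SQL DB tasks" item above concrete, here is an illustrative Python sketch that reports heavily fragmented indexes via the sys.dm_db_index_physical_stats DMV. The connection string, server and database names, and the 30% threshold are assumptions; a real version would typically run from a scheduler or agent job rather than interactively.

```python
# Illustrative sketch: server, database, driver version and the 30% threshold
# are assumptions, not values from the posting.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlprod01;DATABASE=AppDB;Trusted_Connection=yes;"
)

FRAGMENTATION_QUERY = """
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       i.name                            AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
  AND i.name IS NOT NULL
ORDER BY ips.avg_fragmentation_in_percent DESC;
"""


def report_fragmented_indexes():
    # Print indexes whose fragmentation exceeds the assumed threshold.
    with pyodbc.connect(CONN_STR) as conn:
        cursor = conn.cursor()
        for table_name, index_name, frag in cursor.execute(FRAGMENTATION_QUERY):
            print(f"{table_name}.{index_name}: {frag:.1f}% fragmented")


if __name__ == "__main__":
    report_fragmented_indexes()
```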

Posted 5 days ago

Apply

5.0 years

9 - 12 Lacs

Tada

On-site

DevOps Engineer
Job Location: Station-S2880 Central Expressway, Sri City, AP, 517646, India
Joining Time: Immediate to 30 days

Key Responsibilities:
Design, build, and maintain scalable and secure infrastructure (on AWS/Azure/GCP).
Develop and maintain CI/CD pipelines for automated testing and deployment.
Implement Infrastructure as Code using tools like Terraform, CloudFormation, or Ansible.
Monitor system performance and troubleshoot infrastructure issues.
Maintain and improve security, backup, and disaster recovery procedures.
Collaborate with development teams to optimize application performance and release cycles.
Automate routine tasks to increase system efficiency.
Manage containerized applications using Docker and orchestration tools like Kubernetes.

Required Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field.
5 to 9 years of overall experience, with 3+ years in a DevOps or Site Reliability Engineering (SRE) role.
Strong experience with cloud platforms (AWS, Azure, or GCP).
Proficient in scripting languages like Bash, Python, or Shell.
Experience with CI/CD tools (e.g., Jenkins, GitLab CI/CD, CircleCI).
Hands-on experience with Docker and container orchestration (Kubernetes, ECS, etc.).
Familiarity with monitoring tools (Prometheus, Grafana, ELK, or CloudWatch).

Job Type: Full-time
Pay: ₹900,000.00 - ₹1,200,000.00 per year
Work Location: In person
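As a small example of the routine Kubernetes housekeeping a role like this tends to automate, here is a hedged sketch using the official Kubernetes Python client to list pods that are not healthy. Cluster access via a local kubeconfig is an assumption; this is an illustration, not the team's actual tooling.

```python
# Hedged sketch using the official Kubernetes Python client; the phases treated
# as "healthy" and the use of a local kubeconfig are assumptions.
from kubernetes import client, config


def report_unhealthy_pods():
    """Print pods that are not in the Running or Succeeded phase."""
    config.load_kube_config()  # inside a cluster, config.load_incluster_config() instead
    v1 = client.CoreV1Api()
    pods = v1.list_pod_for_all_namespaces(watch=False)
    for pod in pods.items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")


if __name__ == "__main__":
    report_unhealthy_pods()
```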

Posted 5 days ago

Apply

8.0 years

0 Lacs

Kochi, Kerala, India

On-site

Role Description
Role: Oracle PL/SQL Developer
Experience Required: 8+ Years

Role Summary
We are seeking an experienced Oracle PL/SQL Developer with strong expertise in database development and Unix scripting. The ideal candidate should have extensive experience in writing complex PL/SQL code, stored procedures, triggers, and working with large datasets. Familiarity with ETL tools and domain knowledge in Capital Markets or the Banking industry is a plus.

Must-Have Skills
Oracle PL/SQL: 8+ years of hands-on experience
Expertise in developing stored procedures, triggers, packages, and performance tuning
Strong SQL query optimization skills
Unix/Linux Scripting: Proficient in writing shell scripts for automation and integration
ETL Tools: Working knowledge of any ETL tool (e.g., Informatica, DataStage, Talend)
Strong Communication Skills: Ability to clearly communicate technical concepts to technical and non-technical stakeholders

Good-to-Have Skills
Exposure to or experience in the Capital Markets or Banking domain
Understanding of data integration, data warehousing, or reporting systems

Key Responsibilities
Design, develop, and optimize PL/SQL procedures, functions, and packages
Write and maintain Unix shell scripts for data processing and system automation
Collaborate with cross-functional teams including ETL developers, business analysts, and QA
Participate in code reviews and ensure best practices for database development
Work closely with stakeholders to understand requirements and deliver high-quality solutions
Troubleshoot and resolve performance issues in SQL and PL/SQL

Educational Qualification
Bachelor's or Master's degree in Computer Science, Engineering, or a related field

Skills: Oracle PL/SQL, Unix, Scripting
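For illustration of how PL/SQL work is commonly wrapped in automation (the posting pairs PL/SQL with Unix scripting), here is a hedged Python sketch that invokes a stored procedure using the python-oracledb driver. The DSN, credentials, and the pkg_loader.load_daily_trades procedure are hypothetical; in practice the same step is often driven from a shell script via SQL*Plus instead.

```python
# Hypothetical sketch: the DSN, credentials, and the procedure name
# pkg_loader.load_daily_trades are illustrative, not from the posting.
import oracledb


def run_daily_load(batch_date: str) -> None:
    """Invoke a PL/SQL procedure as part of a scheduled batch job."""
    conn = oracledb.connect(user="etl_user", password="***", dsn="dbhost/ORCLPDB1")
    try:
        cursor = conn.cursor()
        status = cursor.var(str)  # OUT parameter returned by the procedure
        cursor.callproc("pkg_loader.load_daily_trades", [batch_date, status])
        print(f"Procedure completed with status: {status.getvalue()}")
        conn.commit()
    finally:
        conn.close()


if __name__ == "__main__":
    run_daily_load("2024-06-30")
```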

Posted 5 days ago

Apply

5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Description
Job Location: Chennai or Mumbai or Gurgaon

High-Value Professional Experience And Skills

Cloud Migrations & Architecture
Proven leadership in designing and executing application/infrastructure migration projects (on-prem to cloud).
Expert in cloud architecture design and implementation to solve complex business problems and achieve team goals.
Strong familiarity with Microsoft Azure; AWS and GCP experience also valued.

DevOps & Automation
Experience building and managing automated CI/CD pipelines (preferred: GitHub Actions; also considered: Jenkins, Argo CD).
Proficient in Infrastructure-as-Code (IaC) (preferred: Terraform; also considered: Ansible, Puppet, ARM templates).
Expertise in managing containerized workloads (preferred: AKS and Helm; also considered: EKS, other Kubernetes distributions, Docker, JFrog).
Skilled in serverless computing (e.g., Logic Apps, Azure/AWS Functions, WebJobs, Lambda).

Monitoring, Security & Analytics
Proficient in logging and monitoring tools (e.g., Fluentd, Prometheus, Grafana, Azure Monitor, Log Analytics).
Strong understanding of cloud-native network security (e.g., Azure Policy, AD/RBAC, ACLs, NSG rules, private endpoints).
Exposure to big data analytics platforms (e.g., Databricks, Synapse).

Other Desirable Professional Experience And Skills
Strong technologist with the ability to advise on cloud best practices.
Skilled in multi-component system integration and troubleshooting.
Budgeting and cost optimization experience in cloud environments.
Experience in performance analysis and application tuning.
Expertise in secrets management (preferred: HashiCorp Vault; also: Azure Key Vault, AWS Secrets Manager).
Familiarity with Kubernetes service meshes (preferred: Linkerd; also: Istio, Traefik Mesh).
Scripting and coding proficiency in various environments (e.g., Bash/sh, PowerShell, Python, Java).
Familiar with tools like Jira, Confluence, Azure Storage Explorer, MySQL Workbench, Maven.

Basic Professional Experience And Skills
Solid understanding of the SDLC, change control processes, and related procedures.
Hands-on experience with source control and code repository tools (e.g., Git/GitHub/GitLab, VS Code, SVN).
Ability to articulate and present technical solutions to both technical and non-technical audiences.

Required Education And Professional Experience
5+ years of overall professional IT experience.
2+ years of hands-on experience in DevOps/Site Reliability Engineering (SRE) roles on major cloud platforms.
Bachelor's degree in Engineering, Computer Science, or IT; advanced degrees preferred.
Industry certifications in Cloud Architecture or Development (preferred: Azure; also: AWS, GCP).

Skills
Any cloud experience (Azure/AWS)
Terraform, Kubernetes, any CI/CD tool
Security and code quality tools: Wiz, Snyk, Qualys, Mend, Checkmarx, Dependabot, etc. (experience with any of these)
Secrets management: HashiCorp Vault / Akeyless

Ashwini P, Recruitment Manager
TEKSALT | A Pinch of Us Makes All the Difference
[An E-Verified & WOSB Certified Company]
Healthcare | Pharma | Manufacturing | Insurance | Financial | Retail
Mobile: +91-9945165022 | Email: ashwini.p@teksalt.com
www.teksalt.com
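As a concrete, purely illustrative example of the secrets-management skill listed above, here is a minimal sketch that reads a secret from HashiCorp Vault with the hvac client. The Vault address, KV v2 mount point, and secret path are assumptions.

```python
# Hedged sketch: VAULT address fallback, the "kv" mount point, and the
# apps/payments/db path are assumptions.
import os

import hvac


def get_database_password() -> str:
    client = hvac.Client(
        url=os.environ.get("VAULT_ADDR", "https://vault.example.internal:8200"),
        token=os.environ["VAULT_TOKEN"],
    )
    secret = client.secrets.kv.v2.read_secret_version(
        mount_point="kv",
        path="apps/payments/db",
    )
    # KV v2 nests the payload under data -> data.
    return secret["data"]["data"]["password"]


if __name__ == "__main__":
    # Avoid printing real secrets; just confirm retrieval worked.
    print("Retrieved password of length:", len(get_database_password()))
```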

Posted 5 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Security represents the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft’s mission and bold ambitions to ensure that our company and industry is securing digital technology platforms, devices, and clouds in our customers’ heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world.

We are looking for a skilled Security Engineer focused on detecting and responding to threats against Microsoft’s environment. This role is part of Microsoft’s CDO (Cyber Defense Operations).

About CDO - Cyber Defense Operations
An organization led by Microsoft’s Chief Information Security Officer, CDO enables Microsoft to deliver the most trusted devices and services. CDO’s vision is to ensure all information and services are protected, secured, and available for appropriate use through innovation and a robust risk framework.

The Security Engineer will support incident response and conduct forensic investigations to identify, contain, and resolve security threats; implement countermeasures to address evolving risks and ensure the resilience of enterprise systems; and collaborate with cross-functional teams to drive platform hardening, enforce security maintenance protocols, and execute vulnerability remediation procedures effectively. Your expertise will play a pivotal role in protecting Microsoft’s operations, ensuring compliance, and strengthening our security posture.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities
Monitor, detect, and respond to security incidents with a focus on phishing threats, using tools like Microsoft Defender for Office and Azure Sentinel.
Conduct in-depth incident analysis and forensic investigations to determine root causes and escalate valid threats.
Analyze and interpret data from SIEM consoles, correlating events across multiple sources to identify potential threats.
Investigate and respond to social engineering attempts, contributing to awareness training and user education.
Utilize scripting skills (KQL, PowerShell, Python, shell scripting) to automate detection and response workflows.
Work with DLP, AV, FIM, web/email proxies, and other security tools to ensure comprehensive protection across the organization.
Collaborate with cross-functional teams to improve security posture and support compliance initiatives.
Document incidents, procedures, and playbooks with clarity and precision.
Stay current with evolving threat landscapes and contribute to continuous improvement of SOC processes.

Qualifications
At least 3 years in SOC roles focusing on identifying security vulnerabilities, conducting forensics, social engineering, threat modelling, and security architecture.
Hands-on experience with incident analysis.
Understanding of Windows internals, Linux and macOS.
Understanding of various attack methods, vulnerabilities, exploits, and malware.
Good understanding of SIEM consoles.
Social engineering: given that humans are the weakest link in the security chain, an analyst's expertise can help with awareness training.
Security assessments of network infrastructure, hosts and applications: another element of risk management.
Forensics: investigation and analysis of how and why a breach or other compromise occurred.
Troubleshooting: the skill to recognize the cause of a problem.
DLP, AV, FIM, web proxy, email proxy, etc.: a comprehensive understanding of the tools utilized to protect the organization.
Scripting knowledge in KQL, PowerShell, Python, and general batch/shell scripting.
Excellent written and verbal communication skills.
Security certifications such as Network+, CySA+, or CCNA are highly desirable.
Experience with Azure Sentinel and Microsoft Defender for Office is a plus.
Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include but are not limited to the following specialized security screening: this position will be required to pass the Microsoft background check upon hire/transfer and every two years thereafter.

Preferred Qualifications
Leadership, empathy, interpersonal and communication skills.
3+ years' experience in identifying security vulnerabilities, software development lifecycle, large-scale computing, modeling, cyber security, and anomaly detection.
TCP/IP, firewalls, computer networking, routing and switching: an understanding of the fundamentals, i.e. the language, protocols and functioning of the internet.
MS degree in Computer Science, Risk Management, Cyber Security, or a related field, OR equivalent experience.
AI experience in security will be preferred.

Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances.
If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
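To illustrate the "automate detection and response workflows" responsibility in this posting, here is a hedged, standard-library-only Python sketch that triages a raw email for common phishing signals. The heuristics and keyword list are assumptions made for illustration; they are not Microsoft tooling or a production detection rule.

```python
# Illustrative triage sketch using only the Python standard library.
# The keyword list and heuristics are assumptions, not a real detection rule.
from email import message_from_string
from email.utils import parseaddr

SUSPICIOUS_KEYWORDS = ("password reset", "urgent action", "verify your account")


def triage_message(raw_email: str) -> list[str]:
    """Return a list of reasons a message looks suspicious, if any."""
    msg = message_from_string(raw_email)
    findings = []

    # Authentication results: flag SPF/DKIM failures recorded by the receiving MTA.
    auth_results = (msg.get("Authentication-Results") or "").lower()
    if "spf=fail" in auth_results or "dkim=fail" in auth_results:
        findings.append("SPF or DKIM failure in Authentication-Results")

    # Subject line: crude keyword match for common phishing lures.
    subject = (msg.get("Subject") or "").lower()
    if any(keyword in subject for keyword in SUSPICIOUS_KEYWORDS):
        findings.append(f"Suspicious subject line: {msg.get('Subject')}")

    # Header mismatch: Reply-To pointing at a different domain than From.
    from_domain = parseaddr(msg.get("From") or "")[1].rpartition("@")[2]
    reply_domain = parseaddr(msg.get("Reply-To") or "")[1].rpartition("@")[2]
    if reply_domain and reply_domain != from_domain:
        findings.append("Reply-To domain differs from From domain")

    return findings
```

In a SOC workflow, output like this would typically feed an enrichment or ticketing step rather than act as a verdict on its own.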

Posted 5 days ago

Apply

2.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Department: Information Technology
Location: Indore
Job Type: Full-Time

Job Summary:
We are seeking a skilled IT Automation Specialist to streamline and optimize our IT operations through effective automation solutions. The ideal candidate will be responsible for designing, implementing, and maintaining automation scripts, tools, and frameworks that enhance productivity, reduce manual intervention, and support scalability across IT systems and services.

Key Responsibilities:
Analyze current IT workflows, processes, and systems to identify opportunities for automation.
Develop, test, and deploy automation scripts and tools using technologies like PowerShell, Python, Ansible, Terraform, or Bash.
Implement and manage CI/CD pipelines to support DevOps practices.
Automate infrastructure provisioning, software deployment, configuration management, and monitoring processes.
Collaborate with cross-functional teams (DevOps, Network, Security, and Development) to understand automation needs.
Maintain and update documentation related to automated processes and workflows.
Monitor system performance and provide proactive solutions for system improvement.
Ensure all automation complies with security standards, change management, and best practices.
Troubleshoot issues related to automated systems and resolve them in a timely manner.
Stay current with industry trends and emerging technologies in IT automation and DevOps.

Qualifications and Skills Required:
Bachelor's degree in Computer Science, Information Technology, or a related field.
2+ years of experience in IT operations or system administration with a focus on automation.
Proficiency in scripting languages: PowerShell, Python, or Bash.
Experience with DevOps tools: Jenkins, Git, Docker, Kubernetes, Ansible, Terraform, etc.
Familiarity with cloud platforms like AWS, Azure, or Google Cloud.
Strong understanding of networking, server administration, and system security practices.
Excellent problem-solving skills and attention to detail.

Preferred:
Certifications such as AWS Certified DevOps Engineer, Microsoft Certified: Azure Administrator, or Red Hat Certified Engineer (RHCE).
Experience with ServiceNow or other ITSM tools for workflow automation.
Knowledge of Agile/Scrum methodology.

Soft Skills:
Strong analytical and communication skills.
Ability to work independently and as part of a team.
Time management and prioritization capabilities.
Willingness to learn and adapt in a fast-paced environment.
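As a minimal example of the routine-task automation this role describes, here is an illustrative Python sketch that logs a warning when monitored filesystems cross a usage threshold. The mount points and the 85% threshold are assumptions, not requirements from the posting.

```python
# Minimal sketch of a routine monitoring task; MONITORED_PATHS and the
# 85% threshold are assumptions.
import logging
import shutil

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

MONITORED_PATHS = ["/", "/var", "/home"]  # assumed mount points
THRESHOLD_PERCENT = 85


def check_disk_usage() -> None:
    for path in MONITORED_PATHS:
        usage = shutil.disk_usage(path)
        used_percent = usage.used / usage.total * 100
        if used_percent >= THRESHOLD_PERCENT:
            logging.warning("%s is %.1f%% full", path, used_percent)
        else:
            logging.info("%s is %.1f%% full", path, used_percent)


if __name__ == "__main__":
    check_disk_usage()
```

In practice a script like this would run from cron or a monitoring agent and raise a ticket or alert instead of just logging.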

Posted 5 days ago

Apply

5.0 years

0 Lacs

Andhra Pradesh, India

On-site

Data Engineer
Must have 5+ years of experience in the skills mentioned below.
Must Have: Big Data concepts, Python (core Python, able to write code), SQL, shell scripting, AWS S3.
Good to Have: Event-driven architecture/AWS SQS, microservices, API development, Kafka, Kubernetes, Argo, Amazon Redshift, Amazon Aurora.
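For illustration of the Python plus AWS S3 combination listed above, here is a hedged boto3 sketch that returns the most recently modified object keys under a prefix. The bucket name and prefix are placeholders.

```python
# Hedged sketch: the bucket name and prefix are placeholders.
import boto3


def list_recent_objects(bucket: str, prefix: str, limit: int = 20):
    """Return the most recently modified object keys under a prefix."""
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    objects = []
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        objects.extend(page.get("Contents", []))
    # Sort newest first and keep only the requested number of keys.
    objects.sort(key=lambda obj: obj["LastModified"], reverse=True)
    return [obj["Key"] for obj in objects[:limit]]


if __name__ == "__main__":
    for key in list_recent_objects("example-data-lake", "raw/events/"):
        print(key)
```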

Posted 5 days ago

Apply

7.0 years

20 - 24 Lacs

Kerala, India

Remote

Lead Data Engineer
Skills: Python, AWS, and SQL
Experience: 7+ years (relevant 5+ years)
Budget: 23 LPA
Location: Kochi/TVM/Remote
Notice period: Immediate

We are seeking a seasoned Lead Data Engineer with over 7 years of experience, primarily focused on Python and writing complex SQL queries in PostgreSQL. The ideal candidate will have a strong background in Python scripting, with additional knowledge of AWS services such as Lambda, Step Functions, and other data engineering tools. Experience in integrating data into Salesforce is a plus.

Job Description / Duties And Responsibilities
The role requires a skilled Lead Data Engineer with extensive experience in developing and managing complex data solutions using PostgreSQL and Python.
Ability to work as an individual contributor as well as the lead of a team, delivering end-to-end data solutions.
Ability to troubleshoot and resolve data-related issues efficiently.
Ensure data security and compliance with relevant regulations.
Ability to lead a small team (2 – 3 data engineers).
Maintain comprehensive documentation of data solutions, workflows, and processes.
Proactively communicate with the customer and team to provide guidance aligned with both short-term and long-term customer needs.
Experience working on at least 3 to 4 end-to-end data engineering projects.
Adherence to company policies, procedures, and relevant regulations in all duties.

Job Specification / Skills and Competencies
Deep expertise in writing and optimizing complex SQL queries in PostgreSQL.
Proficiency in Python scripting for data manipulation and automation.
Familiarity with AWS services like Lambda and Step Functions.
Knowledge of building semantic data layers between applications and backend databases.
Experience in integrating data into Salesforce and understanding of its data architecture is a valuable asset.
Strong troubleshooting and debugging skills.
Ability to build and test data processes integrating with external applications using REST API and SOAP calls.
Knowledge of real-time data processing and integration techniques.
Strong communication skills to interact with customers and team members effectively.
Adherence to the Information Security Management policies and procedures.

Skills: SOAP, data, Python, SQL, REST API, PostgreSQL, Salesforce, AWS
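As a small illustration of "complex SQL in PostgreSQL driven from Python", here is a hedged sketch using psycopg2 and a window function to compute per-account running totals. The connection details, the orders table, and its column names are assumptions.

```python
# Illustrative sketch: connection details, the "orders" table, and its
# columns (account_id, order_date, amount) are assumptions.
import psycopg2

QUERY = """
SELECT account_id,
       order_date,
       amount,
       SUM(amount) OVER (
           PARTITION BY account_id
           ORDER BY order_date
       ) AS running_total
FROM orders
WHERE order_date >= %s
ORDER BY account_id, order_date;
"""


def running_totals(since: str):
    """Return rows with a per-account running total of order amounts."""
    conn = psycopg2.connect(
        host="localhost", dbname="analytics", user="etl_user", password="***"
    )
    try:
        with conn.cursor() as cur:
            cur.execute(QUERY, (since,))  # parameterized to avoid SQL injection
            return cur.fetchall()
    finally:
        conn.close()


if __name__ == "__main__":
    for row in running_totals("2024-01-01"):
        print(row)
```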

Posted 5 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies