
4864 Shell Scripting Jobs - Page 3

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 - 13.0 years

10 - 20 Lacs

Hyderabad

Work from Office

Role & responsibilities

Role: DevOps Engineer - GCP
Location: Hyderabad
Job type: Permanent
Work mode: Onsite
Note: Looking for immediate joiners (15 days or less).

JD - Must have:
- Strong DevOps engineer with 5+ years of relevant experience.
- Experience writing code in Java/Python.
- Strong experience with Shell/Bash scripting.
- Proficiency with Jenkins/TeamCity/GitHub Actions, creating and managing CI/CD pipelines.
- Hands-on experience with configuration, deployment, support, and migration on any major cloud; GCP is a plus.
- Solid experience using configuration management frameworks (e.g., Ansible/Chef/Puppet).

Mandatory skills: DevOps, Java/Python, Shell/Bash scripting, Jenkins/TeamCity, Terraform, CI/CD.

Looking for DevOps resources with experience working on development projects. To apply, please share your resume at "vipul@vishusa.com".
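The shell/Bash scripting and CI/CD pipeline skills this posting asks for often come together in small gate scripts. A minimal sketch, assuming a hypothetical v<major>.<minor>.<patch> tag convention and "promote"/"block" outputs that are illustrative, not from the posting:

```shell
#!/bin/sh
# Hypothetical pipeline gate: only tags shaped like v1.2.3 are promoted.
# The tag convention and outputs are assumptions for illustration.
set -eu

is_release_tag() {
  # Succeeds only for tags like v1.2.3
  printf '%s\n' "$1" | grep -Eq '^v[0-9]+\.[0-9]+\.[0-9]+$'
}

gate() {
  if is_release_tag "$1"; then
    echo "promote"
  else
    echo "block"
  fi
}
```

A Jenkins, TeamCity, or GitHub Actions job could call `gate "$TAG"` and stop the deploy stage whenever it prints "block".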

Posted 1 week ago

Apply

3.0 - 6.0 years

4 - 7 Lacs

Pune

Work from Office

What you'll do:
- Salesforce platform production deployments in an agile environment.
- CI/CD tools (Bitbucket, Git, Sourcetree, Jenkins, Copado, Flosum).
- Ensure all Salesforce updates move seamlessly from development through staging to production.
- Environment management tools; Salesforce dev sandbox provisioning and deployments.
- Maintain customer/partner master data.
- Proven experience with business process optimization.
- Thorough understanding of Salesforce system limits to build at scale.
- Drive resolution of issues related to the Salesforce.com platform.
- Remain current on Salesforce.com best practices and technologies.
- Understanding of Salesforce backup tools.
- Excellent verbal and written communication, excellent interpersonal and collaboration skills, positive attitude, and team-player mentality.

What you'll bring:
- Salesforce.com certification.
- Thorough knowledge of the Salesforce security framework: profiles, permission sets, OWD, role hierarchy, sharing settings, etc.
- Experience with workflow, Process Builder, approvals, Lightning Flow, and report & dashboard configurations.
- Understanding of Apex, Visualforce, and JavaScript capabilities.
- Good knowledge of a scripting language such as PowerShell, Python, or Bash/Shell is preferred.

Posted 1 week ago

Apply

3.0 - 7.0 years

8 - 12 Lacs

Pune

Work from Office

Cloud Administrator (AWS Operations)

Pune, India | Enterprise IT - 22705

We seek a professional IT administrator to join our Pune, India office. This role is responsible for deploying and administering a cloud-based computing platform within ZS.

What You'll Do:
- Participate in AWS deployment, configuration, and optimization to provide a cloud-based platform that addresses business problems across multiple client engagements.
- Leverage information from the requirements-gathering phase and past experience to implement a flexible and scalable solution.
- Collaborate with team members involved in the requirements-gathering, testing, roll-out, and operations phases to ensure seamless transitions.
- Operate scalable, highly available, and fault-tolerant systems on AWS.
- Control the flow of data to and from AWS.
- Use appropriate AWS operational best practices.
- Configure and understand VPC networking and its nuances for AWS infrastructure.
- Configure and understand AWS IAM policies.
- Configure secured AWS infrastructure.
- Manage users and access for different AWS services.

What You'll Bring:
- Bachelor's degree in CS, IT, or EE.
- 1-2 years of experience in AWS administration, application deployment, and configuration management.
- Good knowledge of AWS services (EC2, S3, IAM, VPC, Lambda, EMR, CloudFront, Elastic Load Balancer, etc.).
- Basic knowledge of CI/CD tools like JetBrains TeamCity, SVN, Bitbucket, etc.
- Good knowledge of Windows and Linux operating system administration.
- Basic knowledge of RDBMS and database technologies like SQL Server and PostgreSQL.
- Basic knowledge of web server software like IIS or Apache Tomcat.
- Experience with a scripting language such as PowerShell, Python, or Bash/Shell.
- Knowledge of DevOps methodology.
- Strong verbal, written, and team presentation communication skills.

Perks & Benefits: ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth, and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths, and collaborative culture empower you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working, combining work from home and on-site presence at clients/ZS offices for the majority of the week.

Travel: Travel is a requirement at ZS for client-facing ZSers; the business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

Considering applying? At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.

To complete your application: Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered. NO AGENCY CALLS, PLEASE.

Posted 1 week ago

Apply

1.0 - 6.0 years

8 - 13 Lacs

Pune

Work from Office

Cloud Observability Administrator

Pune, India | Enterprise IT - 22685

ZS is looking for a Cloud Observability Administrator to join our team in Pune. You will configure various observability tools and create solutions that address business problems across multiple client engagements. You will leverage information from the requirements-gathering phase and past experience to design a flexible and scalable solution, and collaborate with team members involved in the requirements-gathering, testing, roll-out, and operations phases to ensure seamless transitions.

What You'll Do:
- Deploy, manage, and operate a scalable, highly available, and fault-tolerant Splunk architecture.
- Onboard various log sources (Windows/Linux/firewalls/network devices) into Splunk.
- Develop alerts, dashboards, and reports in Splunk; write complex SPL queries.
- Manage and administer a distributed Splunk architecture, with very good knowledge of the configuration files used for data ingestion and field extraction.
- Perform regular upgrades of Splunk and relevant apps/add-ons.
- Possess a comprehensive understanding of AWS infrastructure, including EC2, EKS, VPC, CloudTrail, Lambda, etc.
- Automate manual tasks using Shell/PowerShell scripting; knowledge of Python scripting is a plus.
- Good knowledge of Linux commands for server administration.
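"Automation of manual tasks using shell scripting" in an observability role typically means small checks like this sketch, which flags empty or oversized log files before they are onboarded into an indexer. The 10 MB threshold and the output labels are illustrative assumptions, not Splunk conventions:

```shell
#!/bin/sh
# Hypothetical pre-ingestion check: flag empty or oversized log files
# before onboarding. The 10 MB limit is an illustrative assumption.
set -eu

check_log() {
  f="$1"
  if [ ! -s "$f" ]; then
    echo "skip:$f"      # empty file: nothing to ingest
  elif [ "$(wc -c < "$f" | tr -d ' ')" -gt 10485760 ]; then
    echo "review:$f"    # oversized: review before indexing
  else
    echo "ok:$f"
  fi
}
```

A wrapper would loop `check_log` over a drop directory and route only the "ok" files to the forwarder's monitored path.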
What You'll Bring:
- 1+ years of experience in Splunk development and administration; Bachelor's degree in CS, EE, or a related discipline.
- Strong analytic, problem-solving, and programming ability.
- 1-1.5 years of relevant consulting-industry experience working on medium-to-large-scale technology solution delivery engagements.
- Strong verbal, written, and team presentation communication skills, with the ability to articulate results and issues to internal and client teams.
- Proven ability to work creatively and analytically in a problem-solving environment.
- Ability to work within a virtual global team environment and contribute to the timely delivery of multiple projects.
- Knowledge of observability tools such as Cribl, Datadog, or PagerDuty is a plus.
- Knowledge of AWS Prometheus and Grafana is a plus.
- Knowledge of APM concepts is a plus.
- Knowledge of Linux/Python scripting is a plus.
- Splunk certification is a plus.

Perks & benefits, travel expectations, and application requirements for this role are the same as for the ZS Cloud Administrator posting above.

Posted 1 week ago

Apply

3.0 - 5.0 years

8 - 13 Lacs

Noida, Gurugram, Bengaluru

Work from Office

The TCA practice has experienced significant growth in demand for engineering and architecture roles from CST, driven by client needs that extend beyond traditional data and analytics architecture skills. There is an increasing emphasis on deep technical skills such as strong expertise in Azure, Snowflake, Azure OpenAI, and Snowflake Cortex, along with a solid understanding of their respective functionalities. The individual will work on a robust pipeline of TCA-driven projects with pharma clients. This role offers significant opportunities for progression within the practice.

What You'll Do:
- Work on high-impact projects with leading clients, with exposure to complex technological initiatives.
- Learning support through organization-sponsored trainings and certifications.
- Collaborative and growth-oriented team culture, with a clear progression path within the practice.
- Opportunity to work on the latest technologies.
- Deliver client projects successfully, with a continuous-learning mindset and certifications in newer areas.
- Partner with project leads and AEEC leads to deliver complex projects and grow the TCA practice.
- Develop expert technical solutions for client needs, earning positive feedback from clients and team members.

What You'll Bring:
- 3-5 years of experience bringing technology acumen with an architectural bent to advance practice assets and client value delivery.
- Experience with pharma or life sciences data; familiarity with pharmaceutical datasets, including product, patient, or healthcare provider data, is a plus.
- Strong expertise in AWS, Python, PySpark, SQL, Azure (Azure Functions / Azure Data Factory / Azure API services / Azure DevOps), and Shell scripting, with a deep understanding of engineering principles, the ability to solve complex problems, and the ability to guide the team in technical decision-making while staying current with industry advancements.
- Desirable: domain experience or subject matter expertise beyond typical technology skills, preferably in pharma and life sciences.
- Able to work in an offshore/onshore setting, collaborating with multiple teams including client and ZS stakeholders across time zones; exposure to managing a small team of junior developers, QC'ing their deliverables, and coaching as needed.
- Excellent communication and client engagement skills.
- Desirable: industry-recognized certifications from accredited vendors like AWS, GCP, Databricks, Azure, or Snowflake.

Additional skills: Bachelor's or master's degree in computer science, information technology, or a related field; 3-4 years of relevant industry experience.

Posted 1 week ago

Apply

4.0 - 8.0 years

10 - 14 Lacs

Hyderabad

Work from Office

• BE/B.Tech in ECE or M.Tech in VLSI with 6 to 9 years of experience in analog mixed-signal verification
• Very good experience in Verilog-AMS, Verilog-A, WREAL, and modeling of analog blocks
• Very good experience with analog mixed-signal verification simulation tools
• Good experience in SystemVerilog and UVM methodologies
• Able to train team members and guide them to solutions for problems
• Good experience creating AMS verification environments, including building one from scratch
• Good experience in gate-level netlist simulation
• Experience in Python, Perl, or Shell scripting is an added advantage
• Good communication and documentation skills

Posted 1 week ago

Apply

3.0 - 6.0 years

2 - 6 Lacs

Mumbai

Work from Office

- Deploy, configure, and manage AWS cloud infrastructure (EC2, VPC, S3, RDS, IAM, CloudWatch, ELB, etc.)
- Set up and maintain CI/CD pipelines using Jenkins, GitLab CI, or GitHub Actions
- Implement Infrastructure as Code (IaC) using Terraform or AWS

Posted 1 week ago

Apply

3.0 - 7.0 years

5 - 9 Lacs

Mumbai

Work from Office

About Atos: Atos is a global leader in digital transformation with c. 78,000 employees and annual revenue of c. €10 billion. The European number one in cybersecurity, cloud, and high-performance computing, the Group provides tailored end-to-end solutions for all industries in 68 countries. A pioneer in decarbonization services and products, Atos is committed to a secure and decarbonized digital for its clients. Atos is an SE (Societas Europaea) listed on Euronext Paris. The purpose of Atos is to help design the future of the information space. Its expertise and services support the development of knowledge, education, and research in a multicultural approach and contribute to the development of scientific and technological excellence. Across the world, the Group enables its customers, employees, and members of society at large to live, work, and develop sustainably in a safe and secure information space.

Responsibilities:
- Design and implement data obfuscation strategies using Thales CipherTrust Tokenization, FPE, and Data Masking modules.
- Define and build reusable obfuscation templates and policies based on data classification, sensitivity, and business use cases.
- Apply obfuscation to PII, PCI, and other regulated data across databases, applications, files, APIs, and cloud services.
- Install, configure, and administer CipherTrust Manager, DSM, and related connectors.
- Develop and deploy integration patterns with enterprise systems (e.g., Oracle, SQL Server, Kafka, Snowflake, Salesforce, AWS, Azure).
- Automate policy deployment and secret rotation using APIs, CLI, or scripting tools (e.g., Ansible, Terraform, Python, Shell).
- Work with or integrate secondary data protection tools (e.g., DLP, IRM, MIP, etc.).
- Design, deploy, and manage PKI systems, including root and subordinate certificate authorities (CAs).
- Manage digital certificate lifecycles for users, devices, and services.
- Ensure secure storage and access controls for encryption/decryption and signing keys.
- Enable cross-platform key management and policy consistency for hybrid and multi-cloud environments.
- Align obfuscation patterns with internal data protection standards, classification schemes, and regulatory frameworks (GDPR, CCPA, DORA, SEBI, etc.).
- Provide obfuscation logs and audit evidence to support security assessments, audits, and compliance reviews.
- Implement monitoring and alerting for obfuscation control failures, anomalies, or unauthorized access attempts.
- Create detailed technical documentation, SOPs, and configuration guides.
- Train internal engineering teams and application owners on securely integrating with obfuscation services.
- Collaborate with data governance, security architecture, and DevSecOps teams to drive secure-by-design initiatives.

Knowledge, Skills, and Experience Required:
- 3-6 years of hands-on experience with the Thales CipherTrust Data Security Platform (Tokenization, DSM, FPE, Masking).
- Strong knowledge of data protection concepts: tokenization (deterministic and random), pseudonymization, static/dynamic masking, and encryption.
- Experience integrating obfuscation solutions with databases (Oracle, SQL Server, PostgreSQL, etc.), enterprise apps, and data pipelines.
- Proficiency in scripting and automation tools: Python, Shell, REST APIs, Ansible, CI/CD pipelines.
- Familiarity with key management, HSM integration, and data access policies.

Beneficial:
- Thales Certified Engineer / Architect (CipherTrust).
- CISSP, CISA, CDPSE, or CIPT will be a bonus.
- Cloud security certification (e.g., AWS Security Specialty, Azure SC-300).

Personal Characteristics:
- Strong analytical and problem-solving mindset.
- Ability to work independently in a fast-paced, global enterprise environment.
- Excellent documentation and communication skills.
- Comfortable collaborating with cross-functional teams (App Dev, Security, Compliance, Data Governance).
- Experience supporting enterprise data security transformations and data-centric protection strategies.

Posted 1 week ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Noida, Bengaluru

Work from Office

Description: We're looking for a seasoned DevOps Engineer who thrives in a fast-paced, highly collaborative environment and can help us scale and optimize our infrastructure. You'll play a critical role in building robust CI/CD pipelines, automating cloud operations, and ensuring high availability across our services. If you're AWS certified, hands-on with Chef, and passionate about modern DevOps practices, we want to hear from you.

Requirements:
• 8-12 years of hands-on DevOps/infrastructure engineering experience.
• Proven expertise in Chef for configuration management.
• AWS Certified Solutions Architect, Associate or Professional (required).
• Strong scripting skills in Python, Shell, YAML, or Unix scripting.
• In-depth experience with Terraform for Infrastructure as Code (IaC).
• Production-grade Docker and Kubernetes implementation experience.
• Deep understanding of CI/CD processes in microservices environments.
• Solid knowledge of monitoring and logging frameworks (e.g., ELK, Prometheus).

Job Responsibilities:
• Design and implement scalable CI/CD pipelines using modern DevOps tools and microservices architecture.
• Automate infrastructure provisioning and configuration using Terraform, Chef, and CloudFormation (if applicable).
• Work closely with development teams to streamline build, test, and deployment processes.
• Manage and monitor infrastructure using tools like the ELK Stack, Prometheus, Grafana, and New Relic.
• Maintain and scale Docker/Kubernetes environments for high-availability applications.
• Support cloud-native architectures with Lambda, Step Functions, and DynamoDB.
• Ensure secure, compliant, and efficient cloud operations within AWS.

What We Offer:
Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centers or client facilities!
Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft-skill trainings.
Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.
Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidized rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks, and a GL Club where you can have coffee or tea with colleagues over a game of table tennis, and we offer discounts at popular stores and restaurants!
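The pipeline and provisioning automation this role describes usually rests on small reusable helpers. A minimal retry-with-backoff sketch in shell; the function name and interface are illustrative assumptions, not from the posting:

```shell
#!/bin/sh
# Hypothetical retry helper for CI/CD and provisioning scripts: rerun a
# command until it succeeds or attempts are exhausted.
set -eu

retry() {
  # retry <max_attempts> <delay_seconds> <command...>
  max="$1"; delay="$2"; shift 2
  n=1
  while ! "$@"; do
    if [ "$n" -ge "$max" ]; then
      echo "giving up after $n attempts" >&2
      return 1
    fi
    n=$((n + 1))
    sleep "$delay"
  done
}
```

A deploy script might call something like `retry 5 10 curl -fsS https://service.example/health` (URL hypothetical) to wait for a rollout to become healthy before promoting it.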

Posted 1 week ago

Apply

10.0 - 14.0 years

20 - 30 Lacs

Noida, Pune, Bengaluru

Hybrid

Greetings from Infogain! We have an immediate requirement for a Big Data Engineer (Lead) position at Infogain India Pvt. Ltd. As a Big Data Engineer (Lead), you will be responsible for leading a team of big data engineers. You will work closely with clients and team members to understand their requirements and develop architectures that meet their needs. You will also be responsible for providing technical leadership and guidance to your team.

Mode of hiring: Permanent
Skills: (Azure OR AWS) AND (Apache Spark OR Hive OR Hadoop) AND (Spark Streaming OR Apache Flink OR Kafka) AND NoSQL AND (Shell OR Python)
Experience: 10 to 14 years
Location: Bangalore/Noida/Gurgaon/Pune/Mumbai/Kochi
Notice period: early joiners
Educational qualification: BE/BTech/MCA/M.Tech

Working experience: 12-15 years of broad experience working with enterprise IT applications in cloud platforms and big data environments.

Competencies & Personal Traits:
- Work as a team player
- Excellent problem analysis skills
- Experience with at least one cloud infrastructure provider (Azure/AWS)
- Experience building data pipelines using batch processing with Apache Spark (Spark SQL, DataFrame API) or Hive Query Language (HQL)
- Experience building streaming data pipelines using Apache Spark Structured Streaming or Apache Flink on Kafka and Delta Lake
- Knowledge of NoSQL databases; experience with Cosmos DB, RESTful APIs, and GraphQL is good to have
- Knowledge of big data ETL processing tools, data modelling, and data mapping
- Experience with Hive and Hadoop file formats (Avro/Parquet/ORC)
- Basic knowledge of scripting (shell/bash)
- Experience working with multiple data sources, including relational databases (SQL Server/Oracle/DB2/Netezza), NoSQL/document databases, and flat files
- Basic understanding of CI/CD tools such as Jenkins, JIRA, Bitbucket, Artifactory, Bamboo, and Azure DevOps
- Basic understanding of DevOps practices using Git version control
- Ability to debug, fine-tune, and optimize large-scale data processing jobs

You can share your CV at arti.sharma@infogain.com with the following details: total experience, relevant experience in big data, relevant experience in AWS or Azure cloud, current CTC, expected CTC, current location, and whether you are OK with the Bangalore location.
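"Basic knowledge of scripting (shell/bash)" in a data pipeline context usually means glue like this sketch, which splits a delimited extract into per-date files before a batch load. The column layout (id,date,value) is an assumption for illustration:

```shell
#!/bin/sh
# Hypothetical pre-load step: split a CSV extract (header: id,date,value)
# into one file per distinct date value in the second column.
set -eu

split_by_date() {
  # split_by_date <input.csv> <outdir>
  in="$1"; out="$2"
  mkdir -p "$out"
  # Skip the header row, then route each record to <outdir>/<date>.csv
  awk -F, -v dir="$out" 'NR > 1 { print > (dir "/" $2 ".csv") }' "$in"
}
```

The resulting per-date files map naturally onto date-partitioned tables in Hive or Spark.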

Posted 1 week ago

Apply

1.0 - 4.0 years

6 - 10 Lacs

Chennai

Work from Office

Description: A Support Engineer in DEP leads their support engineering team in identifying opportunities to innovate based on the tickets the team receives. They mentor and coach junior support engineers, and assist their manager in maintaining the team's productivity through automation and refinement of processes.

Some of the key job functions:
- Provide support for incoming tickets, including extensive troubleshooting, with responsibilities covering multiple products, features, and services
- Work on operations- and maintenance-driven coding projects, primarily in Java, Python, or shell scripts, and AWS technologies
- Software deployment support in staging and production environments
- Develop tools to aid operations and maintenance
- System and support status reporting
- Ownership of support for one or more payment products or components
- Customer notification, workflow coordination, and follow-up to maintain service level agreements
- Work with the dev team to hand off or take over active support issues and to create a team-specific knowledge base and skill set

Basic Qualifications:
- 2+ years of software development or technical support experience
- Experience troubleshooting and debugging technical systems
- Experience in Unix
- Experience scripting in modern programming languages
- Experience in cloud computing, preferably AWS

Preferred Qualifications:
- Knowledge of web services, distributed systems, and web application development
- Experience troubleshooting and maintaining hardware and software RAID
- Experience with REST web services, XML, and JSON

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, see amazon jobs / content / en / how-we-hire / accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company: ADCI MAA 15 SEZ | Job ID: A3034121
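Ticket triage of the kind described above is often automated with small shell tools. A sketch that summarizes error counts per component from a combined log; the log format (timestamp component level message) is an assumption for illustration:

```shell
#!/bin/sh
# Hypothetical triage helper: count ERROR lines per component so recurring
# issues surface first. Log format "timestamp component level message" is
# an illustrative assumption.
set -eu

error_summary() {
  # Prints "<count> <component>" lines, most frequent first
  awk '$3 == "ERROR" { n[$2]++ } END { for (c in n) print n[c], c }' "$1" |
    sort -rn
}
```

Feeding the output into a daily report is one way to "identify opportunities to innovate based on tickets": the components at the top are the automation candidates.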

Posted 1 week ago

Apply

4.0 - 6.0 years

4 - 7 Lacs

Chennai

Work from Office

Job Summary: We are looking for a DevOps Engineer to help us build functional systems that improve the customer experience. DevOps Engineer responsibilities include deploying product updates, identifying production issues, and implementing integrations that meet customer needs. If you have a solid background in software engineering and are familiar with Python, we'd like to meet you. It will be your responsibility to execute and automate operational processes quickly, accurately, and securely.

Job Requirements:
- Working experience with Docker and Kubernetes.
- Experience with tools like Sonar, AppScan, OWASP, and Nexus, with Jenkins integration.
- Experience in any one cloud (AWS/Azure/GCP).
- Scripting: Shell/Bash/Python.
- Working with continuous integration (CI) tools: Jenkins.
- Maintain services once they are live by measuring and monitoring availability, latency, and overall system health.
- Support the application CI/CD pipeline for promoting software into higher environments through validation and operational gating, and lead DevOps automation and best practices.
- Follow/maintain an agile methodology for delivering on project milestones.
- Excellent oral, presentation, and written communication skills.

Preferred Qualifications:
- Bachelor's degree in Computer Science or Information Technology with 5+ years of equivalent experience.
- Minimum of 3 years of DevOps experience setting up CI/CD pipelines for web applications in the cloud.
- Working knowledge of databases and SQL.
- Good understanding of container and serverless ecosystems.
- In-depth knowledge of the software development life cycle, logging, monitoring, and alerting.
- Proven implementation of creative technology solutions that advance the business.

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

Hyderabad

Work from Office

8+ years of experience in DevOps engineering, with at least 2 years focused on OpenText platforms. Experience with containerization (Docker, Kubernetes) and Infrastructure as Code (Terraform, CloudFormation). Provident fund.

Posted 1 week ago

Apply

6.0 - 11.0 years

8 - 12 Lacs

Bengaluru

Work from Office

Role Overview: Understand the product/module requirements and come up with the required test plans and scripts for automated testing. Interact with design teams, work toward resolving bugs, and ensure the product meets quality and usability expectations.

About the role:
- Review product documentation and provide feedback.
- Comply with the latest testing methodologies to deliver additional functionality with improved quality.
- Drive usability and management aspects with customer requirements in mind.
- Participate in design reviews and code inspections.
- Hands-on authoring of test cases and test code, combined with test execution.
- Design and develop automated tools and frameworks for highly optimized and effective test coverage.
- Ensure testability in product features and measure code coverage data regularly.
- Utilize innovative test technologies to develop the product's testing strategy.
- Perform debugging and troubleshooting in a variety of local and remote testing environments, as well as in the field as required.

About you: At least 6+ years of experience in product testing and development, including code review/bug analysis, development of test tools, designing test cases, and contributing to effective test planning. A proven track record of shipping high-quality, scalable software. Debugging experience and excellent problem-solving skills. Demonstrated ability to work effectively both within a team and cross-group to drive identification and resolution of issues and ship under tight deadlines, along with being able to drive features into the product. Must be highly motivated with a strong passion for and commitment to software quality.

Competencies / skill sets required for the role:
- Good understanding of the quality process/test lifecycle and defect lifecycle.
- Good understanding of the overall development process, including but not limited to Agile/Scrum.
- Knowledge of and hands-on experience with Angular and Python/Shell/Perl scripting.
- Knowledge of bug tracking systems like Bugzilla, JIRA, etc.
- Understanding of and hands-on experience with at least one configuration management tool such as Git, SVN, Perforce, or ClearCase.
- Knowledge of RESTful architecture and use of REST APIs is highly desirable.
- Understanding of cloud computing and virtualization; experience with AWS/Azure is strongly preferred.
- Experience using test management tools such as TestRail, X-Ray, etc.
- Good communication skills and ability to work well with others.
- Self-motivated, high-energy person with attention to detail.
- Ability to work with various stakeholders and resolve issues independently.

Posted 1 week ago

Apply

3.0 - 5.0 years

6 - 10 Lacs

Bengaluru

Work from Office

We also recognize the importance of closing the 4-million-person cybersecurity talent gap. We aim to create a home for anyone seeking a meaningful future in cybersecurity and look for candidates across industries to join us in soulful work. More at . Role Overview: Trellix is looking for SDETs who are self-driven and passionate to work on the Endpoint Detection and Response (EDR) line of products. Tasks range from manual and automated testing (including automation development), non-functional (performance, stress, soak), solution and security testing, and much more. Be part of the vision to ship top-class EDR solutions for On-Prem, Cloud or hybrid customers. Job Description About the role: Peruse requirements documents thoroughly and design relevant test cases that cover new product functionality and the impacted areas. Execute new feature and regression cases manually, as needed for a product release. Identify critical issues and communicate them effectively in a timely manner. Familiarity with bug tracking platforms such as JIRA, Bugzilla, etc. is helpful. Filing defects effectively, i.e., noting all the relevant details that reduce the back-and-forth and aid quick turnaround with bug fixing, is an essential trait for this job. Identify cases that are automatable, and within this scope segregate cases with high ROI from low-impact areas to improve testing efficiency. Hands-on experience with automation programming languages such as Python, Java, etc. is needed. Execute, monitor and debug automation runs. Author automation code to improve coverage across the board. Willing to explore and increase understanding of On-prem infrastructure. About you: 3-5 years of experience in an SDET role with a relevant degree in Computer Science or Information Technology is required. Show ability to quickly learn a product or concept, viz., its feature set, capabilities, functionality and nitty-gritty. Familiarity with Unix/Linux (preferably Redhat variants).
Proficiency in Python, PyTest, Behave, Robot Framework, Selenium, Bash/Shell Scripting. CI/CD pipeline integration (Jenkins, GitLab CI, GitHub Actions). Tools: JMeter, Gatling, Locust, or custom scripts to simulate high-volume telemetry data. Ability to work with security engineers, DevOps, and developers to define test criteria. Creating clear test plans, bug reports, and reproducibility steps. The following are good-to-have: Familiarity with parsing/ingesting data formats like JSON, Syslog etc. Familiarity with Virtualization technologies (e.g., Vagrant, VirtualBox). Familiarity with Cloud environments like AWS/GCP. Understanding of container technologies (Docker, Docker Compose etc.). Ability to design/test search queries, dashboards, and alerting (OpenSearch Dashboards/Kibana). Experience validating cluster health, scalability, and performance under load. Experience with on-prem environments (networking, firewalls, hardware constraints). Experience with tools like Prometheus, Grafana, or ELK/OpenSearch for monitoring pipelines.
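The "custom scripts to simulate high-volume telemetry data" requirement above can be sketched in a few lines of Python. The field names and event types here are illustrative assumptions, not any EDR product's real schema:

```python
import json
import random
import time

def make_telemetry_event(host_id: int, rng: random.Random) -> str:
    """Build one synthetic endpoint-telemetry event as a JSON line.

    The fields (ts, host, event_type, severity) are invented for
    illustration; a real generator would mirror the product's schema.
    """
    event = {
        "ts": time.time(),
        "host": f"host-{host_id:04d}",
        "event_type": rng.choice(["process_start", "net_conn", "file_write"]),
        "severity": rng.randint(0, 10),
    }
    return json.dumps(event)

def generate_batch(n: int, seed: int = 42) -> list[str]:
    """Generate n JSON lines, seeded so load runs are reproducible."""
    rng = random.Random(seed)
    return [make_telemetry_event(i % 100, rng) for i in range(n)]
```

Seeding the generator keeps load-test runs comparable across executions, which matters when measuring ingest-pipeline regressions.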

Posted 1 week ago

Apply

10.0 - 20.0 years

5 - 15 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Job Description: Key Responsibilities: Design and manage AWS infrastructure using CloudFormation and Terraform. Develop and maintain CI/CD pipelines using tools like Jenkins, GitLab CI, or CircleCI. Automate deployment processes and infrastructure provisioning. Monitor and troubleshoot cloud environments to ensure optimal performance. Collaborate with development and operations teams to streamline workflows. Implement security best practices and compliance standards. Document infrastructure setups, processes, and configurations. Required Skills & Experience: 10-15 years of experience in DevOps roles with a focus on AWS. Strong expertise in AWS services (EC2, ECS, Lambda, VPC, IAM, etc.). Hands-on experience with CloudFormation and Terraform for IaC. Proficiency in scripting languages like Python, Bash, or PowerShell. Experience with monitoring tools (CloudWatch, Prometheus, etc.). Familiarity with version control systems (Git) and agile methodologies. Excellent problem-solving and communication skills. Preferred Qualifications: AWS Certified DevOps Engineer or equivalent certification. Experience in regulated industries (e.g., finance, healthcare). Knowledge of containerization (Docker, Kubernetes).
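The CloudFormation work described above often benefits from generating templates programmatically. This is a minimal, illustrative sketch only; a production template would add encryption, tags, and bucket policies:

```python
import json

def s3_bucket_template(bucket_logical_id: str) -> str:
    """Return a minimal CloudFormation template (JSON) for one S3 bucket.

    Sketch under assumptions: only versioning is configured here, and
    the logical ID is caller-chosen.
    """
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            bucket_logical_id: {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "VersioningConfiguration": {"Status": "Enabled"}
                },
            }
        },
    }
    return json.dumps(template, indent=2)
```

Emitting templates from code keeps naming conventions and mandatory properties consistent across many stacks, which is the same goal Terraform modules serve.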

Posted 1 week ago

Apply

3.0 - 5.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Role Overview: Trellix is looking for SDETs who are self-driven and passionate to work on the Endpoint Detection and Response (EDR) line of products. Tasks range from manual and automated testing (including automation development), non-functional (performance, stress, soak), solution and security testing, and much more. Be part of the vision to ship top-class EDR solutions for On-Prem, Cloud or hybrid customers. About the role: Peruse requirements documents thoroughly and design relevant test cases that cover new product functionality and the impacted areas. Execute new feature and regression cases manually, as needed for a product release. Identify critical issues and communicate them effectively in a timely manner. Familiarity with bug tracking platforms such as JIRA, Bugzilla, etc. is helpful. Filing defects effectively, i.e., noting all the relevant details that reduce the back-and-forth and aid quick turnaround with bug fixing, is an essential trait for this job. Identify cases that are automatable, and within this scope segregate cases with high ROI from low-impact areas to improve testing efficiency. Hands-on experience with automation programming languages such as Python, Java, etc. is needed. Execute, monitor and debug automation runs. Author automation code to improve coverage across the board. Willing to explore and increase understanding of On-prem infrastructure. About you: 3-5 years of experience in an SDET role with a relevant degree in Computer Science or Information Technology is required. Show ability to quickly learn a product or concept, viz., its feature set, capabilities, functionality and nitty-gritty. Familiarity with Unix/Linux (preferably Redhat variants). Proficiency in Python, PyTest, Behave, Robot Framework, Selenium, Bash/Shell Scripting. CI/CD pipeline integration (Jenkins, GitLab CI, GitHub Actions). Tools: JMeter, Gatling, Locust, or custom scripts to simulate high-volume telemetry data.
Ability to work with security engineers, DevOps, and developers to define test criteria. Creating clear test plans, bug reports, and reproducibility steps. The following are good-to-have: Familiarity with parsing/ingesting data formats like JSON, Syslog etc. Familiarity with Virtualization technologies (e.g., Vagrant, VirtualBox). Familiarity with Cloud environments like AWS/GCP. Understanding of container technologies (Docker, Docker Compose etc.). Ability to design/test search queries, dashboards, and alerting (OpenSearch Dashboards/Kibana). Experience validating cluster health, scalability, and performance under load. Experience with on-prem environments (networking, firewalls, hardware constraints). Experience with tools like Prometheus, Grafana, or ELK/OpenSearch for monitoring pipelines. Company Benefits and Perks: We believe that the best solutions are developed by teams who embrace each other's unique experiences, skills, and abilities. We work hard to create a dynamic workforce where we encourage everyone to bring their authentic selves to work every day. We offer a variety of social programs, flexible work hours and family-friendly benefits to all of our employees. Retirement Plans Medical, Dental and Vision Coverage Paid Time Off Paid Parental Leave Support for Community Involvement
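The Syslog parsing skill listed above can be sketched as follows. This handles only a rough RFC 3164-style line; real ingestion pipelines also deal with RFC 5424, structured data, and timezone variants:

```python
import re

# Rough RFC 3164-style syslog line parser, for illustration only.
SYSLOG_RE = re.compile(
    r"^<(?P<pri>\d{1,3})>"
    r"(?P<ts>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) "
    r"(?P<msg>.*)$"
)

def parse_syslog(line: str):
    """Return a dict with facility/severity/host/message, or None."""
    m = SYSLOG_RE.match(line)
    if not m:
        return None
    pri = int(m.group("pri"))
    return {
        "facility": pri // 8,   # PRI encodes facility * 8 + severity
        "severity": pri % 8,
        "host": m.group("host"),
        "message": m.group("msg"),
    }
```

The PRI decoding (facility `pri // 8`, severity `pri % 8`) is defined by the syslog specification; everything else here is a simplification.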

Posted 1 week ago

Apply

4.0 - 7.0 years

10 - 14 Lacs

Bengaluru

Work from Office

We are seeking a skilled Golang Developer with 4+ years of hands-on software development experience. The ideal candidate will possess strong Go programming capabilities, deep knowledge of Linux internals, and experience working with service-oriented and microservice architectures. Key Responsibilities: 4+ years of software development experience. Good Go implementation capabilities. Understanding of different design principles. Good understanding of the Linux OS: memory, instruction processing, filesystem, system daemons etc. Fluent with the Linux command line and shell scripting. Working knowledge of servers (nginx, apache, etc.), proxy servers, and load balancing. Understanding of service-based architecture and microservices. Knowledge of AV codecs, MPEG-TS and adaptive streaming like DASH, HLS. Good understanding of computer networking concepts. Working knowledge of relational databases. Good analytical and debugging skills. Knowledge of git or any other source code management. Good to Have Skills: Working knowledge of Core Java and Python is preferred. Exposure to cloud computing is preferred. Exposure to API or video streaming performance testing is preferred. Preferred experience in Elasticsearch and Kibana (ELK Stack). Proficiency in at least one modern web front-end development framework such as React JS will be a bonus. Preferred experience with messaging systems like RabbitMQ.
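As a small illustration of the adaptive-streaming (HLS) knowledge listed above, the sketch below lists an HLS master playlist's variant streams by bandwidth. The attribute handling is deliberately naive: quoted CODECS values containing commas would need a real attribute parser.

```python
# Sketch: extract variant streams from an HLS master playlist (.m3u8).
# Assumes each #EXT-X-STREAM-INF line is followed by its variant URI.

def parse_master_playlist(text: str) -> list[dict]:
    """Return [{'bandwidth': int, 'uri': str}, ...] sorted by bandwidth."""
    variants = []
    lines = text.strip().splitlines()
    for i, line in enumerate(lines):
        if line.startswith("#EXT-X-STREAM-INF:"):
            attrs = line.split(":", 1)[1]
            bw = None
            for part in attrs.split(","):  # naive: breaks on quoted commas
                key, _, value = part.partition("=")
                if key.strip() == "BANDWIDTH":
                    bw = int(value)
            if bw is not None and i + 1 < len(lines):
                variants.append({"bandwidth": bw, "uri": lines[i + 1].strip()})
    return sorted(variants, key=lambda v: v["bandwidth"])
```

Sorting by bandwidth mirrors how an adaptive player ranks renditions when choosing a stream to switch to.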

Posted 1 week ago

Apply

7.0 - 10.0 years

22 - 30 Lacs

Bengaluru

Work from Office

Skills: SIEM tools (Splunk), SentinelOne, CASB tools (Netskope), DLP; security frameworks (OWASP, CWE, SANS, NIST); cloud security (Google, Microsoft, AWS); scripting languages like Python and PowerShell; security certifications (Security+, CEH, ECIH, GCIH); Wireshark and packet sniffing tools; programming (Java, Shell, JavaScript, Python); threat analysis; security event log analysis; people management and leadership. Education: BE/B.Tech/MCA/M.Sc./M.Tech in Computer Science or related discipline. Years of Experience: Minimum 7 to 10 years of experience in the security domain with exposure to Security Products. About the Team & Role: Position Overview: We are seeking a highly experienced and proactive Information Security Manager to lead our security initiatives. This role requires deep expertise in threat analysis, SIEM tools (Splunk, SentinelOne), and major security frameworks (OWASP, NIST). The ideal candidate will be responsible for identifying and mitigating technical risks, enhancing security tools, preparing intelligence reports, and providing technical leadership to a team. Candidates should have a minimum of 10 years in the security domain, strong experience with cloud security (Google, Microsoft, AWS), scripting (Python, PowerShell), and security event log analysis. Excellent communication and problem-solving skills are essential. Preferred qualifications include SIEM and vulnerability management experience, relevant security certifications (Security+, CEH, GCIH), and a Bachelor's degree in a related field. What will you get to do here?
Initial point of contact for client requirements and operational escalation Proactively identify technical and architectural risks, and work effectively to mitigate them Research, plan, and implement new tool features to make security tools more effective and add value Prepare and present Security Intelligence Reports Provide technical direction to Associates and Analysts within the team Assist in investigations of high-level, complex violations of information security policies Report security performance against established security metrics Provide deep subject matter expertise in architecture, policy, and operational processes for threat analysis and client escalation Provide guidance and support to 3rd-level technical support, including architecture review, rules and policy review/tuning Establish and communicate extent of threats, business impacts, and advise on containment and remediation Collaborate with other BUs on security gaps and educate teams on cybersecurity importance Manage platforms and vendors What qualities are we looking for? Minimum 10 years of experience in the security domain with exposure to Security Products Experience with methodologies and tools for threat analysis of complex systems, such as threat modeling SME knowledge of SIEM tools (Splunk), SentinelOne, CASB tools (Netskope), DLP, etc. Understanding of major security frameworks (OWASP, CWE, SANS, NIST, etc.)
SME-level knowledge of the current threat landscape Experience securing applications deployed on cloud platforms (Google, Microsoft, AWS) Knowledge and experience with scripting languages like Python, PowerShell Experience with security operations program development Proficiency with security event log analysis and various event logging systems Excellent verbal and written communication skills Ability to learn and retain new skills in a changing technical environment Willingness to learn new technology platforms SIEM experience and Vulnerability Management Recognized network and security certifications (Security+, CEH, ECIH, GCIH, etc.) Experience with Wireshark and packet sniffing tools Python development experience Bachelor's degree in Computer Science, Engineering, or a related field Strong proficiency in programming languages (Java, Shell, JavaScript, Python) Excellent problem-solving skills and attention to detail Strong communication and teamwork abilities Expertise with privacy software
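A tiny sketch of the security event log analysis mentioned above: counting failed SSH logins per source IP and flagging repeat offenders. The log format and the threshold are illustrative assumptions, not any specific SIEM's schema:

```python
from collections import Counter

def failed_login_sources(lines: list[str], threshold: int = 3) -> list[str]:
    """Return source IPs with at least `threshold` failed-login events.

    Assumes sshd-style lines like:
      'sshd[123]: Failed password for root from 10.0.0.5 port 22 ssh2'
    """
    counts = Counter()
    for line in lines:
        if "Failed password" in line and "from " in line:
            ip = line.split("from ", 1)[1].split()[0]
            counts[ip] += 1
    return [ip for ip, n in counts.items() if n >= threshold]
```

In practice this kind of triage rule would run inside a SIEM (Splunk search, correlation rule) rather than a standalone script, but the logic is the same: aggregate by source, compare against a threshold, alert.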

Posted 1 week ago

Apply

3.0 - 5.0 years

3 - 8 Lacs

Noida

Work from Office

Job Description - SSE for C/C++ Senior Engineer - C/C++ Excellent communication skills are a must. Good analytical skills so that he/she can understand the business. Minimum Experience - 3 Yrs, Maximum Experience - 5 Yrs. Mandatory Skills: Pro*C Writing makefiles Memory management Data Structures & Algorithms Shell scripts CUnit Oracle PL/SQL Experience of Software Engineering Process Experience in Design Familiarity with CMMI process Awareness of various Reviews and Best practices, adherence to Quality Guidelines, DevOps Awareness of Efforts Estimation, iterative scheduling Team handling and mentoring Desirable: Perl UML Chip manufacturing domain experience Experience of knowledge acquisition on existing applications from other teams. Experience in applications under Management & Maintenance [Defects, Change Requests]. Total Experience Expected: 02-04 years

Posted 1 week ago

Apply

9.0 - 13.0 years

15 - 30 Lacs

Chennai

Work from Office

At Visteon, the work we do is both relevant and recognized, not just by our organization, but by our peers, by industry-leading brands, and by millions of drivers around the world. That's YOUR work. And, as a truly global technology leader in the mobility space, focused on building cross-functional AND cross-cultural teams, we connect you with people who help you grow. So here, whatever we do is not a job. It's a mission. As a multi-billion-dollar leader of disruptive change in the industry, we are shaping the future, while enabling a cleaner environment. No other industry offers more fast-paced change and opportunity. We are in the midst of a mobility revolution that will completely change the way we interact with our vehicles, reduce the number of car accidents and fatalities, and make the world a cleaner place. Visteon is at the epicenter of this mobility revolution. Two major trends in the automotive industry, the shift to electric vehicles and vehicles with autonomous safety technologies, have created unique opportunities for Visteon. We are the only automotive provider focused exclusively on cockpit electronics, the fastest-growing segment in the industry. And our team is ready for YOU. To show the world what you can do. Detailed description: We seek a skilled GitLab Engineer with 10+ years of experience in Git and GitLab/GitHub administration, specifically focusing on On-Prem services. The ideal candidate will possess extensive expertise in resolving Git issues, configuring CI/CD pipelines, and integrating development workflows with IDEs. This role demands proficiency in GitLab CI/CD, strong troubleshooting abilities, and excellent communication skills. Job Title: GitLab Specialist/Architect Role Overview: This role demands a seasoned professional to design, implement, and maintain a robust GitLab ecosystem. You'll architect enterprise-level solutions, ensuring high availability, security, and performance.
You'll guide teams in adopting best practices for Git workflows, CI/CD pipelines, and infrastructure as code. You will lead the design and implementation of security hardening for GitLab environments. You are expected to be a subject matter expert, providing advanced troubleshooting and performance tuning. You will drive automation and integration strategies for Git-related tasks. You will develop and enforce Git governance policies and standards. You will also create and deliver training programs on GitLab best practices. You will be responsible for defining and implementing disaster recovery and business continuity plans for Git services. You will evaluate and recommend emerging Git technologies. You are expected to design and maintain scalable and highly performant Git architectures. You will lead security audits and risk assessments. You will implement advanced monitoring and performance-tuning strategies. You will manage and optimize complex CI/CD pipelines. You will develop and maintain comprehensive documentation. You will mentor and guide junior engineers and developers. You will integrate GitLab with other enterprise systems. You are expected to have a deep understanding of containerization and orchestration. You are expected to have experience with Google Repo tools or similar tools. Qualifications: • Bachelor's degree in Computer Science or related field (Master's preferred). • Minimum 10+ years of experience in Git administration and architecture. • Expertise in GitLab administration, CI/CD, and security best practices. • Proficiency in scripting languages (Python, Bash). • Strong Linux system administration and troubleshooting skills. • Experience with containerization (Docker, Kubernetes) and IaC. • Excellent communication and collaboration skills. • Experience with security hardening. • Experience with performance tuning. • Experience with large-scale Git environments. Desired Behaviors: • Proactive problem-solving and critical thinking. 
• Strong leadership and mentorship capabilities. • Commitment to continuous learning and improvement. • Ability to work effectively in a fast-paced environment. • Strong understanding of security best practices. Key Responsibilities: Git Specialist/Architect 1. Git Mastery & Strategic Vision: o Demonstrate mastery of Git internals, advanced version control strategies, and complex merge/rebase scenarios. o Develop and communicate the organization's Git strategy, aligning with business goals and industry best practices. 2. Advanced Issue Resolution & Enterprise Architecture: o Diagnose and resolve intricate Git-related problems across diverse environments, including production systems. o Design and implement scalable and secure Git architectures for complex, distributed environments. 3. GitLab/GitHub Administration & Security Architecture: o Design and implement advanced GitLab/GitHub configurations, including custom hooks, security policies, and enterprise-level integrations. o Design and implement security architectures for Git services, ensuring compliance with industry regulations and organizational policies. 4. CI/CD Pipeline Architecture & Optimization: o Architect and optimize complex CI/CD pipelines for high-performance deployments, including advanced testing strategies and IaC integrations. o Design and implement end-to-end CI/CD ecosystems, integrating Git with other development tools and platforms. 5. Automation & Integration Strategy & Implementation: o Develop and maintain complex automation scripts and integrations using relevant languages. o Develop and implement an automation strategy for Git-related tasks and workflows, and design integrations with enterprise systems. 6. Linux & System Engineering & Infrastructure Planning: o Expert-level troubleshooting and performance tuning of Linux systems related to Git services. o Plan and manage the capacity and performance of Git infrastructure to support future growth and develop disaster recovery plans. 7. 
Technical Leadership, Collaboration, & Mentorship: o Provide technical guidance and mentorship to development teams and junior engineers. o Lead cross-functional efforts and establish relationships with key stakeholders. 8. Tooling & Technology Evaluation & Integration: o Develop and implement custom integrations with IDEs and other development tools and evaluate and recommend new Git-related technologies. o Develop and maintain a roadmap for Git tooling and technology adoption. 9. Performance & Scalability Optimization & Planning: o Implement performance monitoring and tuning strategies for Git services, and architect highly performant and scalable Git environments. o Conduct performance testing and analysis. 10. Comprehensive Documentation & Training: o Create and maintain comprehensive technical documentation, including architecture diagrams, troubleshooting guides, and best practices. o Develop and deliver training programs on Git and related technologies. Preferred Qualifications Advanced Repository Management: • Extensive experience with advanced repository management tools, including Google's Repo Tool and other distributed repository management systems. Enterprise-Level On-Premise Git Infrastructure: • Proven experience designing, deploying, and maintaining large-scale, on-premise GitLab/GitHub installations and their underlying infrastructure. Strategic Cross-Functional Collaboration & Communication: • Exceptional communication and interpersonal skills, with a proven ability to collaborate effectively with diverse stakeholders at all organizational levels. Broad Version Control Expertise & Migration: • Deep understanding of various version control systems and experience leading large-scale version control migrations. Advanced Monitoring, Performance Engineering, & Security: • Expertise in designing and implementing advanced monitoring and performance tuning strategies for high-performance Git services. 
• Strong understanding of security best practices and experience with security audits and compliance. Containerization and Orchestration: • Experience with containerization technologies (Docker, Kubernetes) and their integration with Git workflows. Your benefits: • Possibilities for career development in an international company • Ability to work remotely a few days per week • Professional development based on up-to-date training opportunities and expertise • Competitive remuneration package with a pool of benefits and incentive schemes in place • Open and friendly working atmosphere • Social activities More Good Reasons to Work for Visteon Focusing on the Future Our company strategy focuses on leading the evolution of automotive digital cockpits and safety solutions. This strategy is driven by constant innovation, and you will support our efforts through your role. We are recognized across the industry for innovation. We have a strong book of business that is expected to drive future growth, along with a customer base that includes almost every automotive manufacturer in the world. Company Culture Working at Visteon is a journey in which our employees can develop their strengths and advance their careers while making a difference globally. Join us and help change the world and how we interact with our vehicles. Visteon is where the best technical talent creates the future. Learn more about our culture here. About Visteon Visteon is a global technology company serving the mobility industry, dedicated to creating a more enjoyable, connected and safe driving experience. The company's platforms leverage proven, scalable hardware and software solutions that enable the digital, electric, and autonomous evolution of our global automotive customers. Visteon products align with key industry trends and include digital instrument clusters, displays, Android-based infotainment systems, domain controllers, advanced driver assistance systems and battery management systems.
The company is headquartered in Van Buren Township, Michigan, and has approximately 10,000 employees at more than 40 facilities in 18 countries. Visteon reported sales of approximately $2.8 billion and booked $5.1 billion of new business in 2021. Learn more at www.visteon.com. Follow Us For more information about our company, technologies and products, follow us on LinkedIn, Twitter, Facebook, YouTube and Instagram. You can also follow our careers-focused channels on Twitter and Facebook to keep up with our latest job postings and the great work our employees are doing.

Posted 1 week ago

Apply

6.0 - 8.0 years

25 - 30 Lacs

Bengaluru

Work from Office

As a Data Architect - Data Engineering, you will play a key role in shaping the Client's Life Sciences modern data ecosystem that integrates Palantir Foundry, AWS cloud services, and Snowflake. Your responsibility will be to architect scalable, secure, and high-performing data pipelines and platforms that power advanced analytics, AI/ML use cases, and digital solutions across the enterprise. You will lead design efforts and provide architectural governance across data ingestion, transformation, storage, and consumption layers, ensuring seamless interoperability across platforms while enabling compliance, performance, and cost-efficiency. PURPOSE OF THE POSITION: The role aims to define and implement a future-ready, cloud-native, and platform-agnostic data architecture that unifies business, scientific, and operational data assets. You will enable the Client's Life Science organization to make informed, data-driven decisions at scale, while ensuring architectural alignment with security, governance, and business agility. ROLES & RESPONSIBILITIES: Design and maintain an integrated data architecture that connects Palantir Foundry, AWS services, and Snowflake, ensuring secure, scalable, and performant data access. Define and govern enterprise-wide standards for data modeling, metadata, lineage, and security across hybrid environments. Architect high-throughput data pipelines that support batch and real-time ingestion from APIs, structured/unstructured sources, and external platforms. Collaborate with engineering, analytics, and product teams to implement analytics-ready data layers across Foundry, Snowflake, and AWS-based lakehouses. Define data optimization strategies including partitioning, clustering, caching, and materialized views to improve query performance and reduce cost. Ensure seamless data interoperability across Palantir objects (Quiver, Workshop, Ontology), Snowflake schemas, and AWS S3-based data lakes.
Lead DataOps adoption including CI/CD for pipelines, automated testing, quality checks, and monitoring. Govern identity and access management using platform-specific tools (e.g., Foundry permissioning, Snowflake RBAC, AWS IAM). Drive compliance with data governance frameworks, including auditability, PII protection, and regulatory requirements (e.g., GxP, HIPAA). Evaluate emerging technologies (e.g., vector databases, LLM integration, Data Mesh) and provide recommendations. Act as an architectural SME during Agile Program Increment (PI) and Sprint Planning sessions. EDUCATION & CERTIFICATIONS: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. AWS/Palantir/Snowflake Architect or Data Engineer Certification (preferred). EXPERIENCE: 6-8 years of data engineering and architecture experience, with at least: Hands-on experience with Palantir Foundry (Quiver pipelines, Workshop, Ontology design). Working knowledge of AWS data services (S3, Glue, Redshift, Lambda, IAM, Athena). Working with Snowflake (warehouse design, performance tuning, secure data sharing). Domain experience in Life Science/Pharma or regulated environments preferred. TECHNICAL SKILLS: Data Architecture: Experience with hybrid data lake/data warehouse architecture, semantic modeling, and consumption layer design. Palantir Foundry: Proficiency in Quiver pipelines, Workshop applications, Ontology modeling, and Foundry permissioning. Snowflake: Deep understanding of virtual warehouses, time travel, data sharing, access controls, and cost optimization. AWS: Strong experience with S3, Glue, Redshift, Lambda, Step Functions, Athena, IAM, and monitoring tools (e.g., CloudWatch). ETL/ELT: Strong background in batch/streaming data pipeline development using tools such as Airflow, dbt, or NiFi. Programming: Python, SQL (advanced), Shell scripting; experience with REST APIs and JSON/XML formats.
Data Modeling: Dimensional modeling, third normal form, NoSQL/document structures, and modern semantic modeling. Security & Governance: Working knowledge of data encryption, RBAC/ABAC, metadata catalogs, and data classification. DevOps & DataOps: Experience with Git, Jenkins, Terraform/CloudFormation, CI/CD for data workflows, and observability tools. SOFT SKILLS: Strategic and analytical thinking with a bias for action. Strong communication skills to articulate architecture to both technical and business audiences. Ability to work independently and collaboratively in cross-functional, global teams. Strong leadership and mentoring capability for junior engineers and architects. Skilled in stakeholder management, technical storytelling, and influencing without authority. GOOD-TO-HAVE SKILLS: Experience with Data Mesh or federated governance models. Integration of AI/ML models or LLMs with enterprise data architecture. Familiarity with business intelligence platforms (e.g., Tableau, Power BI) for enabling self-service analytics. Exposure to vector databases or embedding-based search systems. Our Hyperlearning workplace is grounded upon four principles: Flexible work arrangements, free spirit, and emotional positivity; Agile self-determination, trust, transparency, and open collaboration; All support needed for the realization of business goals; Stable employment with a great atmosphere and ethical corporate culture
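The partitioning strategy named in this posting can be illustrated with a toy sketch: grouping records into daily partitions before writing them out, so queries can prune by date. The record layout and the daily partition key are assumptions for illustration:

```python
from collections import defaultdict
from datetime import date

def partition_by_day(records: list[dict]) -> dict[str, list[dict]]:
    """Group records into 'YYYY-MM-DD' partitions on an event_date field.

    Sketch only: real pipelines (Glue, Snowflake, Foundry) apply the same
    idea at the storage layer so engines can skip irrelevant partitions.
    """
    partitions = defaultdict(list)
    for rec in records:
        key = rec["event_date"].isoformat()
        partitions[key].append(rec)
    return dict(partitions)
```

Each partition would then land under its own prefix (e.g., an S3 path ending in the date key), which is what enables partition pruning and cheaper scans.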

Posted 1 week ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Bengaluru

Hybrid

Hi, We are looking for Production Support. 5 to 7 yrs exp with L1 / L2 Production Support • Knowledge of SQL, PL/SQL, Unix scripting (say 5/10) • Very good spoken and written communication • Good attitude and willingness to learn • Should have interacted / worked directly with clients (expected to join calls with stakeholders in the UK & provide updates) • Need to be based out of Bangalore and expected to work from office 2-3 days a week • Will need to work on shifts when needed (provide outside-office-hours on-call support) • Working knowledge of tools like Jira & Confluence (Good to have)

Posted 1 week ago

Apply

10.0 - 15.0 years

4 - 8 Lacs

Pune

Work from Office

In this role, as a subject matter expert, you will be the key player in our transformation and improvement programs. You will support us in connecting the dots between the digital world and core finance processes. This will require a thorough understanding of business processes, best practices, the latest developments, and the benefits new tools can bring to VI. Next to business-oriented consulting skills, strong communication skills are essential, enabling us to put the plan into action together with our Global teams. You are a team player and, at the same time, able to deliver independently. You have strong analytical skills and a proactive, can-do mentality. We'll give you the opportunity to grow your network, broaden your experience and expand your horizons in a fast-growing global environment. Your department and scope of activities: The scope of your role is global. Hierarchically, you will be part of the Global Transformation Office based in Veghel, the Netherlands and will report into the Global Process Owner Record-to-Report, who is leading transformation and change. We foster a flexible yet critical approach, emphasizing an end-to-end mindset, deep process knowledge, and a strong understanding of the business. We are expected to be highly skilled professionals with a deep understanding of finance, business, and technology. The role requires a combination of strategic thinking, analytical skills, and technical knowledge to design and implement solutions that support the organization's financial objectives. Your role & responsibilities: Process Focus: Advisor to a broad range of Stakeholders both in and outside finance. Process Improvement: Drive standardization and initiate improvements within Record to Report, using end-to-end expertise to enhance processes and tools. Cross-Functional Guidance: Provide expertise on Record to Report processes and offer guidance to related areas like Source to Pay, Lead to Cash, and Hire to Retire.
KPI Management: Monitor and drive performance based on defined KPIs. Technology Focus: Finance Architecture: Contribute to developing and managing finance architecture, including processes, systems, and data, to align with business goals. Solution Implementation: Collaborate with IT and cross-functional teams to deliver technically sound, sustainable financial solutions. Change & Risk Management: Stay updated on new technological developments, manage architecture changes, and advise on priorities and risks. Continuous improvement focus: Identify, evaluate and drive opportunities for process optimization. General Global Alignment: Collaborate with global teams, including peers in the US and India, on Record to Report transformation projects. Qualifications Education: Master's degree in finance, Accounting, Business, or a related field (MBA or relevant certifications preferred); Experience: At least 10 years of working experience in record to report; Experience with financial systems and processes, especially with modern ERP / EPM solutions (e.g., Oracle Cloud EPM/ERP, SAP); Proven success in leading or participating in transformational finance projects, ideally in a global, multi-entity organization; Experienced in analyzing, redesigning, and implementing finance processes using best practices, with exposure to modern digital tools like Cloud platforms, AI, RPA, and Power Automate being a plus. Skills: Strong analytical and problem-solving skills; Exceptional communication skills, capable of explaining complex concepts to both technical and non-technical stakeholders; Excellent interpersonal skills, confident in building lasting business relationships; You have a result-oriented mindset, are independent, pro-active, innovative and take ownership; Proficient in implementing continuous improvement methodologies such as PDCA, Kaizen, and Lean principles to drive operational excellence; Be fluent in English (written and verbal)

Posted 1 week ago

Apply

6.0 - 11.0 years

15 - 30 Lacs

Noida, Pune, Bengaluru

Hybrid

Informatica Developer
Experience: 5 to 12 years
Location: Hyderabad, Bangalore, Pune, Noida, Kolkata, Chennai, Mumbai
Skills: Informatica PowerCenter, ETL, SQL, RDBMS concepts, shell scripting, Java, and Python

Informatica Developer with 5-7 years of experience in ETL, RDBMS concepts, Informatica, shell scripting, Java, and Python to support the applications in the Surveillance area.

Duties and Responsibilities
Support the applications in the Surveillance area.
Build and enhance applications in the Surveillance area as needed.
Support and develop ETL data loads using Informatica and shell scripts.
Work with users and stakeholders, including application owners for upstream systems, to resolve any support issues.
Take full ownership of issues that arise: provide analysis, escalate as and when necessary, and take them through to resolution within defined SLAs.

Interested candidates, share your CV at himani.girnar@alikethoughts.com with the details below:
Candidate's name-
Email and Alternate Email ID-
Contact and Alternate Contact no-
Total exp-
Relevant experience-
Current Org-
Notice period-
CCTC-
ECTC-
Current Location-
Preferred Location-
Pancard No-
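The duties above include supporting ETL data loads driven from shell scripts. A minimal sketch of what such a wrapper might look like, assuming Informatica's `pmcmd` CLI is on the PATH; the integration service, domain, folder, and workflow names are placeholders, not details from the posting:

```shell
#!/usr/bin/env sh
# Hypothetical wrapper that kicks off an Informatica workflow via pmcmd
# and retries a few times on transient failure. PMCMD is overridable so
# the logic can be exercised without a real Informatica install.
PMCMD="${PMCMD:-pmcmd}"

run_workflow() {
    # $1 = repository folder, $2 = workflow name
    folder=$1
    workflow=$2
    attempt=1
    while [ "$attempt" -le 3 ]; do
        # -wait blocks until the workflow completes, so the exit code
        # reflects the workflow's success or failure.
        if "$PMCMD" startworkflow -sv IntSvc_dev -d Domain_dev \
            -u "$INFA_USER" -p "$INFA_PASS" \
            -f "$folder" -wait "$workflow"; then
            echo "workflow $workflow finished OK (attempt $attempt)"
            return 0
        fi
        echo "attempt $attempt failed; retrying" >&2
        attempt=$((attempt + 1))
        sleep 5
    done
    echo "workflow $workflow failed after 3 attempts" >&2
    return 1
}
```

In practice a scheduler (cron, Control-M, Autosys) would call such a wrapper per load, with credentials sourced from a secured file rather than plain environment variables.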

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Featured Companies