
3148 Elasticsearch Jobs - Page 21

JobPe aggregates these listings for easy access; applications are submitted directly on the original job portal.

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

JOB PURPOSE: Reporting to the Director, DevSecOps & SRE, the DevSecOps Engineer will design, implement, and monitor enterprise-grade, secure, fault-tolerant infrastructure, and will define and evolve Build & Release best practices by working within teams and educating stakeholder teams. We believe you bring experience of Operations and Security using DevOps, with strong analytical and automation skills that let you deliver the expected benefits to the business and its digital products. Building and deploying distributed applications and big data pipelines in the cloud excites you. You will work with GCP and AWS; Jenkins, Groovy scripting, shell scripting, Terraform, Ansible, or equivalents are among the tools you have used in the past. This is an exciting opportunity to influence and build the DevSecOps framework for leading Manufacturing platforms in the Autonomous Buildings space, working with the latest technologies in a cloud-based environment within a multi-disciplinary team of platform architects, tech leads, data scientists, data engineers, and insight specialists.

JOB RESPONSIBILITIES:
- Design, implement, and monitor enterprise-grade, secure, fault-tolerant infrastructure.
- Define and evolve Build & Release best practices by working within teams and educating stakeholder teams; these practices should support traceability and auditability of change.
- Ensure continuous availability of the DevOps tools supporting SCM and release management, including source control, containerization, continuous integration, and change management (Jenkins, Docker, JIRA, SonarQube, Terraform, Google/Azure/AWS Cloud CLI).
- Implement automated build-and-release pipeline frameworks.
- Implement DevSecOps tools and quality gates with SLOs (see the illustrative sketch after this listing).
- Implement SAST, DAST, IAST, and OSS tools in CI/CD pipelines.
- Implement automated change-management policies in the pipeline from Dev to Prod.
- Work with cross-functional, co-located teams on the design, development, and implementation of enterprise-scalable features that raise developer productivity, enable environment monitoring and self-healing, and facilitate autonomous delivery teams.
- Build infrastructure automation tools and frameworks leveraging Docker and Kubernetes; operate as a technical expert on DevOps infrastructure projects covering containerization, systems management, design, and architecture.
- Perform performance analysis and optimization, monitoring and problem resolution, upgrade planning and execution, and process creation and documentation.
- Integrate newly developed and existing applications into private, public, and hybrid cloud environments.
- Automate deployment pipelines in a scalable, secure, and reliable manner.
- Leverage application monitoring tools to troubleshoot and diagnose environment issues.
- Foster a culture of automation where any repetitive work is automated.
- Work closely with Cloud Infrastructure and Security teams to ensure organizational best practices are followed.
- Translate the non-functional requirements of Development, Security, and Operations architectures into a design that can be implemented with the software chosen for the project.
- Own the technical design and implementation for one or more software stacks of the DevSecOps team.
- Design and implement the distributed code repository.
- Implement automation pipelines supporting code compilation, testing, and deployment across the software components of the entire solution.
- Integrate monitoring of all software components in the solution, and mine the resulting data streams for actionable events to remediate issues.
- Implement configuration-management pipelines to standardize environments.
- Integrate DevSecOps software with credentials-management tools.
- Create non-functional test scenarios for verifying the DevSecOps software setup.

KEY QUALIFICATIONS & EXPERIENCE:
- At least 5 years of relevant working experience in DevSecOps, task automation, or GitOps.
- Demonstrated proficiency in installation, configuration, or implementation of one or more of the following: Jenkins, Azure DevOps, Bamboo, or similar; GitHub, GitLab, or similar; Jira, Asana, Trello, or similar; Ansible, Terraform, Chef Automate, or similar; Flux CD or similar; any test-automation software; any service-virtualization software.
- Operating-system administration experience with Ubuntu, Debian, Alpine, or RHEL.
- Technical documentation writing experience.
- DevOps engineering certification for on-premises or public cloud is advantageous.
- Experience with work planning and effort estimation is an advantage.
- Strong problem-solving and analytical skills.
- Strong interpersonal and written and verbal communication skills.
- Highly adaptable to changing circumstances, with an interest in continuously learning new skills and technologies.
- Experience with programming and scripting languages (e.g. Java, C#, C++, Python, Bash, PowerShell).
- Experience with incident and response management.
- Experience with Agile and DevOps development methodologies.
- Experience with container technologies and supporting tools (e.g. Docker Swarm, Podman, Kubernetes, Mesos).
- Experience working in cloud ecosystems (Microsoft Azure, AWS, Google Cloud Platform).
- Experience with configuration-management systems (e.g. Puppet, Ansible, Chef, Salt, Terraform).
- Experience with continuous integration/continuous deployment tools (e.g. Git, TeamCity, Jenkins, Artifactory); GitOps-based automation is a plus.
- Experience with GitHub Actions, GitHub security features, and GitHub Copilot.
- BE/B.Tech/MCA or an equivalent degree in Computer Science, or related practical experience.
- Must have 5+ years of working experience with Jenkins, GCP (or AWS/Azure), and Unix/Linux OS.
- Must have experience with automation/configuration-management tools (Jenkins with Groovy scripting, Terraform, Ansible, or an equivalent).
- Must have experience with Kubernetes (GKE, kubectl, Helm) and containers (Docker).
- Must have experience with JFrog Artifactory and SonarQube.
- Extensive knowledge of institutionalizing Agile and DevOps tools, including but not limited to Jenkins, Subversion, Hudson, etc.
- Networking skills (TCP/IP, SSL, SMTP, HTTP, FTP, DNS, and more).
- Hands-on experience with source-code management tools such as Git, Bitbucket, SVN, etc.
- Working experience with monitoring tools such as Grafana, Prometheus, Elasticsearch, Splunk, or similar.
- Experience with enterprise high-availability platforms and network and security on GCP.
- Knowledge of and experience in the Java programming language.
- Experience on large-scale distributed systems, with a deep understanding of design impacts on performance, reliability, operations, and security, is a big plus.
- Understanding of self-healing/immutable microservice-based architectures, cloud platforms, clustering models, and networking technologies.
- Great interpersonal and communication skills; a self-starter able to work well in a fast-paced, dynamic environment with minimal supervision.
- Must have a public cloud provider certification (Azure, GCP, or AWS); CNCF certification is a plus.
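As a concrete example of the quality-gate automation this role describes, here is a minimal, hedged Python sketch that fails a CI stage when a SonarQube quality gate is not passed. The server URL, token handling, and project key are placeholder assumptions; verify the /api/qualitygates/project_status endpoint against your SonarQube version.

```python
import os
import sys
import requests  # assumes the requests package is installed

# Placeholder values: supply your SonarQube server and project via env vars.
SONAR_URL = os.environ.get("SONAR_URL", "https://sonarqube.example.com")
SONAR_TOKEN = os.environ.get("SONAR_TOKEN", "")
PROJECT_KEY = os.environ.get("SONAR_PROJECT_KEY", "my-service")

def quality_gate_passed() -> bool:
    """Return True if the project's SonarQube quality gate is OK."""
    resp = requests.get(
        f"{SONAR_URL}/api/qualitygates/project_status",
        params={"projectKey": PROJECT_KEY},
        auth=(SONAR_TOKEN, ""),  # token auth: token as username, empty password
        timeout=30,
    )
    resp.raise_for_status()
    status = resp.json()["projectStatus"]["status"]
    print(f"Quality gate status for {PROJECT_KEY}: {status}")
    return status == "OK"

if __name__ == "__main__":
    # A non-zero exit code fails the surrounding Jenkins/CI stage.
    sys.exit(0 if quality_gate_passed() else 1)
```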

Posted 3 weeks ago

Apply

9.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Technical Architect - Cloud. Location: Hinjewadi, Pune. Experience: 9+ years.

BASIC PURPOSE: Lead Software Engineer - Cloud, responsible for the planning, design, and deployment automation of platform solutions on AWS. Instrumental in profiling and improving front-end and back-end application performance, mentoring team members, and taking end-to-end technical ownership of applications. Must stay on top of technology changes in the market and continuously look for opportunities to leverage new technology.

ESSENTIAL FUNCTIONS:
· Design, build, and implement performant and robust cloud platform solutions.
· Design and build data pipelines supporting analytical solutions.
· Provide level-of-effort estimates to support planning activities.
· Provide microservices architecture and design specifications.
· Fix defects found during implementation or reported by the software test team.
· Support software process definition and improvement initiatives, and the release process, working with the DevOps team on CI/CD pipelines built with Terraform and CDK as Infrastructure-as-Code. Execute security architectures for cloud systems.
· Understand and recognize the quality consequences that may result from improper performance of the role; maintain awareness of system defects that may occur in the area of responsibility, including product design, verification, validation, and testing activities.
· Mentor less experienced team members.
· Collaborate with Product Designers, Product Managers, Architects, and Software Engineers to deliver compelling user-facing products.

REPORTING RELATIONSHIPS:
· Reports to Technical Architect

QUALIFICATIONS:
· Bachelor's degree in computer science or a related engineering field, or equivalent experience in a related field.
· 10+ years of experience in cloud application development.
· Expert proficiency in JavaScript/TypeScript and/or Java with Spring Boot or Quarkus.
· Experience architecting and developing event-driven cloud-based solutions.
· Experience with AWS services including API Gateway, AppSync, Amplify, S3, CloudFront, Lambda, ECS/Fargate, Step Functions, SQS, EventBridge, Cognito, DynamoDB, Aurora PostgreSQL, OpenSearch/Elasticsearch, and AWS Pinpoint.
· Extensive experience developing applications in POSIX-compliant environments.
· Strong knowledge of containerization, with expert knowledge of either Docker or Kubernetes.
· Proficient in IAM security and AWS networking.
· Expert understanding of building and working with CI/CD pipelines.
· Experience designing, developing, and creating data pipelines, data warehouse applications, and analytical solutions, including machine learning.
· Deep cloud domain expertise in architecture, big data, microservice architectures, cloud technologies, data security and privacy, tools, and testing.
· Excellent programming skills in data pipeline technologies such as Lambda, Kinesis, S3, EventBridge, and MSK.
· Extensive experience with Service-Oriented Architecture, microservices, and virtualization, and with relational and non-relational databases.
· Excellent knowledge of building big data solutions using NoSQL databases.
· Experience with secure coding best practices and methodologies, vulnerability scans, threat modeling, and cyber-risk assessments.
· Familiarity with modern build pipelines and tools.
· Ability to understand business requirements and translate them into technical designs.
· Familiarity with Git code versioning tools.
· Good written and verbal communication skills.
· Great team player.

PREFERRED SKILLS:
· Experience with RDBMS is a plus.
· Experience in Java, .NET, or Python is a plus.
· Experience with big data solutions and analytics, using BI tools like Power BI or AWS QuickSight, is a plus.
· Experience with other cloud computing platforms.
· Azure or AWS certification, such as Solutions Architect Expert, Azure Fundamentals, Data Scientist, or Developer.

CRITICAL COMPETENCIES FOR SUCCESS:
* Analytical Skills: Demonstrate aptitude for analytical problem-solving and the ability to conceptually pull together patterns or connections that are not clearly related; assess relevant facts, identify alternative approaches, and determine the best course of action.
* Strategic Agility: Eagerness and ability to learn quickly and apply a flexible mindset in response to shifting dynamics, adversity, and/or change; continually push oneself, one's teams, and the business to learn and to generate new ideas.
* Disciplined Execution: Orientation toward a process-focused, decisive course of action that ensures client/customer needs are met with a high standard of excellence, urgency, and predictability; stay focused on the task at hand in the face of ambiguity, applying past experience and expertise to consistently deliver results.
* Organizational Collaboration: Ability to partner across organizational lines and work cooperatively within and outside one's own team to best serve client needs and exceed the expectations of end customers and clients; actively support key decisions and promote a spirit of teamwork that demonstrates commitment to the company.

WORK CONDITIONS:
· Must be comfortable learning, training, and engaging with others virtually through Microsoft Teams and Zoom.
· Must be able to perform the essential functions of the job, with or without reasonable accommodation.

Requirements: as listed under QUALIFICATIONS above.

Benefits: Health Insurance

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

About QualityKiosk Technologies: QualityKiosk Technologies is one of the world's largest independent Quality Engineering (QE) providers and digital transformation enablers, helping companies build and manage applications for optimal performance and user experience. Founded in 2000, the company specializes in quality engineering, QA automation, performance assurance, intelligent automation (IA) and robotic process automation (RPA), customer experience management, site reliability engineering (SRE), digital testing as a service (DTaaS), cloud, and data analytics solutions and services. With operations spread across 25+ countries and a workforce of more than 4,000 employees, the organization enables leading banking, e-commerce, automotive, telecom, insurance, OTT, entertainment, pharmaceutical, and BFSI brands to achieve their business transformation goals. QualityKiosk Technologies has been featured in reports by renowned global advisory firms, including Forrester, Gartner, The Everest Group, and Hurun Report, for its innovative, IP-led quality assurance solutions and the positive impact created for its clients in the fast-changing digital landscape. QualityKiosk, which offers automated quality assurance solutions for clients across geographies and verticals, counts 50 of the Indian Fortune 100 companies and 18 of the global Fortune 500 companies among its key clients. The company is banking on its speed of execution and technology advancement as key factors to drive 5X growth in the next five years, in both revenue and headcount.

Key Responsibilities:
- Design, implement, and optimize Elasticsearch clusters and associated applications.
- Develop and maintain scalable, fault-tolerant search architectures to support large-scale data.
- Troubleshoot performance and reliability issues within Elasticsearch environments.
- Integrate Elasticsearch with other tools such as Logstash, Kibana, and Beats.
- Implement search features such as auto-complete, aggregations, fuzziness, and other advanced search functionality (see the illustrative sketch after this listing).
- Manage Elasticsearch data pipelines and work on data ingestion, indexing, and transformation.
- Monitor, optimize, and ensure the health of Elasticsearch clusters and associated services.
- Conduct capacity planning and scalability testing for search infrastructure.
- Ensure high-availability and disaster-recovery strategies for Elasticsearch clusters.
- Collaborate with software engineers, data engineers, and DevOps teams to ensure smooth deployment and integration.
- Document solutions, configurations, and best practices.
- Stay current with new features of the Elastic Stack and apply them to enhance existing systems.
- Groom freshers and junior team members to take up responsibilities.

Required Skills & Qualifications:
- Experience: 3 years of hands-on experience with Elasticsearch and its ecosystem (Elasticsearch, Kibana, Logstash, Fleet Server, Elastic Agents, Beats).
- Core Technologies: strong experience with Elasticsearch, including cluster setup, configuration, and optimization.
- Search Architecture: experience designing and maintaining scalable search architectures and handling large datasets.
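As a minimal sketch of the search features named above (fuzziness, auto-complete, aggregations), assuming a local cluster, a hypothetical products index, and the official elasticsearch-py 8.x client:

```python
from elasticsearch import Elasticsearch  # pip install elasticsearch

# Assumed local cluster and a hypothetical "products" index.
es = Elasticsearch("http://localhost:9200")

# Fuzzy match: tolerates small typos in the user's query.
fuzzy = es.search(
    index="products",
    query={"match": {"title": {"query": "elasticsarch", "fuzziness": "AUTO"}}},
)

# Prefix-based auto-complete on the same field (match_phrase_prefix is the
# simplest approach; a completion suggester scales better for large indices).
autocomplete = es.search(
    index="products",
    query={"match_phrase_prefix": {"title": "elas"}},
)

# Aggregation: count documents per category without fetching hits.
aggs = es.search(
    index="products",
    size=0,
    aggs={"by_category": {"terms": {"field": "category.keyword"}}},
)

print(fuzzy["hits"]["total"], aggs["aggregations"]["by_category"]["buckets"])
```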

Posted 3 weeks ago

Apply

1.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Associate Software Developer

As a Fullstack SDE1 at NxtWave, you get first-hand experience of building applications and seeing them released quickly to NxtWave learners (within weeks); take ownership of the features you build and work closely with the product team; work in a great culture that continuously empowers you to grow in your career; and enjoy the freedom to experiment and learn from mistakes (Fail Fast, Learn Faster). NxtWave is one of the fastest-growing edtech startups, so you get first-hand experience in scaling the features you build as the company grows rapidly, building in a world-class developer environment by applying clean coding principles, sound code architecture, and more.

Responsibilities:
- Design, implement, and ship user-centric features spanning frontend, backend, and database systems under guidance.
- Define and implement RESTful/GraphQL APIs and efficient, scalable database schemas.
- Build reusable, maintainable frontend components using modern state-management practices.
- Develop backend services in Node.js or Python, adhering to clean-architecture principles.
- Write and maintain unit, integration, and end-to-end tests to ensure code quality and reliability (see the sketch after this listing).
- Containerize applications and configure CI/CD pipelines for automated builds and deployments.
- Enforce secure coding practices, accessibility standards (WCAG), and SEO fundamentals.
- Collaborate effectively with Product, Design, and engineering teams to understand and implement feature requirements.
- Own feature delivery from planning through production, and mentor interns or junior developers.

Qualifications & Skills:
- 1+ years of experience building full-stack web applications.
- Proficiency in JavaScript (ES6+), TypeScript, HTML5, and CSS3 (Flexbox/Grid).
- Advanced experience with React (Hooks, Context, Router) or an equivalent modern UI framework.
- Hands-on with state-management patterns (Redux, MobX, or custom solutions).
- Strong backend skills in Node.js (Express/Fastify) or Python (Django/Flask/FastAPI).
- Expertise in designing REST and/or GraphQL APIs and integrating with backend services.
- Solid knowledge of MySQL/PostgreSQL and familiarity with NoSQL stores (Elasticsearch, Redis).
- Experience using build tools (Webpack, Vite), package managers (npm/Yarn), and Git workflows.
- Skilled in writing and maintaining tests with Jest, React Testing Library, Pytest, and Cypress.
- Familiar with Docker, CI/CD tools (GitHub Actions, Jenkins), and basic cloud deployments.
- Product-first thinker with strong problem-solving, debugging, and communication skills.

Qualities we'd love to find in you: the attitude to always strive for the best outcomes and enthusiasm for delivering high-quality software; strong collaboration abilities and a flexible, friendly approach to working with teams; strong determination with a constant eye on solutions; creative ideas with a problem-solving mindset; openness to objective criticism and improving upon it; eagerness to learn and zeal to grow. Strong communication skills are a huge plus.

Work Location: Hyderabad

About NxtWave: NxtWave is one of India's fastest-growing ed-tech startups, revolutionizing the 21st-century job market by transforming youth into highly skilled tech professionals through its CCBP 4.0 programs, regardless of their educational background. NxtWave was founded by Rahul Attuluri (ex-Amazon, IIIT Hyderabad), Sashank Reddy (IIT Bombay), and Anupam Pedarla (IIT Kharagpur). Supported by Orios Ventures, Better Capital, and Marquee Angels, NxtWave raised $33 million in 2023 from Greater Pacific Capital. As an official partner of NSDC (under the Ministry of Skill Development & Entrepreneurship, Govt. of India) and recognized by NASSCOM, NxtWave has earned a reputation for excellence. Its recognitions include: Technology Pioneer 2024 by the World Economic Forum, one of only 100 startups chosen globally; 'Startup Spotlight Award of the Year' by T-Hub in 2023; 'Best Tech Skilling EdTech Startup of the Year 2022' by Times Business Awards; and 'The Greatest Brand in Education' in a research-based listing by URS Media. NxtWave founders Anupam Pedarla and Sashank Gujjula were honoured in the 2024 Forbes India 30 Under 30 for their contributions to tech education. NxtWave breaks learning barriers by offering vernacular content for better comprehension and retention, and now has paid subscribers from 650+ districts across India. Its learners are hired by 2,000+ companies, including Amazon, Accenture, IBM, Bank of America, TCS, Deloitte, and more. Know more about NxtWave: https://www.ccbp.in. Read more about us in the news: Economic Times | CNBC | YourStory | VCCircle.
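As a small sketch of the API-plus-tests workflow in the responsibilities above, assuming FastAPI and pytest (both named in the posting); the /courses route and its schema are hypothetical:

```python
from fastapi import FastAPI, HTTPException
from fastapi.testclient import TestClient  # requires httpx to be installed
from pydantic import BaseModel

app = FastAPI()

class Course(BaseModel):
    id: int
    title: str

# Hypothetical in-memory store standing in for a real database.
COURSES = {1: Course(id=1, title="Intro to Fullstack")}

@app.get("/courses/{course_id}", response_model=Course)
def get_course(course_id: int) -> Course:
    course = COURSES.get(course_id)
    if course is None:
        raise HTTPException(status_code=404, detail="Course not found")
    return course

# A unit test in the style the posting asks for (run with pytest).
client = TestClient(app)

def test_get_course():
    resp = client.get("/courses/1")
    assert resp.status_code == 200
    assert resp.json()["title"] == "Intro to Fullstack"

def test_missing_course_returns_404():
    assert client.get("/courses/999").status_code == 404
```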

Posted 3 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public- and private-sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and the adoption of new technology.

Your Role and Responsibilities: As a Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding on the most suitable data management systems and identifying the crucial data required for insightful analysis. You'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.

Preferred Education: Master's Degree

Required Technical and Professional Expertise:
- Expertise in designing and implementing scalable data warehouse solutions on Snowflake, including schema design, performance tuning, and query optimization (see the sketch after this listing).
- Strong experience building data ingestion and transformation pipelines using Talend to process structured and unstructured data from various sources.
- Proficiency in integrating data from cloud platforms into Snowflake using Talend and native Snowflake capabilities.
- Hands-on experience with dimensional and relational data modelling techniques to support analytics and reporting requirements.

Preferred Technical and Professional Experience:
- Understanding of optimizing Snowflake workloads, including clustering keys, caching strategies, and query profiling.
- Ability to implement robust data validation, cleansing, and governance frameworks within ETL processes.
- Proficiency in SQL and/or shell scripting for custom transformations and automation tasks.
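A hedged sketch of the kind of Snowflake batch transformation described above, using the snowflake-connector-python package; credentials come from environment variables, and the warehouse, database, and table names are illustrative assumptions:

```python
import os
import snowflake.connector  # pip install snowflake-connector-python

# Placeholder credentials: supply real values via environment variables.
conn = snowflake.connector.connect(
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    warehouse="ANALYTICS_WH",   # assumed warehouse name
    database="RAW",             # assumed database
    schema="SALES",             # assumed schema
)

try:
    cur = conn.cursor()
    # A simple batch transformation: deduplicate staged rows into a
    # reporting table. Table and column names are illustrative only.
    cur.execute("""
        INSERT INTO REPORTING.SALES.ORDERS_CLEAN
        SELECT DISTINCT order_id, customer_id, amount, order_date
        FROM RAW.SALES.ORDERS_STAGE
        WHERE order_date >= DATEADD(day, -1, CURRENT_DATE)
    """)
    print(f"Rows inserted: {cur.rowcount}")
finally:
    conn.close()
```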

Posted 3 weeks ago

Apply

0.0 - 3.0 years

0 Lacs

Gurugram, Haryana

On-site

Location: Gurugram, Haryana, India. This job is associated with 2 categories. Job Id: GGN00001808. Information Technology. Job Type: Full-Time. Posted Date: 08/05/2025.

Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what's next. Let's define tomorrow, together.

Description: United's Digital Technology team designs, develops, and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions.

Our Values: At United Airlines, we believe that inclusion propels innovation and is the foundation of all that we do. Our Shared Purpose, "Connecting people. Uniting the world.", drives us to be the best airline for our employees, customers, and everyone we serve, and we can only do that with a truly diverse and inclusive workforce. Our team spans the globe and is made up of diverse individuals all working together with cutting-edge technology to build the best airline in the history of aviation. With multiple employee-run "Business Resource Group" communities and world-class benefits like health insurance, parental leave, and space-available travel, United is truly a one-of-a-kind place to work that will make you feel welcome and accepted. Come join our team and help us make a positive impact on the world.

Job overview and responsibilities: United Airlines' Customer Technology Platform department partners with business and technology leaders across the company to create services and applications across key airline functions such as booking, check-in, payment, reservation systems, and operational systems that support United's commercial and digital channels. As a senior software developer, you will be responsible for the development of mission-critical applications while working with a team of developers. You will design, develop, document, test, and debug new and existing applications, with a focus on delivering cloud-based solutions. You will use leading-edge technologies and enterprise-grade integration software daily, and will be relied upon to help take this team to the next level from a technological standpoint.
- Participate in the full development life cycle, including requirements analysis and design
- Serve as technical expert on development projects
- Write technical specifications based on conceptual design and stated business requirements
- Support, maintain, and document software functionality
- Identify and evaluate new technologies for implementation
- Analyze code to find causes of errors and revise programs as needed
- Participate in software design meetings and analyze user needs to determine technical requirements
- Consult with end users to prototype, refine, test, and debug programs to meet needs
- Recognized as an expert in the field, knowledgeable about emerging trends and industry practices
- Conduct the most complex and vital work critical to the organization
- Work without supervision, with complete latitude for independent judgment
- May mentor less experienced peers and display leadership as needed

This position is offered on local terms and conditions. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded. This position is for United Airlines Business Services Pvt. Ltd., a wholly owned subsidiary of United Airlines Inc.

Qualifications - Required:
- Bachelor's degree or higher in Computer Science, Computer Engineering, Electrical Engineering, Management Information Systems, and/or equivalent work experience
- 5+ years of experience in the design, development, documentation, testing, and debugging of new and existing software systems and/or applications, for market sale or for large-scale proprietary software for internal use
- 5+ years of experience with software development languages and tools: C#, C++, ASP.NET/MVC, Web API, WCF, MS Visual Studio, MS TFS, GitHub
- Strong knowledge of the Microsoft .NET Framework, Microsoft .NET Core, SQL, NoSQL, and design patterns
- Excellent knowledge of object-oriented systems design, application development, messaging, and in-memory distributed caching platforms (like Couchbase)
- Proficiency in software development best practices such as continuous integration, unit/integration testing, and code reviews
- Thorough knowledge of and experience with the Microsoft technology stack: Windows Server OS (2012, 2016), IIS 7.5-10.0, MSMQ, .NET Framework 4.x
- 1+ year of experience with cloud computing: AWS, Azure, .NET Core 2.x
- 1+ year of experience with NoSQL databases such as Couchbase, MongoDB, or Elasticsearch, and with distributed queues such as Apache Kafka
- 1+ year of experience with AWS, OpenShift, Kubernetes, and Docker containers
- Worked closely with the architect on the development of applications
- 3 years of IT experience developing business-critical applications
- Successful completion of an interview is required to meet job qualifications
- Reliable, punctual attendance is an essential function of the position
- Must be legally authorized to work in India for any employer without sponsorship
- Must be fluent in English (written and spoken)

Preferred:
- Master's degree in Computer Science or Information Systems
- Airline industry experience
- Common Use (CUTE, CUPPS, BCBP, AIDX) and IATA industry experience
- Experience with PCI DSS
- Experience with functional programming

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Principal / Senior Software Engineer (Java). Job Location: Baner, Pune, Maharashtra. Domain: Security.

About The Role: Are you a passionate Software Engineer with a proven track record of solving complex problems and being at the forefront of innovation? Pursuing a career at our client will allow you to write code and manipulate data in ways that have never been done before, driving automation of threat detection and response for one of the world's fastest-growing industries. You will lead the creation, testing, and deployment of cutting-edge security technology for enterprise customers across the globe. Above all else, this role will allow you to work with and learn from some of the most talented people in the business, as well as contribute directly to the growth and success of the company.

The Everyday Hustle:
- Research and develop creative solutions across a wide range of cutting-edge technologies to continuously evolve our client's platform.
- Create REST APIs and integrations between various products to improve and automate our customers' threat detection.
- Manage the continuous integration and deployment processes of complex technologies.
- Perform code reviews to ensure consistent improvement.
- Proactively automate and improve all stages of the software development lifecycle.
- Interface closely with various parts of the business, both internally and externally, to ensure all users are leveraging the product with ease and to its full potential.
- Provide training and support to other team members, and cultivate a culture of constant collaboration.

Do you have what it takes?
- 5+ years of software development experience in Java, Spring Boot, and microservices.
- Proficiency in the English language, both written and verbal.
- Knowledge of, and the ability to apply, application security principles in our software process.

What makes you uncommon?
- Hands-on experience with one or more of the following technologies: Elasticsearch, Kafka, Apache Spark, Logstash, Hadoop/Hive, TensorFlow, Kibana, Athena/Presto/BigTable, Angular, React.
- Experience with cloud platforms such as AWS, GCP, or Azure.
- Solid understanding of unit testing and of continuous integration and deployment practices.
- Experience with Agile methodology.
- Higher education/relevant certifications.

(ref:hirist.tech)

Posted 3 weeks ago

Apply

4.0 - 10.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Big Data Architect with at least 4 years of relevant experience, you will be responsible for designing and implementing scalable solutions using technologies such as Spark, Scala, Hadoop MapReduce/HDFS, Pig, Hive, and AWS cloud computing. Your role will involve hands-on experience with tools like EMR, EC2, Pentaho BI, Impala, Elasticsearch, Apache Kafka, Node.js, Redis, Logstash, StatsD, Ganglia, Zeppelin, Hue, and Kettle. Additionally, you should have sound knowledge of machine learning, ZooKeeper, Bootstrap.js, Apache Flume, Fluentd, collectd, Sqoop, Presto, Tableau, R, Grok, MongoDB, Apache Storm, and HBase.

To excel in this role, you must have a strong development background in both Core Java and Advanced Java. A Bachelor's degree in Computer Science or Information Technology, or an MCA, is required, along with at least 4 years of relevant experience. Your analytical and problem-solving skills will be put to the test as you tackle complex data challenges. Attention to detail is crucial, and you should possess excellent written and verbal communication skills. This position requires you to work independently while also being an effective team player.

With 10 years of overall experience, you will be based in either Pune or Hyderabad, India. Join us in this dynamic role, where you will have the opportunity to contribute to cutting-edge data architecture solutions.

Posted 3 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

As Animaker's growth skyrockets, we aim to establish Animaker as the world's leading platform for animation and video creation. We are seeking individuals who are enthusiastic about making a significant impact through consistent effort, bringing positive change to our projects, teams, company, and the industry as a whole. Our ultimate goal is to revolutionize the world of video creation, one project at a time.

The ideal candidate will be responsible for building scalable web applications and integrating UI elements with server-side functionality. They will also handle security and data-protection measures, ensuring the integrity and confidentiality of our systems. Additionally, the candidate will integrate frontend components with both SQL and NoSQL databases, guaranteeing consistent data flow and optimal performance. Key responsibilities include developing reusable code and libraries, optimizing code quality, and meeting the specified requirements.

The candidate should possess a minimum of 4 years of backend development experience in Python, demonstrating programming proficiency and familiarity with ORM libraries and frameworks such as Django and Flask. Experience with essential tools like Git, Jenkins, Ansible, and Rundeck is required. The candidate should also have expertise in cross-browser and cross-platform front-end development, using technologies like HTML, CSS, and JavaScript within frameworks such as Angular. Proficiency in SQL and experience with relational databases such as PostgreSQL and MySQL are crucial. Familiarity with Elasticsearch for full-text search and indexing (see the sketch after this listing), as well as experience with containerization using Docker, is highly desirable. A solid understanding of CI/CD pipelines, including tools like Jenkins and Docker, is necessary for streamlining development and deployment. Basic proficiency in JavaScript for integrating backend services with frontend components is also required, and familiarity with version-control systems like Git and collaborative workflows will be advantageous for effective teamwork and project management.
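As a brief sketch of the Flask-plus-Elasticsearch combination the role calls for, assuming a local cluster and a hypothetical templates index; the field names are illustrative:

```python
from elasticsearch import Elasticsearch  # pip install elasticsearch
from flask import Flask, jsonify, request  # pip install flask

app = Flask(__name__)
es = Elasticsearch("http://localhost:9200")  # assumed local cluster

@app.route("/search")
def search():
    """Full-text search over a hypothetical 'templates' index."""
    q = request.args.get("q", "")
    resp = es.search(
        index="templates",
        query={"match": {"description": q}},
        size=10,
    )
    hits = [h["_source"] for h in resp["hits"]["hits"]]
    return jsonify(results=hits, total=resp["hits"]["total"]["value"])

if __name__ == "__main__":
    app.run(debug=True)
```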

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Navi Mumbai, Maharashtra

On-site

You should have expertise in Elasticsearch, Logstash, Kibana, database management, performance tuning, monitoring, Filebeat, and bitsets. Your responsibilities will include end-to-end implementation of the ELK Stack (Elasticsearch, Logstash, and Kibana). You should have a good understanding of:
- Elasticsearch clusters, shards, replicas, configuration, APIs, the local gateway, mapping, indexing, operations, and transaction logs;
- Lucene indexing, multiple indices, index aliases, and cross-index operations;
- configuration options, mappings, APIs, and available settings;
- the search query DSL, search components, aggregations, search types, and highlighting;
- Filebeat, bitsets, Lucene, aggregations, and nested document relations;
- cluster state recovery, low-level replication, low-level recovery, and shard allocation;
- performance tuning, focusing on data flow and memory allocation;
- Kibana and cluster monitoring (see the sketch after this listing);
- the Hadoop environment, infrastructure, and tuning;
- troubleshooting of Elasticsearch and its stack;
- operating systems, networks, and security;
- upgrades of Elasticsearch, Kibana, and Logstash versions.
You will collaborate with infrastructure, network, database, application, and business intelligence teams to ensure high data quality and availability, and your Elasticsearch troubleshooting skills should be excellent.
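A hedged sketch of routine cluster administration from the list above, assuming the elasticsearch-py 8.x client and illustrative index and alias names:

```python
from elasticsearch import Elasticsearch  # pip install elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed cluster endpoint

# Cluster monitoring: overall status plus shard counts.
health = es.cluster.health()
print(health["status"], health["active_shards"], health["unassigned_shards"])

# Index alias management: atomically repoint a read alias during a
# zero-downtime reindex (index names are hypothetical).
es.indices.update_aliases(actions=[
    {"remove": {"index": "logs-v1", "alias": "logs-read"}},
    {"add": {"index": "logs-v2", "alias": "logs-read"}},
])

# Kick off a reindex from the old index to the new one.
es.reindex(
    source={"index": "logs-v1"},
    dest={"index": "logs-v2"},
    wait_for_completion=False,  # returns a task id; useful for large indices
)
```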

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Cloud Application Developer at our organization, you will help design, develop, and maintain robust cloud-native applications delivered as-a-service on a cloud platform. Your responsibilities will include evaluating, implementing, and standardizing new tools and solutions to continuously improve the cloud platform. You will leverage your expertise to drive the organization's and department's technical vision within development teams, liaise with global and local stakeholders to influence technical roadmaps, and passionately contribute to hosting a thriving developer community. Encouraging contributions through inner- and open-sourcing will be a key aspect of your role.

To excel in this position, you should have exposure to good programming practices, including coding and testing standards. Passion for and experience in proactively investigating, evaluating, and implementing new technical solutions with continuous improvement are highly valued, as are a good development culture and familiarity with industry-wide best practices. A production mindset with a keen focus on reliability and quality is crucial, along with a desire to be part of a distributed, self-sufficient feature team with regular deliverables. You should be a proactive learner, continuously enhancing your skills in areas such as Scrum, data, and automation. Strong technical ability to monitor, investigate, analyze, and fix production issues is essential, as is the ability to ideate and collaborate through inner- and open-sourcing and to interact effectively with client managers, developers, testers, and cross-functional teams such as architects. Experience working in an Agile team and exposure to Agile/SAFe development methodologies are required, along with a minimum of 5 years of experience in software development and architecture.

In terms of technical skills, you should have good experience in design and development, including object-oriented programming in Python, cloud-native application development, APIs, and microservices. Familiarity with relational databases like PostgreSQL and the ability to build robust SQL queries are essential (see the sketch after this listing). Knowledge of tools such as Grafana for data visualization, Elasticsearch, and Fluentd, and experience hosting applications using containerization (Docker, Kubernetes), will be beneficial. Proficiency with CI/CD and DevOps tools like Git, Jenkins, and Sonar, good system skills with Linux OS and bash scripting, and an understanding of the cloud and cloud services are must-haves for this role.

Joining our team means being part of a company that values people as drivers of change and believes in making a positive impact on the future. We encourage creating, daring, innovating, and taking action. Our employees have the opportunity to engage in solidarity actions and contribute to reducing our carbon footprint through sustainable practices. Diversity and inclusion are core values that we uphold, and we are committed to ESG practices. If you are looking to be directly involved, grow in a stimulating environment, and make a difference, you will find a home with us.
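As one small example of the robust-SQL-queries requirement, a sketch using psycopg2 with a parameterized query and context-managed transactions; the DSN and the orders table are assumptions:

```python
import psycopg2  # pip install psycopg2-binary

# Placeholder DSN: substitute real connection details.
DSN = "host=localhost dbname=appdb user=app password=secret"

def find_recent_orders(customer_id: int, limit: int = 20):
    """Fetch a customer's recent orders using a parameterized query
    (never string formatting), closing resources deterministically."""
    with psycopg2.connect(DSN) as conn:   # commits or rolls back on exit
        with conn.cursor() as cur:
            cur.execute(
                """
                SELECT id, amount, created_at
                FROM orders               -- hypothetical table
                WHERE customer_id = %s
                ORDER BY created_at DESC
                LIMIT %s
                """,
                (customer_id, limit),
            )
            return cur.fetchall()

if __name__ == "__main__":
    for row in find_recent_orders(42):
        print(row)
```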

Posted 3 weeks ago

Apply

0 years

35 - 55 Lacs

Hyderabad, Telangana, India

On-site

Company: NxtHyre. Business Type: Startup. Company Type: Product. Business Model: B2B. Funding Stage: Series B. Industry: Artificial Intelligence. Salary Range: ₹35-55 Lacs PA.

Job Description: This is a permanent role with a valued client of NxtHyre in fintech.

About the Client: Founded in 2019, we recently raised our Series B, led by Innovius Capital with participation from Dell Technologies Capital, Sentinel Global, and existing investors including Venrock, NeoTribe Ventures, Engineering Capital, Workday Ventures, and KPMG Ventures.

We are looking for strong candidates with a passion for participating in our mission. Areas of responsibility (subject to change over time):

Responsibilities:
- Developing and managing data pipelines for ML and analytics
- Effectively analyzing and resolving engineering issues as they arise
- Implementing ML algorithms for textual categorization and information extraction
- Writing containerized microservices for serving the model in a production environment (see the sketch after this listing)
- Writing unit tests alongside development

Must-haves:
- Python programming expertise: data structures, OOP, recursion, generators, iterators, decorators, familiarity with regular expressions
- Working knowledge of and experience with a deep learning framework (PyTorch or TensorFlow); embedding representations
- Familiarity with SQL database interactions
- Familiarity with Elasticsearch document indexing and querying
- Familiarity with Docker and Dockerfiles
- Familiarity with REST APIs and JSON structure; Python packages like FastAPI
- Familiarity with git operations
- Familiarity with shell scripting
- Familiarity with PyCharm for development, debugging, and profiling
- Experience with Kubernetes

Desired:
- NLP toolkits like NLTK, spaCy, Gensim, scikit-learn
- Familiarity with basic natural-language concepts and handling: tokenization, lemmatization, stemming, edit distances, named entity recognition, syntactic parsing, etc.
- Good knowledge of and experience with a deep learning framework (PyTorch or TensorFlow)
- More complex operations with Elasticsearch: creating indices, indexable fields, etc.
- Good experience with Kubernetes
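A minimal sketch of the containerized model-serving microservice named above, using FastAPI (mentioned in the posting); the model loader below is a stand-in for a real PyTorch/TensorFlow artifact, and the label set is hypothetical:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Document(BaseModel):
    text: str

class Prediction(BaseModel):
    label: str
    score: float

def load_model():
    """Placeholder for loading a real PyTorch/TensorFlow model artifact."""
    def fake_classifier(text: str) -> Prediction:
        # Stand-in logic; a real model would run inference here.
        label = "invoice" if "amount" in text.lower() else "other"
        return Prediction(label=label, score=0.5)
    return fake_classifier

model = load_model()

@app.post("/classify", response_model=Prediction)
def classify(doc: Document) -> Prediction:
    return model(doc.text)

# Run with: uvicorn app:app --host 0.0.0.0 --port 8000
# Containerize with a Dockerfile whose entrypoint launches uvicorn.
```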

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

India

On-site

Front-End Developer - Credly & Faethm

As a Front-End Developer working on Credly and Faethm, you will play a key role in designing and delivering exceptional user experiences across our web applications. Working closely with product teams and other stakeholders, you will use modern React.js libraries, frameworks, and development patterns to build responsive, accessible, and maintainable interfaces. Your contributions will range from architecting scalable front-end solutions to integrating APIs, optimizing performance, and guiding a small team to produce high-quality, user-focused software.

Minimum Requirements:
- 5+ years of professional front-end development experience
- Proficiency in ES6, TypeScript, React.js, Redux, Node.js, HTML, CSS
- Strong knowledge of modular, maintainable, and scalable front-end architectures
- Experience with front-end performance optimization
- Familiarity with micro frontends and modern JavaScript design patterns
- Hands-on experience integrating RESTful services
- Familiarity with PostgreSQL and Elasticsearch

Responsibilities:
- Architect, develop, and maintain scalable front-end solutions using React.js and related technologies
- Guide and mentor team members on technical best practices
- Ensure usability, accessibility, and performance standards are met
- Strategize and build reusable code libraries, tools, and frameworks
- Integrate and optimize third-party APIs (e.g. authentication, LMS)
- Estimate, plan, and deliver features on schedule
- Collaborate with product teams and stakeholders to align on requirements and drive solutions

Nice to Have:
- Experience with Progressive Web Apps (PWAs)
- Experience with Ruby on Rails

Who We Are: At Pearson, our purpose is simple: to help people realize the life they imagine through learning. We believe that every learning opportunity is a chance for a personal breakthrough. We are the world's lifelong learning company. For us, learning isn't just what we do. It's who we are.

Pearson is an Equal Opportunity Employer and a member of E-Verify. Employment decisions are based on qualifications, merit, and business need. Qualified applicants will receive consideration for employment without regard to race, ethnicity, color, religion, sex, sexual orientation, gender identity, gender expression, age, national origin, protected veteran status, disability status, or any other group protected by law. We actively seek qualified candidates who are protected veterans and individuals with disabilities as defined under VEVRAA and Section 503 of the Rehabilitation Act. If you are an individual with a disability and are unable or limited in your ability to use or access our career site as a result of your disability, you may request reasonable accommodations by emailing TalentExperienceGlobalTeam@grp.pearson.com.

Job: Software Development. Job Family: TECHNOLOGY. Organization: Enterprise Learning & Skills. Schedule: FULL_TIME. Workplace Type: On-site. Req ID: 17935

Posted 3 weeks ago

Apply

5.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public- and private-sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and the adoption of new technology.

Your Role and Responsibilities: As a Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding on the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.

Preferred Education: Master's Degree

Required Technical and Professional Expertise:
- Bachelor's degree in computer science, supply chain, information systems, or a related field.
- Minimum of 5-7 years of experience in Master Data Management or a related field.
- 3-5 years of SAP/ERP experience, with strong exposure to at least two of the functional areas described above.
- Proven experience leading an MDM team.
- Strong knowledge of data governance principles, best practices, and technologies.
- Experience with data profiling, cleansing, and enrichment tools.
- Ability to work with cross-functional teams to understand and address their master data needs.
- Proven ability to build predictive analytics tools using Power BI, Spotfire, or similar.

Preferred Technical and Professional Experience:
- You thrive on teamwork and have excellent verbal and written communication skills.
- Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
- Ability to communicate results to technical and non-technical audiences.

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Join our Team

About this opportunity: The Head of Automation owns and leads automation strategy and execution, providing leadership and vision to the organization and collaborating closely with the other Heads of Department and individual contributors to ensure end-to-end management and successful delivery.

What you will do:
- Drive operational efficiency and productivity through quality automation models, aligning with SDE targets and boosting automation saturation in MS Networks.
- Focus on stable automation performance with reduced outages and stronger operational outcomes.
- Collaborate with SDAP for streamlined automation monitoring, issue tracking, and reporting.
- Align automation initiatives with BCSS MS Networks' AI/ML strategy.
- Enhance communication on automation benefits and their business impact.
- Manage O&M, lifecycle, and performance of SL Operate tools, ensuring clear automation SLAs and effective tracking.
- Contribute to service architecture strategies (EOE, AAA) to maximize automation value and roadmap alignment.
- Institutionalize best practices and automate internal team processes to reduce manual effort.

The skills you bring:
- 15+ years of experience in a managed services environment, with a minimum of 8+ years in Managed Services operations.
- University degree in Engineering, Mathematics, or Business Administration; an MBA is a plus.
- Strong grasp of managed services delivery and Ericsson SD processes.
- Deep understanding of operator business needs and service delivery models.
- Skilled in Ericsson process measurement tools and SL Operate/SDAP MSDP environments (eTiger, ACE, etc.).
- Technically proficient in OOAD, design patterns, and development on Unix/Linux/Windows with Java, JavaScript, databases, shell scripts, and monitoring tools like Nagios.
- Familiar with software component interactions and DevOps practices.
- Hands-on with automation tools (Enable, Blue Prism, MATE) and monitoring platforms (Grafana, Zabbix, Elasticsearch, Graylog).
- Strong experience with web/proxy/app servers (Tomcat, Nginx, Apache).

Why join Ericsson? At Ericsson, you'll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what's possible, building solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

Encouraging a diverse and inclusive organization is core to our values at Ericsson, which is why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Noida. Req ID: 770844

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Description and Requirements

Hybrid

"At BMC trust is not just a word - it's a way of life!"

We are an award-winning, equal opportunity, culturally diverse, fun place to be. Giving back to the community drives us to be better every single day. Our work environment allows you to balance your priorities, because we know you will bring your best every day. We will champion your wins and shout them from the rooftops. Your peers will inspire, drive, support you, and make you laugh out loud! We help our customers free up time and space to become an Autonomous Digital Enterprise that conquers the opportunities ahead - and are relentless in the pursuit of innovation!

The IZOT product line includes BMC's Intelligent Z Optimization & Transformation products, which help the world's largest companies to monitor and manage their mainframe systems. The modernization of the mainframe is the beating heart of our product line, and we achieve this goal by developing products that improve the developer experience, mainframe integration, the speed of application development, code quality, and application security, while reducing operational costs and risks. We acquired several companies along the way, and we continue to grow, innovate, and perfect our solutions on an ongoing basis.

BMC is looking for a talented DevOps Engineer to join our family - someone just as passionate about solving issues with distributed systems as they are about automating, coding, and collaborating to tackle problems.

Here is how, through this exciting role, YOU will contribute to BMC's and your own success: As a DevOps engineer, you will bring experience in Linux systems, networking, monitoring and automation, containerization, and cloud technologies, along with a proven interest in and experience of using software engineering to solve operational problems. Write software to automate API-driven tasks at scale; Node.js and Python preferred. Participate in SRE software engineering, writing code for the continuing reduction of human intervention in operational tasks and the automation of processes. Manage cloud provider infrastructure, system deployments, and product release operations. Monitor the application ecosystem: respond to incidents, correct and improve systems to prevent incidents, plan capacity, and own the resolution of Elasticsearch-related customer issues. Participate in 24x365 on-call schedules.

As every BMC employee, you will be given the opportunity to learn, be included in global projects, challenge yourself and be the innovator when it comes to solving everyday problems.

To ensure you're set up for success, you will bring the following skillset & experience: You can embrace, live and breathe our BMC values every day! 3-5 years of hands-on experience in a DevOps, SRE, or Infrastructure role. Thorough understanding of logging and monitoring tools: ELK Stack, Prometheus, Grafana, etc. Solid understanding of Linux system administration and basic networking. Experience with at least one scripting language (Python, Bash, etc.). Familiarity with DevOps tools such as Git, Jenkins, Docker, and CI/CD pipelines. You have experience using a public cloud: AWS, GCP, Azure, or OpenStack; AWS is preferred. You have experience working remotely with a fully distributed team, with the communication and adaptability it requires. You have experience mentoring and helping folks grow their abilities to use and contribute to the tooling you help build, and experience building public-cloud-agnostic software.

Our commitment to you! BMC's culture is built around its people. We have 6000+ brilliant minds working together across the globe. You won't be known just by your employee number, but for your true authentic self. BMC lets you be YOU!

If, after reading the above, you're unsure whether you meet the qualifications of this role but are deeply excited about BMC and this team, we still encourage you to apply! We want to attract talent from diverse backgrounds and experiences to ensure we face the world together with the best ideas!

BMC is committed to equal opportunity employment regardless of race, age, sex, creed, color, religion, citizenship status, sexual orientation, gender, gender expression, gender identity, national origin, disability, marital status, pregnancy, disabled veteran or status as a protected veteran. If you need a reasonable accommodation for any part of the application and hiring process, visit the accommodation request page.

BMC Software maintains a strict policy of not requesting any form of payment in exchange for employment opportunities, upholding a fair and ethical hiring process. At BMC we believe in pay transparency and have set the midpoint of the salary band for this role at 2,117,800 INR. Actual salaries depend on a wide range of factors that are considered in making compensation decisions, including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The salary listed is just one component of BMC's employee compensation package. Other rewards may include a variable plan and country-specific benefits. We are committed to ensuring that our employees are paid fairly and equitably, and that we are transparent about our compensation practices.

(Returnship@BMC) Had a break in your career? No worries. This role is eligible for candidates who have taken a break in their career and want to re-enter the workforce. If your expertise matches the above job, visit https://bmcrecruit.avature.net/returnship to learn more and find out how to apply.
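The automation responsibilities above - API-driven tasks and Elasticsearch-related operations - might look like the following minimal sketch. It assumes a cluster reachable at localhost and uses only the public _cluster/health REST endpoint; the thresholds and exit-code convention are illustrative:

```python
# Minimal sketch of API-driven operational automation: polling
# Elasticsearch cluster health and flagging degraded states.
# The endpoint URL and alerting convention are assumptions.
import sys
import requests

ES_URL = "http://localhost:9200"  # assumed cluster address

def check_cluster_health() -> int:
    resp = requests.get(f"{ES_URL}/_cluster/health", timeout=5)
    resp.raise_for_status()
    health = resp.json()
    status = health["status"]            # "green", "yellow", or "red"
    unassigned = health["unassigned_shards"]
    print(f"status={status} unassigned_shards={unassigned}")
    # Exit non-zero so a cron job or CI step can alert on degradation.
    return 0 if status == "green" else 1

if __name__ == "__main__":
    sys.exit(check_cluster_health())
```

Run from cron or a pipeline stage, a script like this turns a manual health check into the kind of "reduction of human intervention" the posting describes.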

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Who Are We?

Postman is the world's leading API platform, used by more than 40 million developers and 500,000 organizations, including 98% of the Fortune 500. Postman is helping developers and professionals across the globe build the API-first world by simplifying each step of the API lifecycle and streamlining collaboration - enabling users to create better APIs, faster. The company is headquartered in San Francisco and has an office in Bangalore, where it was founded. Postman is privately held, with funding from Battery Ventures, BOND, Coatue, CRV, Insight Partners, and Nexus Venture Partners. Learn more at postman.com or connect with Postman on X via @getpostman.

P.S.: We highly recommend reading The "API-First World" graphic novel to understand the bigger picture and our vision at Postman.

About Us

The Search Team at Postman is responsible for enabling users to quickly find and get started with the APIs they are looking for. Postman is growing at a rapid pace, and this manifests in an ever-increasing volume of data that users create and consume, within their teams and in the Public API Network. We focus on improving discovery and ease of consumption over this data.

The Role

We are looking for a Senior Engineer with 6+ years of experience, deep backend expertise in search and ETL systems, and a strong product mindset, to lead core initiatives on our search platform. In this role, you'll work at the intersection of infrastructure, relevance, and developer experience, designing systems that power search across the platform. You'll bring a bias for action, a strong backend foundation, and the curiosity to explore beyond traditional boundaries, including areas like high-performance web services, high-volume data pipelines, machine learning, and relevance tuning.

What You'll Do

Own the end-to-end architecture and roadmap of the search platform, consisting of distributed indexing pipelines, storage infrastructure, and high-performance web servers. Contribute to improving the relevance of search results through signal engineering and better data models. Keep the system performant and reliable to handle growing data volume and query traffic, while unblocking business requirements and managing risk. Collaborate with cross-functional stakeholders like Product Managers and Designers, as well as other teams, to drive initiatives. Uphold operational excellence in the team, showing a bias for action and user empathy. Quickly build functional prototypes to solve internal and external use-cases. Scale the technical abilities of engineers in the team and uphold quality through code reviews and mentorship.

About You

You have 6+ years of experience building applications in high-level programming languages (JavaScript, Python, Java, etc.); we code primarily in Python and JavaScript. You have worked on customer-facing search solutions and have hands-on experience with search systems like Elasticsearch, Apache Solr, or OpenSearch. Experience building systems to orchestrate batch and streaming pipelines using Apache Kafka, Kinesis, or Lambdas. Knowledge of a compute/data processing engine like Apache Spark, Apache Flink, or Ray. You have displayed strength in AI, Natural Language Processing, ranking systems, or recommendation systems. You possess strong Computer Science fundamentals (algorithms, networking, and operating systems) and are familiar with various programming tools, frameworks, and development practices. You write testable, maintainable code following SOLID principles that's easy to understand. You are an excellent communicator who can articulate technical concepts to product managers, designers, support, and other engineers.

What Else?

In addition to Postman's pay-on-performance philosophy, and a flexible schedule working with a fun, collaborative team, Postman offers a comprehensive set of benefits, including full medical coverage, flexible PTO, wellness reimbursement, and a monthly lunch stipend. Along with that, our wellness programs will help you stay in the best of your physical and mental health. Our frequent and fascinating team-building events will keep you connected, while our donation-matching program can support the causes you care about. We're building a long-term company with an inclusive culture where everyone can be the best version of themselves.

At Postman, we embrace a hybrid work model. For all roles based out of the San Francisco Bay Area, Boston, Bangalore, Hyderabad, and New York, employees are expected to come into the office 3 days a week. We were thoughtful in our approach, which is based on balancing flexibility and collaboration and grounded in feedback from our workforce, leadership team, and peers. The benefits of our hybrid office model will be shared knowledge, brainstorming sessions, communication, and in-person trust-building that cannot be replicated via Zoom.

Our Values

At Postman, we create with the same curiosity that we see in our users. We value transparency and honest communication about not only successes, but also failures. In our work, we focus on specific goals that add up to a larger vision. Our inclusive work culture ensures that everyone is valued equally as important pieces of our final product. We are dedicated to delivering the best products we can.

Equal opportunity

Postman is an Equal Employment Opportunity and Affirmative Action Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status. Headhunters and recruitment agencies may not submit resumes/CVs through this website or directly to managers. Postman does not accept unsolicited headhunter and agency resumes. Postman will not pay fees to any third-party agency or company that does not have a signed agreement with Postman.
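Relevance tuning of the kind this role calls "signal engineering" often blends text matching with behavioral signals. A hedged sketch using standard Elasticsearch function_score DSL; the public_apis index, field names, and weekly_views popularity field are hypothetical, not Postman's actual schema:

```python
# Illustrative relevance tuning: boost title matches over description
# matches, then blend in a usage-based popularity signal.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

query = {
    "function_score": {
        "query": {
            "multi_match": {
                "query": "payment api",
                # Title hits count 3x more than description hits.
                "fields": ["title^3", "description"],
            }
        },
        # Multiply text relevance by log(1 + weekly_views).
        "field_value_factor": {
            "field": "weekly_views",
            "modifier": "log1p",
            "missing": 1,
        },
        "boost_mode": "multiply",
    }
}

resp = es.search(index="public_apis", query=query)
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("title"))
```

The log1p modifier keeps very popular documents from drowning out strong text matches, a common first step before moving to learned ranking models.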

Posted 3 weeks ago

Apply

25.0 years

9 Lacs

Chennai

On-site

The Company

PayPal has been revolutionizing commerce globally for more than 25 years. Creating innovative experiences that make moving money, selling, and shopping simple, personalized, and secure, PayPal empowers consumers and businesses in approximately 200 markets to join and thrive in the global economy.

We operate a global, two-sided network at scale that connects hundreds of millions of merchants and consumers. We help merchants and consumers connect, transact, and complete payments, whether they are online or in person. PayPal is more than a connection to third-party payment networks. We provide proprietary payment solutions accepted by merchants that enable the completion of payments on our platform on behalf of our customers. We offer our customers the flexibility to use their accounts to purchase and receive payments for goods and services, as well as the ability to transfer and withdraw funds. We enable consumers to exchange funds more safely with merchants using a variety of funding sources, which may include a bank account, a PayPal or Venmo account balance, PayPal and Venmo branded credit products, a credit card, a debit card, certain cryptocurrencies, or other stored value products such as gift cards, and eligible credit card rewards. Our PayPal, Venmo, and Xoom products also make it safer and simpler for friends and family to transfer funds to each other. We offer merchants an end-to-end payments solution that provides authorization and settlement capabilities, as well as instant access to funds and payouts. We also help merchants connect with their customers, process exchanges and returns, and manage risk. We enable consumers to engage in cross-border shopping and merchants to extend their global reach while reducing the complexity and friction involved in enabling cross-border trade.

Our beliefs are the foundation for how we conduct business every day. We live each day guided by our core values of Inclusion, Innovation, Collaboration, and Wellness. Together, our values ensure that we work together as one global team with our customers at the center of everything we do - and they push us to ensure we take care of ourselves, each other, and our communities.

Job Summary:

In this job you will lead the design and implementation of complex data systems and architectures. You will work with stakeholders to understand requirements and deliver solutions. Your role involves driving best practices in data engineering, ensuring data quality, and mentoring junior engineers.

Job Description:

Essential Responsibilities: Lead the design and development of complex data pipelines for data collection and processing. Develop and maintain advanced data storage solutions. Ensure data quality and consistency through sophisticated validation and cleansing processes. Implement advanced data transformation techniques to prepare data for analysis. Collaborate with cross-functional teams to understand data requirements and provide innovative solutions. Optimize data engineering processes for performance, scalability, and reliability.

Minimum Qualifications: Minimum of 5 years of relevant work experience and a Bachelor's degree or equivalent experience.

Preferred Qualification: We are the Data Foundational Services (DFS) team within the Data Analytics and Intelligence Solutions (DAIS) organization. Our mission is to integrate data from PayPal and its brands into a unified data platform, enabling seamless data access for operational and analytical applications. We support critical business use cases, ensuring high-quality, scalable, and reliable data solutions across the enterprise.

Minimum Qualifications: 8-10 years of relevant work experience and a Bachelor's degree or equivalent experience.

Required Skills: Software Development Expertise - strong proficiency in back-end technologies (Java or Python), including building and maintaining scalable services; front-end experience (React, Angular, JavaScript, HTML, CSS) is an advantage. Data Engineering & Cloud Technologies - experience working with databases (Oracle, MySQL, PostgreSQL), Big Data technologies (Hadoop, Spark, Kafka, ElasticSearch/Solr), and cloud platforms (AWS, GCP, Azure). Scalability, Performance & Security - deep understanding of designing systems for high availability, performance, and security, preferably in regulated industries like financial services.

Subsidiary: PayPal

Travel Percent: 0

PayPal does not charge candidates any fees for courses, applications, resume reviews, interviews, background checks, or onboarding. Any such request is a red flag and likely part of a scam. To learn more about how to identify and avoid recruitment fraud, please visit https://careers.pypl.com/contact-us.

For the majority of employees, PayPal's balanced hybrid work model offers 3 days in the office for effective in-person collaboration and 2 days at your choice of either the PayPal office or your home workspace, ensuring that you equally have the benefits and conveniences of both locations.

Our Benefits: At PayPal, we're committed to building an equitable and inclusive global economy. And we can't do this without our most important asset - you. That's why we offer benefits to help you thrive in every stage of life. We champion your financial, physical, and mental health by offering valuable benefits and resources to help you care for the whole you. We have great benefits including a flexible work environment, employee share options, health and life insurance, and more. To learn more about our benefits, please visit https://www.paypalbenefits.com.

Who We Are: Commitment to Diversity and Inclusion. PayPal provides equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, pregnancy, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state, or local law. In addition, PayPal will provide reasonable accommodations for qualified individuals with disabilities. If you are unable to submit an application because of incompatible assistive technology or a disability, please contact us at paypalglobaltalentacquisition@paypal.com.

Belonging at PayPal: Our employees are central to advancing our mission, and we strive to create an environment where everyone can do their best work with a sense of purpose and belonging. Belonging at PayPal means creating a workplace with a sense of acceptance and security where all employees feel included and valued. We are proud to have a diverse workforce reflective of the merchants, consumers, and communities that we serve, and we continue to take tangible actions to cultivate inclusivity and belonging at PayPal.

For any general requests for consideration of your skills, please Join our Talent Community. We know the confidence gap and imposter syndrome can get in the way of meeting spectacular candidates.
Please don’t hesitate to apply.
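The validation and cleansing responsibilities described above are commonly implemented as a batch transformation. A minimal PySpark sketch, under the assumption of a JSON source and hypothetical column names and S3 paths (nothing here is PayPal's actual pipeline):

```python
# Illustrative validation-and-cleansing step: de-duplicate records,
# reject invalid rows into a quarantine location, normalize fields.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cleanse-transactions").getOrCreate()

raw = spark.read.json("s3://example-bucket/transactions/")  # assumed path

cleansed = (
    raw
    .dropDuplicates(["transaction_id"])                   # de-dupe on key
    .filter(F.col("amount").isNotNull() & (F.col("amount") > 0))
    .withColumn("currency", F.upper(F.col("currency")))   # normalize case
)

# Quarantine rows failing validation instead of silently dropping them.
rejected = raw.join(cleansed, on="transaction_id", how="left_anti")
rejected.write.mode("overwrite").parquet("s3://example-bucket/rejects/")
cleansed.write.mode("overwrite").parquet("s3://example-bucket/clean/")
```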

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Key Responsibilities

Design, develop, and test robust and scalable backend systems using Python. Develop and integrate RESTful APIs and third-party services. Write clean, maintainable, and well-documented code. Collaborate with front-end developers, designers, and product managers to implement features. Optimize application performance and resolve bugs and issues. Perform unit and integration testing to ensure software quality. Participate in code reviews and mentor junior developers as needed. Maintain and enhance existing applications.

Requirements

5+ years of professional experience in Python development. Strong understanding of core Python concepts and OOP. Experience with at least one web framework like Django, Flask, or FastAPI. Familiarity with RESTful API development and integration. Proficiency in working with relational and NoSQL databases (e.g., MySQL, MongoDB) and ORMs (e.g., SQLAlchemy, Django ORM). Good knowledge of unit testing frameworks like unittest and pytest. Experience with version control systems like Git. Experience with microservices architecture. Understanding of application security best practices (e.g., SQL injection prevention, secure API development). Exposure to message brokers like RabbitMQ, Kafka, or Celery. Experience with the ELK stack (Elasticsearch, Logstash, Kibana) for logging and monitoring. Basic knowledge of Docker and containerization concepts. Understanding of CI/CD pipelines and tools like Jenkins, GitLab CI, or GitHub Actions.

Preferred Skills

Basic understanding of software design patterns (e.g., Singleton, Factory, Strategy). Exposure to AI and automation tools such as Flowise and n8n.
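Several of the listed requirements - a Python web framework, an ORM, and SQL injection prevention - come together in a snippet like this hedged sketch (FastAPI with SQLAlchemy; the users table and SQLite connection string are illustrative, not from the posting):

```python
# Minimal sketch: a FastAPI endpoint backed by a parameterized
# SQLAlchemy query, the standard guard against SQL injection.
from fastapi import FastAPI, HTTPException
from sqlalchemy import create_engine, text

app = FastAPI()
engine = create_engine("sqlite:///example.db")  # assumed local database

@app.get("/users/{user_id}")
def get_user(user_id: int):
    # Bound parameters (:uid) are escaped by the driver; never build
    # SQL by string-formatting user input into the statement.
    with engine.connect() as conn:
        row = conn.execute(
            text("SELECT id, name FROM users WHERE id = :uid"),
            {"uid": user_id},
        ).fetchone()
    if row is None:
        raise HTTPException(status_code=404, detail="user not found")
    return {"id": row.id, "name": row.name}
```

The same endpoint is straightforward to cover with pytest via FastAPI's TestClient, tying together the testing requirement as well.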

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Data Engineer

Experience: 2-4 Years
Salary: Competitive
Preferred Notice Period: 30 Days
Shift: 9:00 AM to 6:00 PM IST
Opportunity Type: Office (Gurugram)
Placement Type: Permanent

(Note: This is a requirement for one of Uplers' clients.)

Must-have skills: Python, Airflow, and Elasticsearch.

Trademo (one of Uplers' clients) is looking for a Data Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

Role Overview

What will you be doing here? Responsible for the maintenance and growth of a 50TB+ data pipeline serving global SaaS products for businesses, including onboarding new data and collaborating with pre-sales to articulate technical solutions. Solve complex problems across large datasets by applying algorithms, particularly within the domains of Natural Language Processing (NLP) and Large Language Models (LLMs). Leverage bleeding-edge technology to work with large volumes of complex data. Be hands-on in development: Python, Pandas, NumPy, and ETL frameworks, with preferred exposure to distributed computing frameworks like Apache Spark, Apache Kafka, Apache Airflow, and Elasticsearch. Along with individual data engineering contributions, actively help peers and junior team members with architecture and code to ensure the development of scalable, accurate, and highly available solutions. Collaborate with teams, share knowledge via tech talks, and promote tech and engineering best practices within the team.

Requirements

B.Tech/M.Tech in Computer Science from IIT or equivalent Tier 1 colleges. 3+ years of relevant work experience in data engineering or related roles. Proven ability to efficiently work with a high variety and volume of data (50TB+ pipeline experience is a plus). Solid understanding of, and preferred exposure to, NoSQL databases, including Elasticsearch, MongoDB, and GraphDB. Basic understanding of working within cloud infrastructure and cloud-native apps (AWS, Azure, IBM, etc.). Exposure to core data engineering concepts and tools: data warehousing, ETL processes, SQL, and NoSQL databases. Great problem-solving ability over larger sets of data and the ability to apply algorithms, with experience using NLP and LLMs a plus. Willingness to learn and apply new techniques and technologies to extract intelligence from data, with prior exposure to Machine Learning and NLP being a significant advantage. Sound understanding of algorithms and data structures. Ability to write well-crafted, readable, testable, maintainable, and modular code.

Desired Profile

A hard-working, humble disposition. Desire to make a strong impact on the lives of millions through your work. Capacity to communicate well with stakeholders as well as team members and be an effective interface between the Engineering and Product/Business teams. A quick thinker who can adapt to a fast-paced startup environment and work with minimum supervision.

What we offer

At Trademo, we want our employees to be comfortable with their benefits so they can focus on doing the work they love: parental leave (maternity and paternity), health insurance, flexible time off, and stock options.

How to apply for this opportunity

Easy 3-step process: 1. Click on Apply and register or log in on our portal. 2. Upload an updated resume and complete the screening form. 3. Increase your chances of getting shortlisted and meet the client for the interview!

About Our Client

Trademo is a Global Supply Chain Intelligence SaaS company, headquartered in Palo Alto, US. Trademo collects public and private data on global trade transactions, sanctioned parties, trade tariffs, ESG, and other events using its proprietary algorithms.

About Uplers

Our goal is to make hiring and getting hired reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities apart from this on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
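Airflow, listed as a must-have skill, structures pipelines like the 50TB+ one described above as DAGs of dependent tasks. A minimal sketch assuming Airflow 2.x; the DAG id, schedule, and task bodies are hypothetical placeholders, not Trademo's actual pipeline:

```python
# Illustrative Airflow DAG: extract new trade records, transform them
# with pandas, and index the results into Elasticsearch.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # e.g., pull a day's worth of records from an upstream store
    ...

def transform(**context):
    # e.g., cleanse and normalize with pandas before loading
    ...

def load(**context):
    # e.g., bulk-index the transformed documents into Elasticsearch
    ...

with DAG(
    dag_id="trade_data_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)

    t1 >> t2 >> t3  # linear dependency: extract -> transform -> load
```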

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

As a Senior Software DevOps Engineer, you will lead the design, implementation, and evolution of telemetry pipelines and DevOps automation that enable next-generation observability for distributed systems. You will blend a deep understanding of OpenTelemetry architecture with strong DevOps practices to build a reliable, high-performance, self-service observability platform across hybrid cloud environments (AWS & Azure). Your mission: empower engineering teams with actionable insights through rich metrics, logs, and traces, while championing automation and innovation at every layer.

WHAT YOU WILL BE DOING

Observability Strategy & Implementation: Architect and manage scalable observability solutions using OpenTelemetry (OTel), encompassing: Collectors - design and deploy OTel Collectors (agent/gateway modes) for ingesting and exporting telemetry across services. Instrumentation - guide teams on auto/manual instrumentation of services (metrics, traces, and logs). Export Pipelines - build telemetry pipelines to route data to backends like Grafana, Prometheus, Loki, New Relic, and Azure Monitor. Processors & Extensions - leverage OTel processors (batching, filtering, resource detection) and extensions for advanced enrichment and routing.

DevOps Automation & Platform Reliability: Own the CI/CD experience using GitLab Pipelines, integrating infrastructure automation with Terraform, Docker, and scripting in Bash and Python. Build resilient and reusable infrastructure-as-code modules across AWS and Azure ecosystems. Manage containerized workloads, registries, secrets, and secure cloud-native deployments with best practices.

Cloud-Native Enablement: Develop observability blueprints for cloud-native apps across AWS (ECS, EC2, VPC, IAM, CloudWatch) and Azure (AKS, App Services, Monitor). Optimize cost and performance of telemetry pipelines while ensuring SLA/SLO adherence for observability services.

Monitoring, Dashboards, and Alerting: Build and maintain intuitive, role-based dashboards in Grafana, New Relic, and similar tools, enabling real-time visibility into service health, business KPIs, and SLOs. Implement alerting best practices (noise reduction, deduplication, alert grouping) integrated with incident management systems.

Innovation & Technical Leadership: Drive cross-team observability initiatives that reduce MTTR and elevate engineering velocity. Champion innovation projects, including self-service observability onboarding, log/metric reduction strategies, AI-assisted root cause detection, and more. Mentor engineering teams on instrumentation, telemetry standards, and operational excellence.

WHAT YOU BRING

6+ years of experience in DevOps, Site Reliability Engineering, or Observability roles. Deep expertise with OpenTelemetry, including Collector configurations, receivers/exporters (OTLP, HTTP, Prometheus, Loki), and semantic conventions. Proficiency in GitLab CI/CD, Terraform, Docker, and scripting (Python, Bash, Go). Strong hands-on experience with AWS and Azure services, cloud automation, and cost optimization. Proficiency with observability backends: Grafana, New Relic, Prometheus, Loki, or equivalent APM/log platforms. Passion for building automated, resilient, and scalable telemetry pipelines. Excellent documentation and communication skills to drive adoption and influence engineering culture.

Nice to Have

Certifications in AWS, Azure, or Terraform. Experience with OpenTelemetry SDKs in Go, Java, or Node.js. Familiarity with SLO management, error budgets, and observability-as-code approaches. Exposure to event streaming (Kafka, RabbitMQ), Elasticsearch, Vault, and Consul.
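Manual OpenTelemetry instrumentation, one of the responsibilities above, looks roughly like the following in Python. This is a hedged sketch assuming the opentelemetry-sdk and OTLP gRPC exporter packages and a Collector listening on the default local port; the service name and span attributes are invented for illustration:

```python
# Minimal sketch: manual OTel instrumentation exporting spans over
# OTLP/gRPC to a Collector (the agent/gateway pattern the role covers).
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import (
    OTLPSpanExporter,
)

# Resource attributes identify the service in downstream backends.
resource = Resource.create({"service.name": "checkout-service"})  # hypothetical
provider = TracerProvider(resource=resource)
provider.add_span_processor(
    BatchSpanProcessor(  # batches spans, mirroring the Collector's batching
        OTLPSpanExporter(endpoint="localhost:4317", insecure=True)
    )
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("process-order") as span:
    span.set_attribute("order.items", 3)  # custom signal for dashboards
    # ... business logic here ...
```

From the Collector onward, processors and exporters route these spans to whichever backend (Grafana Tempo, New Relic, etc.) the pipeline targets.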

Posted 3 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Introduction

In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities

As a Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include: Implementing and validating predictive models, as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques. Designing and implementing enterprise search applications such as Elasticsearch and Splunk to meet client requirements. Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours. Building teams and writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.

Preferred Education

Master's Degree

Required Technical And Professional Expertise

Expertise in Data Warehousing / Information Management / Data Integration / Business Intelligence using the ETL tool Informatica PowerCenter. Knowledge of cloud, Power BI, and data migration on cloud. Experience in Unix shell scripting and Python. Experience with relational SQL, Big Data, etc.

Preferred Technical And Professional Experience

Knowledge of MS Azure Cloud. Experience in Informatica PowerCenter. Experience in Unix shell scripting and Python.

Posted 3 weeks ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Zeta

Zeta is a Next-Gen Banking Tech company that empowers banks and fintechs to launch banking products for the future. It was founded by Bhavin Turakhia and Ramki Gaddipati in 2015. Our flagship processing platform - Zeta Tachyon - is the industry's first modern, cloud-native, and fully API-enabled stack that brings together issuance, processing, lending, core banking, fraud & risk, and many more capabilities as a single-vendor stack. 15M+ cards have been issued on our platform globally. Zeta is actively working with the largest banks and fintechs in multiple global markets, transforming customer experience for multi-million card portfolios. Zeta has 1,700+ employees - with over 70% of roles in R&D - across locations in the US, EMEA, and Asia. We raised $280 million at a $1.5 billion valuation from Softbank, Mastercard, and other investors in 2021. Learn more @ www.zeta.tech, careers.zeta.tech, LinkedIn, Twitter.

The Role

As part of the Risk & Compliance team within the Engineering division at Zeta, the Application Security Manager is tasked with safeguarding all mobile and web applications and APIs. This involves identifying vulnerabilities through testing and ethical hacking, while also educating developers and DevOps teams on how to resolve them. Your primary goal will be to ensure the security of Zeta's applications and platforms. As a manager, you'll be responsible for securing all of Zeta's products. In this individual contributor role, you will report directly to the Chief Information Security Officer (CISO).

The role involves ensuring the security of web and mobile applications, APIs, and infrastructure by conducting regular VAPT. It requires providing expert guidance to developers on how to address and fix security vulnerabilities, along with performing code reviews to identify potential security issues. The role also includes actively participating in application design discussions to ensure security is integrated from the beginning, and leading threat modeling exercises to identify potential threats. Additionally, the profile focuses on developing and promoting secure coding practices, educating developers and QA engineers on security standards for secure coding, data handling, network security, and encryption. The role also entails evaluating and integrating security testing tools like SAST, DAST, and SCA into the CI/CD pipeline to enhance continuous security integration.

Responsibilities

Guide Security and Privacy Initiatives: Actively participate in design reviews and threat modeling sessions to help shape the security and privacy approach for technology projects, ensuring security is embedded at all stages of application development. Ensure Secure Application Development: Collaborate with developers and product managers to ensure that applications are securely developed, hardened, and aligned with industry best practices. Project Scope Management: Define the scope for security initiatives, ensuring continuous adherence throughout each project phase, from initiation to sustenance/maintenance. Drive Internal Adoption and Visibility: Ensure that security projects are well understood and adopted by internal stakeholders, fostering a culture of security awareness within the organization. Security Engineering Expertise: Serve as a technical expert and security champion within Zeta, providing guidance and expertise on security best practices across the organization. Team Leadership and Development: Make decisions on hiring and lead the hiring process to build a skilled security team; define and drive improvements in the hiring process to attract top security talent; mentor and guide developers and QA teams on secure coding practices and security awareness. Security Tool and Gap Assessment: Continuously assess and recommend tools to address gaps in application security, ensuring the team is equipped with the best resources to identify and address vulnerabilities. Stakeholder Liaison: Collaborate with both internal and external stakeholders to ensure alignment on security requirements and deliverables, acting as the main point of contact for all security-related matters within the team. Bug Bounty Program Management: Evaluate and triage security bugs reported through the Bug Bounty program, working with relevant teams to address and resolve issues effectively. Own Security Posture: Take ownership of the security posture of applications across the business units, ensuring that security best practices are consistently applied and maintained.

Skills

Hands-on experience in Vulnerability Assessment (VA) and Penetration Testing (PT) across web, mobile, API, and network/infrastructure environments. Deep understanding of the OWASP Top 10 and their respective attack and defense mechanisms. Strong exposure to Secure SDLC activities, threat modeling, and secure coding practices. Experience with both commercial and open-source security tools, including Burp Suite, AppScan, OWASP ZAP, BeEF, Metasploit, Qualys, Nipper, Nessus, and Snyk. Expertise in identifying and exploiting business logic vulnerabilities. Solid understanding of cryptography, PKI-based systems, and TLS protocols. Proficiency in various AuthN/AuthZ frameworks (OIDC, OAuth, SAML) and the ability to read, write, and understand Java code. Experience with static analysis and code reviews using tools like Snyk, Fortify, Veracode, Checkmarx, and SonarQube. Hands-on experience in reverse engineering mobile apps and using tools like Dex2jar, ADB, Drozer, Clang, iMAS, and Frida/Objection for dynamic instrumentation. Experience conducting penetration tests and security assessments on internal/external networks, Windows/Linux environments, and cloud infrastructure (primarily AWS). Ability to identify and exploit security vulnerabilities and misconfigurations in Windows and Linux servers. Proficiency in shell scripting and automating tasks with tools such as Python or Ruby. Familiarity with PA-DSS, PCI SSF (S3, SSLC), and other security standards like PCI DSS, DPSC, ASVS, and NIST. Understanding of Java frameworks like Spring Boot, CI/CD processes, and tools like Jenkins and Bitrise. In-depth knowledge of cloud infrastructure (AWS, Azure), including VPC/VNet, S3 buckets, IAM, Security Groups, blob stores, load balancers, Docker containers, and Kubernetes. Solid understanding of agile development practices. Active participation in bug bounty programs (HackerOne, Bugcrowd, etc.) and experience with hackathons and Capture the Flag (CTF) competitions. Knowledge of AWS/Azure services, including network configuration and security management. Experience with databases (PostgreSQL, Redshift, MySQL) and other data storage solutions like Elasticsearch and S3 buckets.

Preferred Certifications: OSCP, OSWE, GWAPT, AWAE, AWS Certified Security Specialist, CompTIA Security+.

Experience And Qualifications

12 to 18 years of overall experience in application security, with a strong background in identifying and mitigating vulnerabilities in software applications. A background in development and experience in the fintech sector is a plus. Bachelor of Technology (BE/B.Tech), M.Tech, or ME in Computer Science or an equivalent degree from an engineering college/university.

Life At Zeta

At Zeta, we want you to grow to be the best version of yourself by unlocking the great potential that lies within you. This is why our core philosophy is 'People Must Grow.' We recognize your aspirations, act as enablers by bringing you the right opportunities, and let you grow as you chase disruptive goals. Life at Zeta is adventurous and exhilarating at the same time. You get to work with some of the best minds in the industry and experience a culture that values the diversity of thought. If you want to push boundaries, learn continuously and grow to be the best version of yourself, Zeta is the place to be! Explore life at Zeta.

Zeta is an equal opportunity employer. At Zeta, we are committed to equal employment opportunities regardless of job history, disability, gender identity, religion, race, marital/parental status, or other special status. We are proud to be an equitable workplace that welcomes individuals from all walks of life if they fit the roles and responsibilities.

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Data Engineer

Experience: 2-4 Years
Salary: Competitive
Preferred Notice Period: 30 Days
Shift: 9:00 AM to 6:00 PM IST
Opportunity Type: Office (Gurugram)
Placement Type: Permanent

(Note: This is a requirement for one of Uplers' clients.)

Must-have skills: Python, Airflow, and Elasticsearch.

Trademo (one of Uplers' clients) is looking for a Data Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

Role Overview

What will you be doing here? Responsible for the maintenance and growth of a 50TB+ data pipeline serving global SaaS products for businesses, including onboarding new data and collaborating with pre-sales to articulate technical solutions. Solve complex problems across large datasets by applying algorithms, particularly within the domains of Natural Language Processing (NLP) and Large Language Models (LLMs). Leverage bleeding-edge technology to work with large volumes of complex data. Be hands-on in development: Python, Pandas, NumPy, and ETL frameworks, with preferred exposure to distributed computing frameworks like Apache Spark, Apache Kafka, Apache Airflow, and Elasticsearch. Along with individual data engineering contributions, actively help peers and junior team members with architecture and code to ensure the development of scalable, accurate, and highly available solutions. Collaborate with teams, share knowledge via tech talks, and promote tech and engineering best practices within the team.

Requirements

B.Tech/M.Tech in Computer Science from IIT or equivalent Tier 1 colleges. 3+ years of relevant work experience in data engineering or related roles. Proven ability to efficiently work with a high variety and volume of data (50TB+ pipeline experience is a plus). Solid understanding of, and preferred exposure to, NoSQL databases, including Elasticsearch, MongoDB, and GraphDB. Basic understanding of working within cloud infrastructure and cloud-native apps (AWS, Azure, IBM, etc.). Exposure to core data engineering concepts and tools: data warehousing, ETL processes, SQL, and NoSQL databases. Great problem-solving ability over larger sets of data and the ability to apply algorithms, with experience using NLP and LLMs a plus. Willingness to learn and apply new techniques and technologies to extract intelligence from data, with prior exposure to Machine Learning and NLP being a significant advantage. Sound understanding of algorithms and data structures. Ability to write well-crafted, readable, testable, maintainable, and modular code.

Desired Profile

A hard-working, humble disposition. Desire to make a strong impact on the lives of millions through your work. Capacity to communicate well with stakeholders as well as team members and be an effective interface between the Engineering and Product/Business teams. A quick thinker who can adapt to a fast-paced startup environment and work with minimum supervision.

What we offer

At Trademo, we want our employees to be comfortable with their benefits so they can focus on doing the work they love: parental leave (maternity and paternity), health insurance, flexible time off, and stock options.

How to apply for this opportunity

Easy 3-step process: 1. Click on Apply and register or log in on our portal. 2. Upload an updated resume and complete the screening form. 3. Increase your chances of getting shortlisted and meet the client for the interview!

About Our Client

Trademo is a Global Supply Chain Intelligence SaaS company, headquartered in Palo Alto, US. Trademo collects public and private data on global trade transactions, sanctioned parties, trade tariffs, ESG, and other events using its proprietary algorithms.

About Uplers

Our goal is to make hiring and getting hired reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities apart from this on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 weeks ago

Apply