
3072 Elasticsearch Jobs - Page 6

JobPe aggregates results for easy access, but you apply directly on the original job portal.

3.0 - 5.0 years

2 - 12 Lacs

Mohali

On-site

About the Role
We are looking for a DevOps Engineer with hands-on experience in AWS cloud, e-commerce platforms, and modern application stacks. You will support development and operations teams by ensuring reliable deployments, automation, monitoring, and scalable infrastructure for Next.js, .NET APIs, and Adobe Commerce Enterprise and Magento Community applications.

Key Responsibilities
- Deploy and manage AWS services (ECS Fargate, EC2, S3, RDS, CloudFront, Route 53).
- Build and maintain CI/CD pipelines for Next.js, .NET Core, Adobe Commerce, and Magento.
- Containerize applications using Docker and manage orchestration.
- Configure and maintain caching systems (Redis, Varnish) and search engines (Elasticsearch/OpenSearch) for Magento.
- Monitor systems and set up alerts with CloudWatch, Grafana, New Relic, etc.
- Automate infrastructure tasks using Terraform or CloudFormation.
- Support application deployments and troubleshoot production issues.
- Apply security best practices in IAM, Secrets Manager, and network security.

Qualifications
- 3-5 years of experience in DevOps or cloud engineering.
- Strong knowledge of AWS core services.
- Practical experience with CI/CD (GitHub Actions/GitLab/Jenkins).
- Experience with Next.js deployment models (SSR/ISR).
- Familiarity with .NET APIs in cloud or container environments.
- Understanding of Magento Community Edition setup and scaling.
- Knowledge of Adobe Commerce Enterprise Edition CI/CD pipelines.
- Good knowledge of databases (MySQL, PostgreSQL, SQL Server).
- Scripting experience (Bash, Python, or PowerShell).

Soft Skills
- Strong problem-solving and troubleshooting mindset.
- Good collaboration with dev and QA teams.
- Willingness to learn advanced cloud and e-commerce practices.

Job Types: Full-time, Permanent
Pay: ₹275,736.97 - ₹1,251,508.07 per year
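The Magento-plus-Elasticsearch pairing in this posting typically involves tuning index settings for catalog search. A minimal sketch of what that looks like (the index layout, analyzer choices, and field names below are illustrative assumptions, not taken from the posting):

```python
# Sketch: build an Elasticsearch create-index body for a product catalog,
# with an edge-ngram analyzer for search-as-you-type on product names.
# Index layout, field names, and shard counts are illustrative assumptions.

def catalog_index_body(shards=1, replicas=1):
    """Return a create-index request body for product search."""
    return {
        "settings": {
            "number_of_shards": shards,
            "number_of_replicas": replicas,
            "analysis": {
                "filter": {
                    "autocomplete_filter": {
                        "type": "edge_ngram",  # prefixes: "sh", "sho", "shoe"
                        "min_gram": 2,
                        "max_gram": 15,
                    }
                },
                "analyzer": {
                    "autocomplete": {
                        "type": "custom",
                        "tokenizer": "standard",
                        "filter": ["lowercase", "autocomplete_filter"],
                    }
                },
            },
        },
        "mappings": {
            "properties": {
                # index-time ngrams, plain analyzer at query time
                "name": {"type": "text", "analyzer": "autocomplete",
                         "search_analyzer": "standard"},
                "sku": {"type": "keyword"},
                "price": {"type": "float"},
            }
        },
    }

body = catalog_index_body(shards=3)
print(body["settings"]["number_of_shards"])  # 3
```

In practice this body would be sent via a client such as `elasticsearch-py`'s `indices.create`; building it as plain data keeps it easy to test and version.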

Posted 3 days ago

Apply

2.0 years

3 - 4 Lacs

Noida

On-site

We're looking for a highly skilled and motivated Senior MERN Full Stack Developer with a minimum of 2 years of experience to join our team at Help Study Abroad. You'll be responsible for designing, developing, and deploying scalable and robust web applications using the MERN stack (MongoDB, Express.js, React.js, Node.js). This role requires a strong understanding of microservices architecture, integration of advanced technologies like AI and Elasticsearch, and proficiency in DevOps practices.

Responsibilities:
- Develop and maintain backend microservices for user authentication (JWT), course management, and AI-powered recommendations.
- Design and implement efficient data storage solutions using MongoDB.
- Leverage Elasticsearch for advanced full-text search capabilities.
- Optimize application performance through Redis caching for frequently accessed data.
- Build responsive and intuitive user interfaces using React.js, integrating seamlessly with backend APIs.
- Demonstrate strong understanding and practical application of state management (e.g., Redux, Zustand, React Context API) and client-side caching.
- Implement and manage CI/CD pipelines (e.g., GitHub Actions, Jenkins) for automated builds, tests, and deployments.
- Containerize applications using Docker and understand deployment strategies on Linux environments (e.g., PM2, Nginx).
- Propose and implement solutions for inter-service communication and data streaming using technologies like Apache Kafka.
- Contribute to the entire software development lifecycle: conceptualizing, designing, developing, testing, and deploying.
- Write clean, well-documented, and maintainable code with a focus on best practices and error handling.

Qualifications:
- Minimum 2 years of experience as a MERN stack developer.
- Strong proficiency in JavaScript/TypeScript, Next.js, Node.js, Express.js, React.js, and MongoDB.
- Experience with microservices architecture and API design.
- Hands-on experience with Elasticsearch and Redis.
- Familiarity with AI integration, specifically with services like Gemini AI.
- Proficiency in setting up and managing CI/CD pipelines.
- Experience with Docker for containerization.
- Understanding of Linux server deployment considerations.
- Conceptual understanding and practical application of message brokers like Apache Kafka.
- Solid understanding of state management libraries (e.g., Redux) and client-side caching strategies.
- Excellent problem-solving skills and attention to detail.
- Ability to work independently and as part of a collaborative team.

Working Days and Office Hours: Monday to Saturday, 10 AM to 6:30 PM.

Why Join Us: If you are obsessed with building secure, scalable, and robust backend solutions with a keen eye for frontend design, UI, and world-class user experience, then we are excited to hear from you. We offer a fast-paced work environment and amazing career growth opportunities for passionate and dedicated candidates.

Follow us to keep updated with current and upcoming jobs:
- https://www.linkedin.com/company/helpstudyabroad-dot-com/
- https://www.instagram.com/helpstudyabroad.co/
- https://www.youtube.com/@HelpStudyAbroad

Job Types: Contractual / Temporary, Full-time
Contract length: 3-6 months
Pay: ₹25,000.00 - ₹35,000.00 per month
Benefits: Health insurance, Paid time off
Work Location: In person
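The "advanced full-text search" responsibility above usually translates into a `multi_match` query with fuzziness for typo tolerance. A minimal sketch, assuming hypothetical `title`/`description` fields and boost values not taken from the posting:

```python
# Sketch: build an Elasticsearch multi_match query body for fuzzy
# course search with pagination. Field names and boosts are assumptions.

def course_search_query(text, page=0, size=10):
    """Build a query body: fuzzy full-text match over title/description."""
    return {
        "from": page * size,  # offset-based pagination
        "size": size,
        "query": {
            "multi_match": {
                "query": text,
                "fields": ["title^3", "description"],  # boost title hits
                "fuzziness": "AUTO",  # tolerate small typos
            }
        },
    }

q = course_search_query("machne lerning", page=2)
print(q["from"])  # 20
```

The body is plain data, so the same builder works whether it is sent from Node.js or Python; only the client call differs.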

Posted 3 days ago

Apply

2.0 years

0 Lacs

Karnataka, India

On-site

Join Our Team as a Software Development Engineer!
Are you passionate about coding and excited to be part of the development of scalable cloud systems? If so, we want you on our Reference Master Application Development team in India. We are seeking an experienced professional who is highly motivated to grow as a full stack developer. We need someone with expertise in Java, Spring, and REST who enjoys exploring other areas such as UI and DevOps.

About You – Experience, Education, Skills, and Accomplishments:
- Bachelor's degree or equivalent in computer science, software engineering, or a related field.
- Minimum 2 years of experience in Java, Spring Boot, and JUnit testing.
- Strong knowledge of code improvement and optimization.
- High motivation to self-learn or become an expert in any of these technologies: Angular, TypeScript, Groovy, Elasticsearch.

It Would Be Great If You Also Have:
- A keen interest in code review and analysis.
- The ability to quickly adapt to any programming language.
- Experience with web UI and browser developer tools (AWS ECS, REST API, Angular, JavaScript, Docker, TypeScript, Groovy, Jenkins, Sonar).
- Experience with Elasticsearch or NoSQL databases.

What Will You Be Doing in This Role?
- Be a sought-after full stack technologist who can analyze code and solve problems independently.
- Deliver robust products.
- Identify and solve problematic areas with innovative solutions.
- Review the existing application codebase and recommend improvements.
- Write clean, efficient, and maintainable code in accordance with coding standards.
- Work closely with higher-level engineers to increase functional knowledge.
- Develop and apply an understanding of the software development lifecycle and delivery methodology.
- Suggest alternative methodologies or techniques for achieving desired results.
- Maintain awareness of technical advances.
- Manually test all assigned applications and write JUnit tests for them.
- Participate as a team member on various engineering projects.
- Write application technical documentation.
- Follow team policies, procedures, and work instructions.

About the Team: The Reference Master Development team consists of multiple squads working on various initiatives, from application UI/API to data testing, enhancing data quality, and optimizing application performance for our customers.

Hours of Work: Full-time, 45 hours per week; hybrid working model.

At Clarivate, we are committed to providing equal employment opportunities for all qualified persons with respect to hiring, compensation, promotion, training, and other terms, conditions, and privileges of employment. We comply with applicable laws and regulations governing non-discrimination in all locations.

Posted 3 days ago

Apply

8.0 years

0 Lacs

Delhi, India

Remote

About Us: HighLevel is an AI-powered, all-in-one white-label sales & marketing platform that empowers agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. We are proud to support a global and growing community of over 2 million businesses, comprised of agencies, consultants, and businesses of all sizes and industries. HighLevel empowers users with all the tools needed to capture, nurture, and close new leads into repeat customers. As of mid-2025, HighLevel processes over 15 billion API hits and handles more than 2.5 billion message events every day. Our platform manages over 470 terabytes of data distributed across five databases, operates a network of over 250 microservices, and supports over 1 million domain names.

Our People: With over 1,500 team members across 15+ countries, we operate in a global, remote-first environment. We are building more than software; we are building a global community rooted in creativity, collaboration, and impact. We take pride in cultivating a culture where innovation thrives, ideas are celebrated, and people come first, no matter where they call home.

Our Impact: As of mid-2025, our platform powers over 1.5 billion messages, helps generate over 200 million leads, and facilitates over 20 million conversations for the more than 2 million businesses we serve each month. Behind those numbers are real people growing their companies, connecting with customers, and making their mark, and we get to help make that happen.

About the Role: HighLevel is looking for a strong Engineering Manager to lead our Reporting Team, a high-impact group responsible for building the platform that powers analytics, dashboards, and business insights for our users. This team's mission is to build a robust, secure, and flexible system where business users and developers can create dashboards, widgets, and reports from any data source, including product data, custom business metrics, and third-party integrations. The ideal candidate will be equally comfortable leading engineers, managing delivery, shaping product strategy, and solving deeply technical challenges in a distributed, multi-database environment.

Requirements:
- 8+ years of experience in software engineering, including 2+ years in engineering management
- Experience building analytics or reporting products, dashboarding frameworks, or embedded data platforms
- Strong understanding of data architecture, data modeling, and real-time/batch pipelines
- Familiarity with multiple data stores and query engines such as SQL, MongoDB, Firestore, Elasticsearch, ClickHouse, and Snowflake
- Comfort with event-driven systems using tools like GCP Pub/Sub, Cloud Tasks, and subscriptions
- Experience managing full-stack teams working with NestJS (backend) and Vue.js (frontend)
- Track record of building platforms that enable internal and external developers to build on top of a core system
- Excellent communication and cross-functional leadership skills

Responsibilities:
- Manage and mentor a team of engineers, ensuring growth, motivation, and high performance
- Drive the architecture and delivery of a flexible reporting platform with customizable widgets, embeddable dashboards, and secure data pipelines
- Collaborate with product, design, and platform teams to define and deliver on reporting product goals with strong user impact
- Guide system design involving streaming and batch pipelines, data modeling, and dashboard generation across multiple data sources
- Oversee monitoring, alerting, and system health, ensuring robust observability for all reporting services
- Ensure performance, scalability, and governance of reporting features for hundreds of thousands of accounts
- Collaborate cross-functionally to integrate with internal and third-party systems
- Own team delivery, roadmap prioritization, sprint planning, and retrospectives
- Foster engineering best practices, including code quality, observability, testing, and CI/CD processes

Bonus Points:
- Experience managing teams building self-serve analytics, embeddable dashboards, or data visualization frameworks
- Exposure to forecasting and anomaly detection features in reporting systems
- Familiarity with multi-tenant SaaS environments, access control, and data isolation
- Understanding of observability practices and cost/performance tradeoffs across analytical queries

EEO Statement: At HighLevel, we value diversity. In fact, we understand it makes our organisation stronger. We are committed to inclusive hiring/promotion practices that evaluate skill sets, abilities, and qualifications without regard to any characteristic unrelated to performing the job at the highest level. Our objective is to foster an environment where really talented employees from all walks of life can be their true and whole selves, cherished and welcomed for their differences while providing excellent service to our clients and learning from one another along the way! Reasonable accommodations may be made to enable individuals with disabilities to perform essential functions. #NJ1

Posted 4 days ago

Apply

5.0 years

0 Lacs

India

On-site

We're Hiring: Data Architect at Neotas
We're looking for an experienced Data Architect to design, scale, and optimize high-performance, enterprise-grade data systems.

Must-Haves:
- Deep expertise in relational and non-relational databases (PostgreSQL, MongoDB, Elasticsearch, graph databases)
- Strong experience with AWS cloud-native databases and services
- Proven track record of building ETL pipelines and working with Databricks and Snowflake
- Proficiency in Python for data engineering and automation
- 5+ years of recent experience in an architectural role, driving scalable and secure database solutions

Good-to-Have:
- Certifications in PostgreSQL, MongoDB, or cloud database architecture

Immediate joiners preferred. If you're excited about challenging the norm, thrive in a dynamic environment, and want to be part of a team that's shaping the future, then Neotas is the place for you. Come be a part of our journey, where every day is an opportunity to create something extraordinary. Interested candidates can drop their CVs at pooja.mavely@neotas.com

Posted 4 days ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Key Responsibilities

Leadership & Mentoring
- Lead a team of Java developers, providing guidance, mentorship, and technical expertise.
- Facilitate effective communication across teams and stakeholders, ensuring alignment on project goals.
- Conduct code reviews, ensuring high-quality standards, and provide constructive feedback.
- Collaborate with product managers, architects, and other stakeholders to define technical requirements.

Design & Architecture
- Design and implement scalable, maintainable, and high-performance Java applications.
- Define and maintain application architecture, ensuring consistency and scalability.
- Lead architectural discussions and decisions, ensuring solutions meet business requirements and technical specifications.

Development & Coding
- Write clean, efficient, and reusable Java code using best practices.
- Ensure that solutions adhere to coding standards and follow industry best practices for performance, security, and scalability.
- Develop RESTful APIs and integrate third-party services and applications.
- Leverage Java frameworks and tools such as Spring, Hibernate, and Maven to build applications.

Continuous Improvement
- Drive continuous improvement in development processes, tools, and methodologies.
- Keep up to date with new technologies, frameworks, and tools in the Java ecosystem and evaluate their potential benefits.
- Promote DevOps practices and help implement automated testing and CI/CD pipelines.

Problem Solving & Troubleshooting
- Analyze and troubleshoot issues in production environments.
- Optimize existing systems and resolve performance bottlenecks.
- Ensure that solutions are designed with reliability, maintainability, and extensibility in mind.

Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field (or equivalent work experience).
- 8+ years of experience in software development with a strong focus on Java and related technologies.
- Proven experience as a tech lead, senior developer, or software engineer in Java-based application development.
- Expertise in Java frameworks like Spring, Hibernate, and Spring Boot.
- Experience with microservices architecture and cloud platforms.
- Strong experience with Kafka, RabbitMQ, and Postgres.
- Strong knowledge of RESTful APIs, databases (SQL/NoSQL), and caching technologies (Redis, Memcached).
- Familiarity with tools such as Maven, Git, Docker, and Kubernetes.
- Experience with Agile development methodologies (Scrum/Kanban).
- Strong analytical and problem-solving skills, with a passion for delivering high-quality software solutions.
- Excellent communication and leadership skills, with the ability to mentor and collaborate with cross-functional teams.

Skills: Kafka, leadership, Postgres, Java, RabbitMQ, DevOps, cloud platforms, Kubernetes, SQL/NoSQL databases, problem-solving, Spring Boot, Spring, microservices, RESTful APIs, Agile methodologies, Memcached, mentoring, Git, Docker, Redis, Elasticsearch, Maven, NoSQL, Hibernate, SQL, AWS

Posted 4 days ago

Apply

0.0 - 2.0 years

0 Lacs

Coimbatore, Tamil Nadu

On-site

Job Summary: The Technical Lead is responsible for leading a team of engineers in the design, implementation, maintenance, and troubleshooting of Linux-based systems. This role requires a deep understanding of Linux systems, network architecture, and software development processes. The Technical Lead will drive innovation, ensure system stability, and lead the team in delivering high-quality infrastructure solutions that align with the organization's goals.

Key Responsibilities:

Leadership and Team Management:
- Lead and mentor a team of Linux engineers, providing technical guidance and fostering professional growth.
- Manage workload distribution, ensuring that projects and tasks are completed on time and within scope.
- Collaborate with cross-functional teams to align IT infrastructure with organizational objectives.
- SLA & ITIL, inventory management.

System Design and Implementation:
- Architect, deploy, and manage robust Linux-based environments, including servers, networking, and storage solutions.
- Ensure the scalability, reliability, and security of Linux systems.
- Oversee the automation of system deployment and management processes using tools such as Ansible, Puppet, or Chef.
- Database management: MySQL, MongoDB, Elasticsearch, Postgres.

Maintenance and Troubleshooting:
- Lead efforts in monitoring, maintaining, and optimizing system performance.
- Proactively identify potential issues and implement solutions to prevent system outages.
- Resolve complex technical problems escalated from the support team.

Security and Compliance:
- Implement and maintain security best practices for Linux systems, including patch management, firewall configuration, and access controls.
- Ensure compliance with relevant industry standards and regulations (e.g., HIPAA, GDPR, PCI-DSS).

Documentation and Reporting:
- Develop and maintain comprehensive documentation of systems, processes, and procedures.
- Prepare and present regular reports on system performance, incidents, and improvement initiatives to senior management.

Continuous Improvement:
- Stay up to date with the latest Linux technologies, tools, and practices.
- Lead initiatives to improve the efficiency, reliability, and security of Linux environments.
- Drive innovation in infrastructure management, including the adoption of cloud technologies and containerization.

Required Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent work experience.
- 10+ years of experience in Linux system administration, with at least 2 years in a leadership or senior technical role.
- Deep understanding of Linux operating systems (RHEL, CentOS, Ubuntu) and associated technologies.
- Strong knowledge of networking principles, including TCP/IP, DNS, and firewalls.
- Experience with automation and configuration management tools (e.g., Ansible, Puppet, Chef).
- Proficiency in scripting languages (e.g., Bash, Python).
- Experience with virtualization (e.g., VMware, KVM) and containerization (e.g., Docker, Kubernetes).
- Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and hybrid cloud environments.
- Excellent problem-solving skills and the ability to work under pressure.
- Strong communication and interpersonal skills.

Job Types: Full-time, Permanent
Pay: Up to ₹100,000.00 per month
Benefits: Health insurance, Provident Fund
Ability to commute/relocate: Coimbatore, Tamil Nadu: Reliably commute or plan to relocate before starting work (Preferred)
Application Question(s): This is a work-from-office role with a 6-day work week.
Work Location: In person
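The scripting proficiency this posting asks for often means small monitoring scripts. A minimal sketch of one, assuming `df -P`-style output (the sample text and the 80% threshold are made up for illustration):

```python
# Sketch: parse `df -P`-style output and flag filesystems whose usage
# exceeds a threshold - the kind of small Python monitoring script the
# posting's scripting requirement refers to. Sample data is illustrative.

def over_threshold(df_output, limit=80):
    """Return (mount point, use%) pairs above `limit` percent."""
    alerts = []
    for line in df_output.strip().splitlines()[1:]:  # skip header row
        fields = line.split()
        use_pct = int(fields[4].rstrip("%"))  # 5th column: Capacity
        if use_pct > limit:
            alerts.append((fields[5], use_pct))  # 6th column: mount point
    return alerts

sample = """Filesystem 1024-blocks Used Available Capacity Mounted
/dev/sda1 100000 91000 9000 91% /
/dev/sdb1 500000 100000 400000 20% /data
"""
print(over_threshold(sample))  # [('/', 91)]
```

In a real script the input would come from `subprocess.run(["df", "-P"], capture_output=True)` and the alert list would feed a pager or monitoring hook.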

Posted 4 days ago

Apply

4.0 years

0 Lacs

India

On-site

About the Role: We're looking for a seasoned Software Engineer (SDE III) to join our Users Module team, where you'll design and build scalable, secure user management systems. This includes working on authentication, authorization, and complex permission hierarchies across a multi-tenant SaaS platform. You'll collaborate cross-functionally to deliver reliable, performant, and secure experiences for millions of users. This role is ideal for someone who thrives in product-oriented backend systems, has a strong full stack foundation, and is comfortable using AI tools to enhance their development speed and effectiveness.

Requirements:
- 4+ years in backend engineering, preferably building user management systems
- Proven work on authentication/authorization and IAM systems
- Experience with multi-tenant SaaS and enterprise user workflows & permissions

Required Technical Skills:
- Languages: TypeScript, JavaScript, Node.js
- Frameworks: NestJS, Express.js
- Databases: MongoDB, Firestore, SQL
- Authentication: OAuth 2.0, JWT, session management
- Cloud: Google Cloud Platform (GCP)
- Messaging: Google Cloud Pub/Sub
- DevOps: Git, CI/CD
- Search: Elasticsearch, ClickHouse
- Architecture: microservices, event-driven architecture, IAM, RBAC

Responsibilities:
- Write clean, maintainable code in TypeScript/JavaScript
- Build and maintain robust RESTful APIs for user management and authentication
- Design scalable multi-tenant user data models and schemas
- Implement OAuth, JWT, and secure session management
- Design and implement IAM v2 systems with RBAC and granular permissions
- Build features like user impersonation, delegation, and permission templates
- Create and manage token validation and session lifecycle flows
- Architect scalable models using MongoDB, Firestore, and SQL
- Integrate Elasticsearch for advanced user search and filtering
- Sync user data across multiple stores and implement audit logging
- Build secure auth flows with encryption, token validation, and data privacy controls
- Ensure compliance with GDPR and similar regulations
- Create audit trails and compliance reporting for user actions
- Optimize queries, implement caching strategies, and handle high-volume user operations
- Build systems that scale across enterprise-grade customer environments
- Partner with product, security, and platform teams to deliver features
- Collaborate with DevOps on deployment and infrastructure
- Mentor junior engineers and participate in architecture discussions and code reviews

Bonus Points:
- Experience with Firebase Admin SDK
- Familiarity with Istio and service mesh architecture
- Familiarity with GCP services or willingness to get hands-on with them
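The JWT signing and token-validation flow this posting centers on can be sketched with nothing but the standard library. This is an illustrative HS256 round trip only; a production system would use a vetted library (e.g. PyJWT or jsonwebtoken in Node), and the secret and claims below are made up:

```python
# Sketch: HS256 JWT sign/verify round trip using only the stdlib.
# Illustrative only - use a vetted JWT library in production.
import base64
import hashlib
import hmac
import json

def _b64(data: bytes) -> str:
    """URL-safe base64 without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, secret: bytes) -> str:
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = hmac.new(secret, signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64(sig)}"

def verify_jwt(token: str, secret: bytes) -> dict:
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64(hmac.new(secret, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):  # constant-time compare
        raise ValueError("bad signature")
    padded = payload + "=" * (-len(payload) % 4)  # restore b64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

token = sign_jwt({"sub": "user-42", "tenant": "acme"}, b"demo-secret")
print(verify_jwt(token, b"demo-secret")["sub"])  # user-42
```

A real implementation would additionally check `exp`/`iat` claims and the `alg` header before trusting the payload.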

Posted 4 days ago

Apply

1.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the job

About NxtWave
NxtWave is one of India's fastest-growing ed-tech startups, revolutionizing the 21st-century job market. NxtWave is transforming youth into highly skilled tech professionals through its CCBP 4.0 programs, regardless of their educational background. NxtWave was founded by Rahul Attuluri (ex-Amazon, IIIT Hyderabad), Sashank Reddy (IIT Bombay), and Anupam Pedarla (IIT Kharagpur). Supported by Orios Ventures, Better Capital, and Marquee Angels, NxtWave raised $33 million in 2023 from Greater Pacific Capital.

As an official partner of NSDC (under the Ministry of Skill Development & Entrepreneurship, Govt. of India) and recognized by NASSCOM, NxtWave has earned a reputation for excellence. Some of its prestigious recognitions include:
● Technology Pioneer 2024 by the World Economic Forum, one of only 100 startups chosen globally
● 'Startup Spotlight Award of the Year' by T-Hub in 2023
● 'Best Tech Skilling EdTech Startup of the Year 2022' by Times Business Awards
● 'The Greatest Brand in Education' in a research-based listing by URS Media
● NxtWave founders Anupam Pedarla and Sashank Gujjula were honoured in the 2024 Forbes India 30 Under 30 for their contributions to tech education

NxtWave breaks learning barriers by offering vernacular content for better comprehension and retention. NxtWave now has paid subscribers from 650+ districts across India. Its learners are hired by over 2000 companies, including Amazon, Accenture, IBM, Bank of America, TCS, Deloitte, and more.

Know more about NxtWave: https://www.ccbp.in
Read more about us in the news – Economic Times | CNBC | YourStory | VCCircle

Why NxtWave
As a Fullstack SDE1 at NxtWave, you
● Get first-hand experience of building applications and see them released quickly to NxtWave learners (within weeks)
● Get to take ownership of the features you build and work closely with the product team
● Work in a great culture that continuously empowers you to grow in your career
● Enjoy the freedom to experiment and learn from mistakes (Fail Fast, Learn Faster)
● Get first-hand experience in scaling the features you build as one of the fastest-growing edtech startups grows rapidly
● Build in a world-class developer environment by applying clean coding principles, code architecture, etc.

Responsibilities
● Design, implement, and ship user-centric features spanning frontend, backend, and database systems under guidance.
● Define and implement RESTful/GraphQL APIs and efficient, scalable database schemas.
● Build reusable, maintainable frontend components using modern state management practices.
● Develop backend services in Node.js or Python, adhering to clean-architecture principles.
● Write and maintain unit, integration, and end-to-end tests to ensure code quality and reliability.
● Containerize applications and configure CI/CD pipelines for automated builds and deployments.
● Enforce secure coding practices, accessibility standards (WCAG), and SEO fundamentals.
● Collaborate effectively with Product, Design, and engineering teams to understand and implement feature requirements.
● Own feature delivery from planning through production, and mentor interns or junior developers.

Qualifications & Skills
● 1+ years of experience building full-stack web applications.
● Proficiency in JavaScript (ES6+), TypeScript, HTML5, and CSS3 (Flexbox/Grid).
● Advanced experience with React (Hooks, Context, Router) or an equivalent modern UI framework.
● Hands-on with state management patterns (Redux, MobX, or custom solutions).
● Strong backend skills in Node.js (Express/Fastify) or Python (Django/Flask/FastAPI).
● Expertise in designing REST and/or GraphQL APIs and integrating with backend services.
● Solid knowledge of MySQL/PostgreSQL and familiarity with NoSQL stores (Elasticsearch, Redis).
● Experience using build tools (Webpack, Vite), package managers (npm/Yarn), and Git workflows.
● Skilled in writing and maintaining tests with Jest, React Testing Library, Pytest, and Cypress.
● Familiar with Docker, CI/CD tools (GitHub Actions, Jenkins), and basic cloud deployments.
● Product-first thinker with strong problem-solving, debugging, and communication skills.

Qualities we'd love to find in you:
● The attitude to always strive for the best outcomes and an enthusiasm to deliver high-quality software
● Strong collaboration abilities and a flexible & friendly approach to working with teams
● Strong determination with a constant eye on solutions
● Creative ideas with a problem-solving mindset
● Openness to receiving objective criticism and improving upon it
● Eagerness to learn and zeal to grow
● Strong communication skills are a huge plus

Work Location: Hyderabad

Posted 4 days ago

Apply

0.0 - 12.0 years

0 Lacs

Hinjewadi, Pune, Maharashtra

On-site

Hinjewadi, Pune, Maharashtra, India | Department: Delivery Unit 2 | Job posted on Aug 20, 2025 | Employee Type: Permanent | Experience range: 8 - 12 years

Job Title: QA Specialist
Experience Level: 8-12 Years
Location: Pune (Hinjewadi)

Areas of Responsibility
- Design and implement tests, debug, and define corrective actions.
- Improve the software development process by detecting and preventing errors in production.
- Estimate, prioritize, plan, and coordinate quality testing activities.
- Review quality specifications and technical design documents to provide timely and meaningful feedback.

Skills and Qualifications
- 4+ years of testing/test automation experience, along with basic testing of distributed applications
- 2+ years of experience automating test cases end-to-end for distributed applications
- Good understanding of distributed applications and their various points of failure
- Thorough knowledge of the scalability limits of a given application, considering the load and system characteristics
- Good understanding of Python scripting along with data handling frameworks such as Pandas
- Ability to troubleshoot web applications with Linux backend app servers or Linux containers, including distributed applications like Elasticsearch or Spark clusters
- Good understanding of NLP LLMs such as Llama or Mistral is a plus [trainable]
- Experience in testing a RAG chat application is a plus
- Experience with verification of a fine-tuned LLM for accuracy and robustness of responses is desirable
- Experience in writing functional, end-to-end, regression, and integration tests
- Strong testing and automation skills: Core Java and Python programming (for test automation)
- Python frameworks: Pandas, statistics, NumPy
- Test automation experience using a Java-based automation framework such as Selenium
- Testing of distributed applications
- Experience in test planning, test cases, and test metrics
- Understanding of security testing
- Good understanding of performance testing
- Backend test automation
- Frontend test automation
- Ability to learn new technologies with minimal guidance

Good to Have:
- Python, Python frameworks (NumPy, Pandas, statistics)
- LLMs/AI: Llama, OpenAI, Mistral; LLM frameworks such as PyTorch, LangChain, LlamaIndex, spaCy, embedding vectors [trainable]
- RAG chat application testing experience
- PostgreSQL
- Big data testing experience
- Testing experience on Spark/Elasticsearch-based applications
- Experience with container-based distributed applications
- Experience testing applications in a cloud environment (preferably AWS)
- OAuth 2.0
- Understanding of the nuances of testing web-based applications across different browsers, platforms, and display form factors
- Experience in performance testing of web applications
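The test-metrics requirement in this posting usually amounts to rolling raw automation results up into rates and failure counts. A minimal sketch (the record layout and suite names are made up for illustration):

```python
# Sketch: summarize automated test-run results into simple test metrics.
# The result-record layout and suite names are illustrative assumptions.

def summarize(results):
    """Compute overall pass rate and per-suite failure counts."""
    total = len(results)
    passed = sum(1 for r in results if r["status"] == "pass")
    failures = {}
    for r in results:
        if r["status"] == "fail":
            failures[r["suite"]] = failures.get(r["suite"], 0) + 1
    return {
        "pass_rate": passed / total if total else 0.0,
        "failures_by_suite": failures,
    }

runs = [
    {"suite": "api", "status": "pass"},
    {"suite": "api", "status": "fail"},
    {"suite": "ui", "status": "pass"},
    {"suite": "ui", "status": "pass"},
]
print(summarize(runs)["pass_rate"])  # 0.75
```

The same aggregation is a one-liner with Pandas (`groupby("suite")` on a results DataFrame), which is why the posting pairs test metrics with data-handling frameworks.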

Posted 4 days ago

Apply

2.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success. Why Join Us? To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated and know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time-off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We’re building a more open world. Join us. Machine Learning Engineer II An established performer who consistently applies software an data engineering principles to business contexts, proactively identifies inefficiencies in existing systems, leads small, well-defined projects and supports complex projects. Experience Who you are: 2+ years for Bachelor's 0-2 years for Master's Developed software in a team environment of at least 5 engineers (agile, version control, etc.). Built and maintained a software project/product in production environments in public/hybrid cloud infrastructure. Has knowledge of or has worked in products using streaming technologies. Bachelor's Or Master's Or Equivalent Experience. Technical Degree Preferred. Demonstrates strength in a language, moderate familiarity in other applicable languages (Scala, Java, etc.). Is familiar with more than one data stores or API access patterns and integration such as Kafka, Hive, ElasticSearch, Redis, PostgreSQL, etc. Is familiar with associated technologies within their specialization as part of the wider ecosystem. Understands how to build REST endpoints and has knowledge of Python web services. 
Knows how to build, publish and deploy Docker images. Understands how projects fit together within their team. Understands moderately complex systems. Tests and monitors their own code. Understands testing and monitoring tools. Debugs applications. Is familiar with creating Airflow DAGs. Is familiar with Kafka and how to produce or consume messages on EGSP from an application such as Python, Kstreams, etc. Is familiar with advanced machine learning libraries such as PyTorch and TensorFlow/Keras. Is familiar with a variety of common packages in various domains (NLP, vision, forecasting) such as spaCy, OpenCV, CausalML, Prophet, etc. Is familiar with hyperparameter tuning. Has an intuition for different approaches to model training, parameters to use, optimization approach, etc. Is familiar with basic model performance evaluation metrics, such as how to interpret a ROC curve. Is familiar with the bias-variance tradeoff. Understands how team goals fit a business need. What You Will Do Collaborates with team members to co-develop and solve problems. Proactively reaches out to meet peers across the environment and collaborates to solve problems. Takes advantage of opportunities to build new technical expertise in a specific engineering area. Seeks knowledge from subject matter experts when needed. Understands the importance of system and technology integration and the basic features and facilities involved in the integration process. Develops and tests standard software applications and related programs and procedures to ensure they meet design requirements. Applies software design principles, data structures and/or design patterns and computer science fundamentals to write code that is clean, maintainable, optimized, and modular with good naming conventions. Applies knowledge of database design to solve data requirements. Develops data applications using distributed ETL frameworks.
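One of the evaluation basics mentioned above, interpreting a ROC curve, has a handy equivalent definition: the area under the curve is the probability that a randomly chosen positive example is scored above a randomly chosen negative one (ties count half). A brute-force pure-Python sketch with made-up data:

```python
def roc_auc(labels, scores):
    """AUC as a rank statistic: the fraction of (positive, negative)
    pairs in which the positive example gets the higher score,
    counting ties as half a win."""
    pairs = wins = 0.0
    for li, si in zip(labels, scores):
        if li != 1:
            continue
        for lj, sj in zip(labels, scores):
            if lj != 0:
                continue
            pairs += 1
            if si > sj:
                wins += 1
            elif si == sj:
                wins += 0.5
    return wins / pairs

# A perfect ranker scores every positive above every negative.
print(roc_auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # 1.0
# An inverted ranker approaches 0.0; random scoring hovers near 0.5.
print(roc_auc([0, 0, 1, 1], [0.9, 0.8, 0.2, 0.1]))  # 0.0
```

The quadratic pair loop is fine for an intuition check; production metrics libraries compute the same quantity from sorted scores in O(n log n).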
Helps coordinate stakeholder input and collaboration efforts when developing solutions to issues. Thinks broadly and understands how, why and when policies/processes are standardized and when they differ across the organization. Executes tasks and/or provides data to support implementation of holistic solutions that forge linkages between structure, people, process and technology. Applies formal training methods to current workload. Feels comfortable challenging authority/the status quo. Maintains awareness of advancements, innovations and changes in technologies (specifically relating to machine learning engineering, ML platforms, DS models, enterprise information management, business intelligence and data science). Reports clearly on current work status. Asks challenging questions when empowered to do so. Accommodation requests If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. We are proud to be named as a Best Place to Work on Glassdoor in 2024 and be recognized for award-winning culture by organizations like Forbes, TIME, Disability:IN, and others. Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50 Employment opportunities and job offers at Expedia Group will always come from Expedia Group’s Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you’re confident who the recipient is.
Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs. Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.

Posted 4 days ago

Apply

2.0 - 4.0 years

0 Lacs

pune/pimpri-chinchwad area

On-site

Why it’s worth it: Are you a passionate Software Engineer who has a proven track record of solving complex problems and being at the forefront of innovation? Pursuing a career at ReliaQuest will allow you to write code and manipulate data in ways that have never been done before, driving automation of threat detection and response for one of the world’s fastest growing industries. You will lead the creation, testing, and deployment of cutting-edge security technology to enterprise customers across the globe. Above all else, this role will allow you to work and learn from some of the most talented people in the business as well as have a direct contribution to the growth and success of RQ. The everyday hustle: Research and develop creative solutions across a wide range of cutting-edge technologies to continuously evolve our platform, GreyMatter. Create REST APIs and integrations between various products to improve and automate our customers’ threat detection. Manage the continuous integration and deployment processes of complex technologies. Perform code reviews to ensure consistent improvement. Proactively automate and improve all stages of the software development lifecycle. Interface closely with various parts of the business, both internally and externally, to ensure all users are leveraging the product with ease and to its full potential. Provide support to other team members as well as cultivate a culture of constant collaboration. Do you have what it takes? 2-4 years of software development experience in the following languages and/or technologies: Python, JS, React, Angular, Java, C#, MySQL, Elasticsearch or equivalent. Must be proficient in the English language, both written and verbal. What makes you uncommon? Hands-on experience with one or more of the following technologies (Elasticsearch, Kafka, Apache Spark, Logstash, Hadoop/Hive, TensorFlow, Kibana, Athena/Presto/BigTable, Angular, React). Experience with cloud platforms such as AWS, GCP, or Azure.
Solid understanding of unit testing, continuous integration and deployment practices. Experience with Agile Methodology. Higher education/relevant certifications.
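Since the role above leans on Elasticsearch for threat detection, a hedged sketch of what such a detection query can look like may help. The field names below (event.action, user.name, @timestamp) are hypothetical, loosely ECS-style, not ReliaQuest's actual mapping; the function only builds the request body a client would POST to Elasticsearch's _search API:

```python
def failed_login_query(user, window="15m", threshold=5):
    """Build a bool/filter query body: failed logins for one user
    within a recent time window. Filter clauses skip scoring, which
    is the idiomatic shape for exact-match detection queries."""
    return {
        "query": {
            "bool": {
                "filter": [
                    {"term": {"event.action": "login_failed"}},
                    {"term": {"user.name": user}},
                    {"range": {"@timestamp": {"gte": f"now-{window}"}}},
                ]
            }
        },
        "size": threshold,
    }

body = failed_login_query("alice")
print(body["query"]["bool"]["filter"][1])  # {'term': {'user.name': 'alice'}}
```

A detection rule would then compare the hit count against the threshold and raise an alert; the query body itself is just a plain dict, which makes it easy to unit-test.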

Posted 4 days ago

Apply


8.0 years

0 Lacs

noida, uttar pradesh, india

On-site

Job Description Level AI was founded in 2019 and is a Series C startup headquartered in Mountain View, California. Level AI revolutionises customer engagement by transforming contact centres into strategic assets. Our AI-native platform leverages advanced technologies such as Large Language Models to extract deep insights from customer interactions. By providing actionable intelligence, Level AI empowers organisations to enhance customer experience and drive growth. Consistently updated with the latest AI innovations, Level AI stands as the most adaptive and forward-thinking solution in the industry. Position Overview We seek an experienced Staff Software Engineer to lead the design and development of our data warehouse and analytics platform in addition to helping raise the engineering bar for the entire technology stack at Level AI, including applications, platform, and infrastructure. They will actively collaborate with team members and the wider Level AI engineering community to develop highly scalable and performant systems. They will be a technical thought leader who will help drive solving complex problems of today and the future by designing and building simple and elegant technical solutions. They will coach and mentor junior engineers and drive engineering best practices. They will actively collaborate with product managers and other stakeholders both inside and outside the team. What you'll get to do at Level AI (and more as we grow together): Design, develop, and evolve data pipelines that ingest and process high-volume data from multiple external and internal sources. Build scalable, fault-tolerant architectures for both batch and real-time data workflows using tools like GCP Pub/Sub, Kafka and Celery. Define and maintain robust data models with a focus on domain-oriented design, supporting both operational and analytical workloads. Architect and implement data lake/warehouse solutions using Postgres and Snowflake.
Lead the design and deployment of workflow orchestration using Apache Airflow for end-to-end pipeline automation. Ensure platform reliability with strong monitoring, alerting, and observability for all data services and pipelines. Collaborate closely with other internal product & engineering teams to align data platform capabilities with product and business needs. Own and enforce data quality, schema evolution, data contract practices, and governance standards. Provide technical leadership, mentor junior engineers, and contribute to cross-functional architectural decisions. We'd love to explore more about you if you have: 8+ years of experience building large-scale data systems, preferably in high-ingestion, multi-source environments. Strong system design, debugging, and performance tuning skills. Strong programming skills in Python and Java. Deep understanding of SQL (Postgres, MySQL) and data modeling (star/snowflake schemas). Hands-on experience with streaming platforms like Kafka and GCP Pub/Sub. Expertise with Airflow or similar orchestration frameworks. Solid experience with Snowflake, Postgres, and distributed storage design. Familiarity with Celery for asynchronous task processing. Comfortable working with Elasticsearch for data indexing and querying. Exposure to Redash, Metabase, or similar BI/analytics tools. Proven experience deploying solutions on cloud platforms like GCP or AWS. Preferred Attributes: Experience with data governance and lineage tools. Demonstrated ability to handle scale, reliability, and incident response in data systems. Excellent communication and stakeholder management skills. Passion for mentoring and growing engineering talent. (ref:hirist.tech)
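The data-contract and data-quality practices listed above can be sketched in a few lines of plain Python. The fields in CONTRACT are hypothetical, not Level AI's real schema; the point is that violations are collected rather than raised, so a pipeline can route bad rows to a dead-letter store instead of halting:

```python
# A toy data contract: each record must carry these fields with these types.
# Field names are illustrative only.
CONTRACT = {"call_id": str, "duration_sec": int, "agent": str}

def validate(record, contract=CONTRACT):
    """Return a list of contract violations for one record.
    An empty list means the record conforms."""
    errors = []
    for field, ftype in contract.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], ftype):
            errors.append(
                f"{field}: expected {ftype.__name__}, "
                f"got {type(record[field]).__name__}"
            )
    return errors

print(validate({"call_id": "c-1", "duration_sec": 42, "agent": "a7"}))  # []
print(validate({"call_id": "c-2", "duration_sec": "42"}))
```

Schema-evolution tooling generalizes the same idea: old and new contracts are compared, and only additive, type-compatible changes pass.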

Posted 4 days ago

Apply

0 years

0 Lacs

greater kolkata area

Remote

Job Description About The Job In this role you will be challenged to apply your skills and knowledge to provide direct support for critical environments and help ensure the stability of services by performing dedicated support activities, engaging in automation activities, issue remediation, and performing root cause analysis (RCA) to maximize uptime and reduce costs for enterprise customers. You will consistently rely on your knowledge of production support and your background in dev-ops processes, including incident/change/problem management, call triaging, and critical issue resolution procedures. You will collaborate with our customers, internal teams, architects, and Site Reliability Engineering teams to troubleshoot complex and challenging issues. This position extends well beyond product support as you will be a trusted partner to our enterprise customers by offering developer-to-developer assistance for cloud-native service development and architecture; you will work closely with junior and senior engineers and developers, fixing and improving code and identifying automation opportunities. You'll partner closely with development engineering teams and work alongside some of the brightest developers in the open source community. You will also work with Red Hat’s key strategic cloud partners to jointly solve customer Multi-Cloud issues. What will you do? 
Support Red Hat’s customers using our OpenShift Ecosystem running on different Hybrid Cloud solutions (public providers including Azure, AWS, Google and also on premise deployments) Actively support and provide guidance to other Red Hat engineers dealing with more complex issues Resolve Red Hat’s customers’ issues across different components such as Logging (Elasticsearch, FluentD, Kibana), Monitoring, Networking (SDN/OVN Kubernetes), Storage, Service Mesh/Istio Be a regional / global subject-matter expert on one of the above-mentioned OpenShift components Diagnose and troubleshoot complex technical problems, often going through product code, to identify the root cause Constantly work on your skills so you can keep up with the latest technology trends as well as new products and features in Red Hat’s portfolio Build on your expertise by going through intensive onboarding, which includes taking RHEL and OpenShift Certifications, such as Red Hat Certified Engineer (RHCE), Containers & Kubernetes (DO180)/Operating a Production Kubernetes Cluster (DO280), Scaling Kubernetes Deployments in the Enterprise (DO380) Consult on customer cases with in-house support engineers and developers to promote creative solutions and improve customer experience Contribute to the global Red Hat knowledge management system As a part of your professional development, you can collaborate with your colleagues on different projects outside your day-to-day work, contribute to the success of the global team and/or contribute to the open source community and product code What will you bring? 
Excellent communication skills, ability to analyze and effectively resolve customers’ issues Previous experience configuring and troubleshooting OpenShift or Kubernetes and one of its components such as (Logging, Monitoring, Networking, Storage, Service Mesh) Previous professional Linux system administration experience Passion for problem-solving and investigation of customer and technical problems Ability to work with conflicting priorities, take initiative, and maintain a customer-centric focus Team player attitude, open-minded and collaborative approach Ability to read code in one of the following languages (Python / C / C++ / GoLang, Java), interest in deepening the knowledge of GoLang Practical experience with developer tools for code management (git, GitHub, GitLab) and bug tracking systems (Bugzilla, Jira, etc.) Thirst for knowledge, ability to further grow and enhance your technical/interpersonal skills About Red Hat Red Hat is the world’s leading provider of enterprise open source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact. Inclusion at Red Hat Red Hat’s culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone. When this is realized, it empowers people from different backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. 
Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions that compose our global village. Equal Opportunity Policy (EEO) Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law. Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee. Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email application-assistance@redhat.com. General inquiries, such as those regarding the status of a job application, will not receive a reply. Job Details Role Level: Not Applicable Work Type: Full-Time Country: India City: India Company Website: http://www.redhat.com Job Function: Engineering Company Industry/ Sector: Software Development and IT Services and IT Consulting What We Offer About The Company Searching, interviewing and hiring are all part of professional life. The TALENTMATE Portal idea is to fill and help professionals doing one of them by bringing together the requisites under One Roof. Whether you're hunting for your Next Job Opportunity or Looking for Potential Employers, we're here to lend you a Helping Hand. 

Posted 4 days ago

Apply


2.0 years

0 Lacs

gurugram, haryana, india

On-site

Company Description About Holiday Tribe Holiday Tribe is a Great Place To Work® Certified™, seed-stage VC-funded travel-tech brand based in Gurugram. We specialize in crafting unforgettable leisure travel experiences by integrating advanced technology, leveraging human expertise, and prioritizing customer success. With holidays curated across 30+ destinations worldwide, partnerships with renowned tourism boards, and recognition as the Emerging Holiday Tech Company at the India Travel Awards 2023, Holiday Tribe is transforming the travel industry. Our mission is to redefine how Indians experience holidays—making travel planning faster, smarter, and more personalized, ensuring every trip is truly seamless and unforgettable. Role Description Design, deploy, and manage scalable infrastructure on AWS and/or GCP for production and staging environments. Manage Kubernetes clusters for container orchestration, ensuring high availability and performance of services. Build and maintain CI/CD pipelines using Jenkins or GitHub Actions to streamline deployments. Implement observability stacks including Prometheus, Grafana, New Relic, and ELK (Elasticsearch, Logstash, Kibana) for proactive monitoring and logging. Collaborate with engineering teams to ensure secure, fast, and cost-efficient infrastructure for backend services and AI orchestration. Own incident response workflows, set up alerts, and help improve system reliability and root cause analysis practices. Automate common operational tasks and contribute to internal tooling to reduce toil. Qualifications 2+ years of hands-on DevOps experience, preferably in a product-based startup. Strong proficiency in cloud platforms (AWS/GCP) including VPCs, EC2/GKE, IAM, etc. Hands-on experience managing Kubernetes clusters in production. Proficiency in CI/CD systems, especially Jenkins (bonus if GitHub Actions or GitLab CI). Deep understanding of observability tools like Prometheus, Grafana, New Relic, and ELK Stack. 
Experience with scripting (Bash, Python) and infrastructure-as-code tools (Terraform, Helm is a plus). Good communication skills and ability to work closely with developers and product teams. Perks Help define backend architecture of a next-gen travel product. Work closely with a product and AI team solving real-world travel use cases. Freedom to experiment and grow as an engineer. Competitive compensation and equity. Why Join Us? Competitive salary and performance-based incentives. Opportunities for growth and career development in a rapidly expanding company. A dynamic and collaborative work environment with a focus on innovation and customer satisfaction.
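As a rough illustration of the alerting discipline mentioned above (not Holiday Tribe's actual setup), the snippet below mimics the `for:` clause of a Prometheus alerting rule in plain Python: the alert fires only after the error rate stays above a threshold for several consecutive scrapes, which avoids paging on a single noisy sample. Threshold values are illustrative:

```python
from collections import deque

class ErrorRateAlert:
    """Fire only when the error rate exceeds `threshold` for
    `for_samples` consecutive observations, echoing how a Prometheus
    rule with a `for:` duration suppresses transient spikes."""

    def __init__(self, threshold=0.05, for_samples=3):
        self.threshold = threshold
        # Fixed-size window: only the most recent samples matter.
        self.window = deque(maxlen=for_samples)

    def observe(self, errors, requests):
        rate = errors / requests if requests else 0.0
        self.window.append(rate > self.threshold)
        # Fire only when the window is full and every sample breached.
        return len(self.window) == self.window.maxlen and all(self.window)

alert = ErrorRateAlert()
for errors, total in [(1, 100), (8, 100), (9, 100), (7, 100)]:
    print(alert.observe(errors, total))  # False, False, False, True
```

In a real stack the equivalent logic lives in the Prometheus rule evaluator and Alertmanager handles routing and deduplication; the sketch only shows why sustained breaches, not single samples, should page.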

Posted 4 days ago

Apply

0 years

0 Lacs

hyderabad, telangana, india

On-site

Introduction In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your Role And Responsibilities As an Associate Software Developer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In This Role, Your Responsibilities May Include Implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results Preferred Education Master's Degree Required Technical And Professional Expertise Develop/Convert the database (Hadoop to GCP) of the specific objects (tables, views, procedures, functions, triggers, etc.) 
from one database to another database platform. Implementation of a specific Data Replication mechanism (CDC, file data transfer, bulk data transfer, etc.). Expose data as APIs Participation in modernization roadmap journey Analyze discovery and analysis outcomes Lead discovery and analysis workshops/playbacks Identification of the applications' dependencies, source, and target database incompatibilities. Analyze the non-functional requirements (security, HA, RTO/RPO, storage, compute, network, performance bench, etc.). Prepare the effort estimates, WBS, staffing plan, RACI, RAID, etc. Lead the team to adopt the right tools for various migration and modernization methods Preferred Technical And Professional Experience You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions Ability to communicate results to technical and non-technical audiences
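One of the replication mechanisms named above, CDC (change data capture), reduces to a simple idea: diff two keyed snapshots and emit insert/update/delete events. A toy pure-Python sketch follows; the table contents are made up, and real CDC tools read the database's transaction log rather than comparing full snapshots:

```python
def diff_snapshots(source, target):
    """Compare two snapshots keyed by primary key and emit the change
    events a replication feed would need to apply to make `target`
    match `source`: inserts, updates, then deletes."""
    events = []
    for key, row in source.items():
        if key not in target:
            events.append(("insert", key, row))
        elif target[key] != row:
            events.append(("update", key, row))
    for key in target:
        if key not in source:
            events.append(("delete", key, None))
    return events

# Hypothetical rows keyed by an integer primary key.
src = {1: {"city": "hyderabad"}, 2: {"city": "pune"}}
tgt = {2: {"city": "mumbai"}, 3: {"city": "noida"}}
print(diff_snapshots(src, tgt))
# [('insert', 1, {'city': 'hyderabad'}), ('update', 2, {'city': 'pune'}), ('delete', 3, None)]
```

Snapshot diffing like this is how bulk reconciliation jobs verify a replica; log-based CDC achieves the same event stream continuously and without rescanning the tables.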

Posted 4 days ago

Apply

2.0 years

0 Lacs

chennai, tamil nadu, india

On-site

Summary
The candidate will be responsible for analysis, coding, and building the back-end stack. In this role, you will develop and enable new features that enhance the Crypto platform to meet business needs.

Day-to-Day Responsibilities
Hands-on development using Java, Spring Core, and Spring Batch.
Strong understanding and application of object-oriented design and development principles.
Solid analytical, debugging, and problem-solving skills.
Experience developing data-driven applications using RDBMS (Oracle, DB2, etc.), with strong data architecture, SQL development, and data modeling skills.
Familiarity with scripting languages (Unix Shell, Perl, Python) is a plus.
Knowledge of data modeling and experience with relational databases, Elasticsearch (Kibana), and Hadoop.
Proficiency in software design patterns, data structures, and algorithms.
Experience with REST APIs, Web Services, JMS, and build tools.
Exposure to application performance tuning and unit testing (including tools).
Skilled in industry-standard development tools (IDE, version control, continuous integration).
Strong communication skills, with the ability to convey complex product requirements and technical concepts.
Strong time management skills and the ability to handle concurrent priorities.
Ability to write efficient, reusable, and clean code without unnecessary complication.
Proven ability to manage workload efficiently and resolve time-critical issues professionally.
Domain expertise in online payments and related areas is a plus.
Ability to mentor junior engineers by providing technical and business guidance.

Requirements
2-8 years of experience as a backend Java developer.
2+ years of experience developing software, primarily with Java.

Required Skills
Strong development skills in Java, Scala, Spark, and Raptor.
Strong skills in Spring Boot, Hibernate, REST, Maven, GitHub, and other open-source Java libraries.
Excellent problem-solving abilities and a strong understanding of the software development/delivery lifecycle.
Proven track record of working on real-world projects and delivering complex software from concept to production, with a focus on scalability, reliability, and performance.
Good knowledge of data structures and algorithm concepts, as well as database tuning and query optimization.
Ability to identify product/functionality gaps and collaborate with internal product and technology teams to define the development necessary to support solution delivery.
Strong debugging and problem-resolution skills, with a focus on automation and test-driven development.
Ability to work in a fast-paced, iterative development environment.

Top 3 Must-Have Skills
Java and Spring knowledge is a must.
5+ years of core Java development.
Hands-on experience with industry tools such as GitHub, Harness CI/CD, JUnit, and SQL/Oracle.
#AditiIndia # 25-21788

Posted 4 days ago

Apply

5.0 years

0 Lacs

delhi cantonment, delhi, india

On-site

Position Title: DevSecOps Engineer
Position Type: Regular - Full-Time
Position Location: New Delhi
Grade: Grade 04
Requisition ID: 32005

Job Purpose
Reporting to the Director, DevSecOps & SRE, the DevSecOps Engineer will be responsible for designing, implementing, and monitoring enterprise-grade, secure, fault-tolerant infrastructure, and for defining and evolving build & release best practices by working within teams and educating other stakeholder teams. In this role we believe you bring experience of operations and security using DevOps, with strong analytical and automation skills that enable you to deliver the expected benefits to the business and digital products. Building and deploying distributed applications and big data pipelines in the cloud excites you. You will be working with GCP and AWS; Jenkins, Groovy scripting, shell scripting, Terraform, Ansible, or equivalents are among the tools you have used in the past. This is an exciting opportunity to influence and build the DevSecOps framework for a leading manufacturing platform in the autonomous buildings space, while working with the latest technologies in a cloud-based environment in a multi-disciplinary team with platform architects, tech leads, data scientists, data engineers, and insight specialists.

Job Responsibilities
Design, implement, and monitor enterprise-grade, secure, fault-tolerant infrastructure.
Define and evolve build & release best practices by working within teams and educating other stakeholder teams; these best practices should support traceability and auditability of change.
Ensure continuous availability of the DevOps tools supporting SCM and release management, including source control, containerization, continuous integration, and change management (Jenkins, Docker, JIRA, SonarQube, Terraform, Google/Azure/AWS Cloud CLI).
Implement automated build and release pipeline frameworks.
Implement DevSecOps tools and quality gates with SLOs.
Implement SAST, DAST, IAST, and OSS tools in CI/CD pipelines.
Implement automated change management policies in the pipeline from Dev to Prod.
Work with cross-functional, co-located teams on the design, development, and implementation of enterprise-scalable features related to developer productivity, environment monitoring, and self-healing, and facilitate autonomous delivery teams.
Build infrastructure automation tools and frameworks leveraging Docker and Kubernetes.
Operate as a technical expert on DevOps infrastructure projects pertaining to containerization, systems management, design, and architecture.
Perform performance analysis and optimization, monitoring and problem resolution, upgrade planning and execution, and process creation and documentation.
Integrate newly developed and existing applications into private, public, and hybrid cloud environments.
Automate deployment pipelines in a scalable, secure, and reliable manner.
Leverage application monitoring tools to troubleshoot and diagnose environment issues.
Foster a culture of automation where any repetitive work is automated.
Work closely with Cloud Infrastructure and Security teams to ensure organizational best practices are followed.
Translate non-functional requirements of development, security, and operations architectures into a design that can be implemented using the chosen set of software for the project.
Own the technical design and implementation for one or more software stacks of the DevSecOps team.
Design and implement the distributed code repository.
Implement automation pipelines to support code compilation, testing, and deployment into the software components of the entire solution.
Integrate monitoring of all software components in the solution, and mine the data streams for actionable events to remediate issues.
Implement configuration management pipelines to standardize environments.
Integrate DevSecOps software with credentials management tools.
Create non-functional test scenarios for verifying the DevSecOps software setup.

Key Qualifications & Experience
At least 5 years of relevant working experience in DevSecOps, task automation, or GitOps.
Demonstrated proficiency in installation, configuration, or implementation of one or more of the following: Jenkins, Azure DevOps, Bamboo, or software of similar capability; GitHub, GitLab, or similar; Jira, Asana, Trello, or similar; Ansible, Terraform, Chef Automate, or similar; Flux CD or similar; any test automation software; any service virtualization software.
Operating system administration experience with Ubuntu, Debian, Alpine, RHEL.
Technical documentation writing experience.
DevOps Engineering certification for on-premises or public cloud is advantageous.
Experience with work planning and effort estimation is an advantage.
Strong problem-solving and analytical skills.
Strong interpersonal, written, and verbal communication skills.
Highly adaptable to changing circumstances; interest in continuously learning new skills and technologies.
Experience with programming and scripting languages (e.g. Java, C#, C++, Python, Bash, PowerShell).
Experience with incident and response management.
Experience with Agile and DevOps development methodologies.
Experience with container technologies and supporting tools (e.g. Docker Swarm, Podman, Kubernetes, Mesos).
Experience working in cloud ecosystems (Microsoft Azure, AWS, Google Cloud Platform).
Experience with configuration management systems (e.g. Puppet, Ansible, Chef, Salt, Terraform).
Experience with continuous integration/continuous deployment tools (e.g. Git, TeamCity, Jenkins, Artifactory).
Experience in GitOps-based automation is a plus.
Experience with GitHub Actions, GitHub security features, and GitHub Copilot.
BE/B.Tech/MCA or an equivalent degree in Computer Science, or related practical experience.
Must have 5+ years of working experience with Jenkins, GCP (or AWS/Azure), and Unix/Linux.
Must have experience with automation/configuration management tools (Jenkins with Groovy scripting, Terraform, Ansible, or an equivalent).
Must have experience with Kubernetes (GKE, kubectl, Helm) and containers (Docker).
Must have experience with JFrog Artifactory and SonarQube.
Extensive knowledge of institutionalizing Agile and DevOps tools, including but not limited to Jenkins, Subversion, Hudson, etc.
Networking skills (TCP/IP, SSL, SMTP, HTTP, FTP, DNS, and more).
Hands-on experience with source code management tools such as Git, Bitbucket, SVN, etc.
Working experience with monitoring tools such as Grafana, Prometheus, Elasticsearch, Splunk, or other monitoring tools/processes.
Experience with enterprise high-availability platforms and network and security on GCP.
Knowledge of and experience with the Java programming language.
Experience working on large-scale distributed systems, with a deep understanding of design impacts on performance, reliability, operations, and security, is a big plus.
Understanding of self-healing/immutable microservice-based architectures, cloud platforms, clustering models, and networking technologies.
Great interpersonal and communication skills.
Self-starter, able to work well in a fast-paced, dynamic environment with minimal supervision.
Must have public cloud provider certifications (Azure, GCP, or AWS); a CNCF certification is a plus.

Other Information
Travel: as required. The job is primarily performed in a hybrid office environment.
McCain Foods is an equal opportunity employer. We see value in ensuring we have a diverse, antiracist, inclusive, merit-based, and equitable workplace. As a global family-owned company we are proud to reflect the diverse communities around the world in which we live and work. We recognize that diversity drives our creativity, resilience, and success and makes our business stronger.
McCain is an accessible employer. If you require an accommodation throughout the recruitment process (including alternate formats of materials or accessible meeting rooms), please let us know and we will work with you to meet your needs.
Your privacy is important to us. By submitting personal data or information to us, you agree this will be handled in accordance with the Global Employee Privacy Policy.
Job Family: Information Technology
Division: Global Digital Technology
Department: I and O Project Delivery
Location(s): IN - India : Haryana : Gurgaon
Company: McCain Foods(India) P Ltd
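The "quality gates with SLOs" responsibility in the listing above can be illustrated with a minimal sketch: a pipeline step that fails the build when line coverage falls below a threshold. The JSON report shape and the 80% threshold are assumptions for illustration, not McCain's actual tooling.

```python
import json
import sys

def coverage_gate(report_json: str, slo: float = 80.0) -> bool:
    """Return True if line coverage meets the SLO; report format is hypothetical."""
    report = json.loads(report_json)
    pct = 100.0 * report["covered_lines"] / report["total_lines"]
    print(f"coverage {pct:.1f}% (SLO {slo}%)")
    return pct >= slo

# A CI stage would parse the real coverage report and exit non-zero on failure,
# which is what marks the pipeline stage as failed.
sample = json.dumps({"covered_lines": 857, "total_lines": 1000})
if not coverage_gate(sample):
    sys.exit(1)
```

The same pattern (parse a tool's report, compare against an SLO, exit non-zero) generalizes to SAST/DAST findings counts or other gate metrics.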

Posted 4 days ago

Apply

4.0 years

0 Lacs

pune, maharashtra, india

On-site

Education & Experience
A professional degree in Computer Science from a reputable institution, backed by a consistent academic record.
A knack for problem-solving, data structures, and algorithms.
Proficiency in Elasticsearch.
4+ years of hands-on development experience, primarily in building products for large enterprises.
Exceptional communication skills.
Mastery in Java programming; familiarity with Python is a plus.
Experience with Spring Boot.
Practical knowledge of one or more cloud-based technologies (e.g., Elasticsearch, Storm, Hazelcast, MongoDB, Ceph, Kafka) is highly desirable.
Expertise in building concurrent and/or parallelized, highly performant, scalable applications.
A track record of identifying and addressing complex issues in scalable deployments.
Exposure to Service-Oriented Architecture (SOA) and Test-Driven Development (TDD) is an added advantage.

Roles & Responsibilities
Dive deep into technical aspects (analysis, design, and implementation) as required.
Take complete ownership of features within the product.
Engage in debates and detailed discussions about functional and non-functional requirements with our Product Management team.
Collaborate with the team to design solutions, seeking stakeholder input before implementation.
Create essential artifacts such as functional specifications and detailed designs for your assigned features.
Implement intricate features with an unwavering commitment to quality, following the Test-Driven Development (TDD) process.
Maintain open lines of communication, promptly reporting risks and progress to your supervising manager.
Share your expertise and mentor team members.
Provide support by troubleshooting and creating Root Cause Analysis (RCA) for production issues, subsequently working on short-term and long-term solutions.
Skills: Java, cloud-based technologies, problem-solving, Service-Oriented Architecture (SOA), MongoDB, algorithms, J2EE, Spring, Kafka, Elasticsearch, data structures, Python, AWS, Spring Boot, Test-Driven Development (TDD)
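The "concurrent and/or parallelized, highly performant" expertise the listing asks for can be illustrated with a minimal sketch; it is shown in Python for brevity (the role's primary language is Java), fanning a function out over a thread pool. The function and inputs are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def word_count(doc: str) -> int:
    """Per-document work; a real task would be I/O- or CPU-heavy."""
    return len(doc.split())

docs = ["a b c", "d e", "f"]
# map() preserves input order even though workers run concurrently
with ThreadPoolExecutor(max_workers=3) as pool:
    counts = list(pool.map(word_count, docs))
print(counts)  # -> [3, 2, 1]
```

The Java analogue would use an `ExecutorService` with `invokeAll`; the design point is the same, keeping per-item work side-effect-free so it parallelizes safely.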

Posted 4 days ago

Apply

5.0 years

0 Lacs

chennai, tamil nadu, india

On-site

Founding Tech Engineer – Full Stack + AI/ML (Revit / AEC Software)
📍 In-office role | Chennai, India | Full-time

About Us
We are a US-based AI startup in the AEC (Architecture, Engineering & Construction) industry, building the first-ever AI product to revolutionize architectural detailing — one of the biggest time drains in architecture and engineering. Architects spend 60% of their time on detailing (wall sections, foundation drawings, construction documents) instead of design. Our product automates this, saving firms 30% of overall salaries and dramatically improving timelines.
We are working at the intersection of:
AI/ML, Generative AI, RAG (Retrieval-Augmented Generation), LLM applications
BIM (Building Information Modeling), Revit, AutoCAD, Dynamo, Autodesk Construction Cloud
SaaS, cloud infrastructure, and modern full-stack development
We’re backed by an ex-McKinsey founder with deep AEC expertise and already validated by 50+ architects, with pilot users onboard. Our vision is to build the go-to AI software stack for the $50B+ AEC software market.

Responsibilities
Build and maintain scalable full-stack applications (frontend + backend) to power our AI product.
Design, develop, and deploy AI-powered tools: RAG systems, LLM-based automation, recommender engines.
Work directly with the Revit API, AutoCAD, Dynamo, BIM 360, and Forge/Autodesk Construction Cloud to create plugins and integrations.
Implement and optimize vector search, embeddings, and semantic search for architectural detail retrieval.
Collaborate closely with founders, architects, and engineers to ship MVP features fast and iterate.
Set up scalable infrastructure with AWS / GCP / Azure, Docker, Kubernetes, and CI/CD pipelines.
Ensure robust system design, balancing speed-to-market with future scalability.

Requirements
Must have:
5+ years of startup experience (fast shipping, small teams).
Strong background in AI/ML: RAG, LLMs, NLP, embeddings, recommender systems.
Experience with Revit/AutoCAD APIs and Revit plugins, or a strong interest in BIM/AEC tech.
Proven ability to ship end-to-end solutions independently.
Technical stack (experience in some/all):
Programming languages: Python, JavaScript, TypeScript, C#, .NET
Frameworks: React, Node.js, FastAPI, Flask, Django
Databases: PostgreSQL, MongoDB, Redis, Elasticsearch, Pinecone
AI/ML tools: LangChain, HuggingFace, OpenAI APIs, TensorFlow, PyTorch
Cloud/DevOps: AWS (Lambda, S3, EC2), GCP, Azure, Docker, Kubernetes, CI/CD

Why Join Us
Better-than-industry compensation + ESOP/equity (founding team upside).
Opportunity to work on massive whitespace and technical greenfield problems.
Exposure to the rare intersection of AI + AEC (Architecture, Engineering & Construction).
Fast-track growth: from early engineer to core product leader.
Be part of a founding team, influencing product, culture, and strategy from Day 1.

Work Setup
📍 In-office role in Chennai, India (startup culture, high-collaboration environment).
🌍 Working closely with global architecture and engineering firms in the US.
💡 If you’re a full-stack developer who thrives in startups, loves AI/ML, and is excited by AEC software — this is the role where you’ll build something that redefines an industry.
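The "vector search, embeddings, semantic search for architectural detail retrieval" responsibility above can be illustrated with a toy sketch: cosine similarity over hand-written 3-d vectors standing in for real embeddings. The detail names and vectors are invented; a production system would get embeddings from a model and store them in a vector database such as Pinecone.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical 3-d "embeddings" of detail-library entries.
library = {
    "wall section": [0.9, 0.1, 0.0],
    "foundation drawing": [0.1, 0.9, 0.2],
}
query = [0.85, 0.15, 0.05]  # stand-in embedding of the architect's query
best = max(library, key=lambda name: cosine(query, library[name]))
print(best)  # -> wall section
```

Semantic search is exactly this ranking step at scale: embed the query, score it against stored vectors, and return the nearest details.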

Posted 4 days ago

Apply

4.0 - 9.0 years

8 - 18 Lacs

bengaluru

Work from Office

Job Requirements

Mandatory Skills
Bachelor's degree in computer science, data science, engineering, mathematics, information systems, or a related technical discipline.
7+ years of relevant experience in data engineering roles.
Detailed knowledge of data warehouse technical architectures, data modelling, infrastructure components, ETL/ELT, reporting/analytic tools and environments, and data structures, with hands-on SQL coding.
Proficient in one or more programming languages: Java, Python, Ruby, Scala.
Experienced with AWS services such as Redshift, S3, EC2, Lambda, Athena, EMR, AWS Glue, and Data Pipeline.
Exposure to data visualization and reporting with tools such as Amazon QuickSight, Metabase, Tableau, or similar software.
Experience building metrics decks and dashboards for KPIs, including the underlying data models.
Understanding of how to design, implement, and maintain a platform providing secured access to large datasets.

Primary Roles and Responsibilities
An AWS Data Engineer is responsible for designing, building, and maintaining an organization's data infrastructure using AWS cloud services. This includes creating data pipelines, integrating data from various sources, and implementing data security and privacy measures. The AWS Data Engineer will also be responsible for monitoring and troubleshooting data flows and optimizing data storage and processing for performance and cost efficiency.

Preferred Skills
Master's degree in computer science, data science, engineering, mathematics, information systems, or a related technical discipline.
7+ years of work experience with ETL, data modelling, and data architecture.
Experience or familiarity with newer analytics tools such as AWS Lake Formation, SageMaker, DynamoDB, Lambda, and Elasticsearch.
Experience with data streaming services, e.g. Kinesis or Kafka.
Ability to develop experimental and analytic plans for data modeling processes, use strong baselines, and accurately determine cause-and-effect relations.
Proven track record of partnering with business owners to understand requirements and developing analyses to solve their business problems.
Proven analytical and quantitative ability and a passion for enabling customers to use data and metrics to back up assumptions, develop business cases, and complete root cause analyses.
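The pipeline work described above (creating data pipelines that integrate data from various sources) follows the classic extract-transform-load shape, which can be sketched end to end with the standard library. The CSV shape and table schema here are hypothetical; a real pipeline would target Redshift or S3 rather than an in-memory SQLite database.

```python
import csv
import io
import sqlite3

# Hypothetical source data: a CSV export of unit sales.
raw = "sku,units\nA1,5\nB2,0\nC3,12\n"

def extract(text: str) -> list[dict]:
    """Parse the raw CSV into row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows: list[dict]) -> list[tuple]:
    """Cast types and drop zero-unit rows."""
    return [(r["sku"], int(r["units"])) for r in rows if int(r["units"]) > 0]

def load(rows: list[tuple]) -> sqlite3.Connection:
    """Load into a warehouse table (SQLite stands in for the real target)."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE sales (sku TEXT, units INTEGER)")
    con.executemany("INSERT INTO sales VALUES (?, ?)", rows)
    return con

con = load(transform(extract(raw)))
print(con.execute("SELECT COUNT(*), SUM(units) FROM sales").fetchone())  # -> (2, 17)
```

Keeping extract, transform, and load as separate functions is what lets each stage be monitored, retried, and optimized independently, which is the core of the role.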

Posted 4 days ago

Apply

3.0 years

0 Lacs

hyderābād

On-site

Ninja Van is a late-stage logtech startup that is disrupting a massive industry with innovation and cutting edge technology. Launched 2014 in Singapore, we have grown rapidly to become one of Southeast Asia's largest and fastest-growing express logistics companies. Since our inception, we’ve delivered to 100 million different customers across the region with added predictability, flexibility and convenience. Join us in our mission to connect shippers and shoppers across Southeast Asia to a world of new possibilities. More about us: We process 250 million API requests and 3TB of data every day. We deliver more than 2 million parcels every day. 100% network coverage with 2600+ hubs and stations in 6 SEA markets (Singapore, Malaysia, Indonesia, Thailand, Vietnam and Philippines), reaching 500 million consumers. 2 Million active shippers in all e-commerce segments, from the largest marketplaces to the individual social commerce sellers. Raised more than US$500 million over five rounds. We are looking for world-class talent to join our crack team of engineers, product managers and designers. We want people who are passionate about creating software that makes a difference to the world. We like people who are brimming with ideas and who take initiative rather than wait to be told what to do. We prize team-first mentality, personal responsibility and tenacity to solve hard problems and meet deadlines. As part of a small and lean team, you will have a very direct impact on the success of the company. Roles and Responsibilities Work with Ninja Van clients and business stakeholders to understand their business processes and requirements, translating them into technical specifications for our engineering team. Grow and develop strong relationships with key stakeholders including big key partners as well as smaller shippers. Must be able to be credible with external technical/business stakeholders. 
Guide our Developer Support L1 & L2 agents to improve the quality of support provided to our shippers for API integration support matters. Define scope and influence business solution decisions based on technical feedback and business strategy. Craft and manage roadmaps for products that improve the API integration experience for our shippers. Foresee and smooth out dependencies and potential blockers, keeping stakeholders engaged. Create ways of inspecting, organising and visualising project requests and pipeline.

Requirements
Bachelor's Degree in Computer Science, or equivalent.
3 to 5 years of experience in project management, with a minimum of 1 year delivering technical builds, software and applications.
Excellent interpersonal skills: comfortable working with people of diverse backgrounds, good at managing both up and down, adept at resolving conflicts.
Driven and resilient; leads by example with positive energy to get through tough times.
Familiarity with API services or Kafka messaging is a bonus.

Tech Stack
Backend: Play (Java 8+), Golang, Node.js, Python, FastAPI
Frontend: AngularJS, ReactJS
Mobile: Android, Flutter, React Native
Cache: Hazelcast, Redis
Data storage: MySQL, TiDB, Elasticsearch, Delta Lake
Infrastructure monitoring: Prometheus, Grafana
Orchestrator: Kubernetes
Containerization: Docker, Containerd
Cloud Provider: GCP, AWS
Data pipelines: Apache Kafka, Spark Streaming, Maxwell/Debezium, PySpark, TiCDC
Workflow manager: Apache Airflow
Query engines: Apache Spark, Trino

Submit a job application
By applying to the job, you acknowledge that you have read, understood and agreed to our Privacy Policy Notice (the “Notice”) and consent to the collection, use and/or disclosure of your personal data by Ninja Logistics Pte Ltd (the “Company”) for the purposes set out in the Notice.
In the event that your job application or personal data was received from any third party pursuant to the purposes set out in the Notice, you warrant that such third party has been duly authorised by you to disclose your personal data to us for the purposes set out in the Notice.

Posted 4 days ago

Apply

8.0 years

0 Lacs

hyderābād

On-site

Ninja Van is a late-stage logtech startup that is disrupting a massive industry with innovation and cutting edge technology. Launched 2014 in Singapore, we have grown rapidly to become one of Southeast Asia's largest and fastest-growing express logistics companies. Since our inception, we’ve delivered to 100 million different customers across the region with added predictability, flexibility and convenience. Join us in our mission to connect shippers and shoppers across Southeast Asia to a world of new possibilities. More about us: We process 250 million API requests and 3TB of data every day. We deliver more than 2 million parcels every day. 100% network coverage with 2600+ hubs and stations in 6 SEA markets (Singapore, Malaysia, Indonesia, Thailand, Vietnam and Philippines), reaching 500 million consumers. 2 Million active shippers in all e-commerce segments, from the largest marketplaces to the individual social commerce sellers. Raised more than US$500 million over five rounds. We are looking for world-class talent to join our crack team of engineers, product managers and designers. We want people who are passionate about creating software that makes a difference to the world. We like people who are brimming with ideas and who take initiative rather than wait to be told what to do. We prize team-first mentality, personal responsibility and tenacity to solve hard problems and meet deadlines. As part of a small and lean team, you will have a very direct impact on the success of the company. Roles & Responsibilities Design, develop and maintain Ninja Van’s infrastructure for streaming, processing and storage of data. Build tools for effective maintenance and monitoring of the data infrastructure. Contribute to key data pipeline architecture decisions and lead the implementation of major initiatives. 
Work closely with stakeholders to develop scalable and performant solutions for their data requirements, including extraction, transformation and loading of data from a range of data sources. Develop the team’s data capabilities: share knowledge, enforce best practices and encourage data-driven decisions. Develop Ninja Van’s data retention policies and backup strategies, and ensure that the firm’s data is stored redundantly and securely.

Requirements
Bachelor’s or Master’s degree in Computer Science or a related field from a top university.
Solid Computer Science fundamentals, excellent problem-solving skills and a strong understanding of distributed computing principles.
At least 8 years of experience in a similar role, with a proven track record of building scalable and performant data infrastructure.
Expert SQL knowledge and deep experience working with relational and NoSQL databases.
Advanced knowledge of Apache Kafka and demonstrated proficiency in Hadoop v2, HDFS, and MapReduce.
Experience with stream-processing systems (e.g. Storm, Spark Streaming), big data querying tools (e.g. Pig, Hive, Spark) and data serialization frameworks (e.g. Protobuf, Thrift, Avro).
[Good to have] Experience with infrastructure-as-code technologies (Terraform, Terragrunt, Ansible, Helm). Don’t sweat it if you don’t have it, as long as it interests you!
[Good to have] CDC technologies like Maxwell or Debezium.
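The stream-processing experience the role asks for (e.g. Storm, Spark Streaming) can be illustrated with a toy tumbling-window aggregation over an in-memory event list that stands in for a Kafka topic. The event fields and hub names are invented for illustration.

```python
from collections import defaultdict

# Stand-in for a consumed stream: each event carries a timestamp (seconds),
# a hub identifier, and a parcel count.
events = [
    {"ts": 0, "hub": "SIN-1", "parcels": 3},
    {"ts": 45, "hub": "SIN-1", "parcels": 2},
    {"ts": 70, "hub": "KUL-2", "parcels": 4},
]

def tumbling_counts(events, window_s: int = 60) -> dict:
    """Sum parcels per (window index, hub) over non-overlapping windows."""
    out = defaultdict(int)
    for e in events:
        window = e["ts"] // window_s  # integer division assigns the window
        out[(window, e["hub"])] += e["parcels"]
    return dict(out)

print(tumbling_counts(events))  # -> {(0, 'SIN-1'): 5, (1, 'KUL-2'): 4}
```

A real Spark Streaming job expresses the same grouping declaratively over micro-batches; the windowing arithmetic is identical.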
Tech Stack
Backend: Play (Java 8+), Golang, Node.js, Python, FastAPI
Frontend: AngularJS, ReactJS
Mobile: Android, Flutter, React Native
Cache: Hazelcast, Redis
Data storage: MySQL, TiDB, Elasticsearch, Delta Lake
Infrastructure monitoring: Prometheus, Grafana
Orchestrator: Kubernetes
Containerization: Docker, Containerd
Cloud Provider: GCP, AWS
Data pipelines: Apache Kafka, Spark Streaming, Maxwell/Debezium, PySpark, TiCDC
Workflow manager: Apache Airflow
Query engines: Apache Spark, Trino

Submit a job application
By applying to the job, you acknowledge that you have read, understood and agreed to our Privacy Policy Notice (the “Notice”) and consent to the collection, use and/or disclosure of your personal data by Ninja Logistics Pte Ltd (the “Company”) for the purposes set out in the Notice. In the event that your job application or personal data was received from any third party pursuant to the purposes set out in the Notice, you warrant that such third party has been duly authorised by you to disclose your personal data to us for the purposes set out in the Notice.

Posted 4 days ago

Apply