
3148 Elasticsearch Jobs - Page 22

Set up a Job Alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Data Engineer
Experience: 2-4 years | Salary: Competitive | Preferred notice period: 30 days | Shift: 9:00 AM to 6:00 PM IST | Opportunity Type: Office (Gurugram) | Placement Type: Permanent
(Note: This is a requirement for one of Uplers' clients.)
Must-have skills: Python, Airflow, Elasticsearch

Trademo (one of Uplers' clients) is looking for a Data Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

What will you be doing here?
- Maintain and grow a 50TB+ data pipeline serving global SaaS products for businesses, including onboarding new data and collaborating with pre-sales to articulate technical solutions.
- Solve complex problems across large datasets by applying algorithms, particularly within the domains of Natural Language Processing (NLP) and Large Language Models (LLMs).
- Leverage bleeding-edge technology to work with large volumes of complex data.
- Be hands-on in development: Python, Pandas, NumPy, ETL frameworks. Exposure to distributed computing frameworks such as Apache Spark, Apache Kafka, Apache Airflow, and Elasticsearch is preferred (a minimal Airflow sketch follows this listing).
- Alongside individual data engineering contributions, actively help peers and junior team members with architecture and code to ensure scalable, accurate, and highly available solutions.
- Collaborate with teams, share knowledge via tech talks, and promote engineering best practices within the team.

Requirements:
- B.Tech/M.Tech in Computer Science from an IIT or equivalent Tier 1 college.
- 3+ years of relevant work experience in data engineering or related roles.
- Proven ability to work efficiently with a high variety and volume of data (50TB+ pipeline experience is a plus).
- Solid understanding of, and preferably exposure to, NoSQL databases including Elasticsearch, MongoDB, and GraphDB.
- Basic understanding of cloud infrastructure and cloud-native apps (AWS, Azure, IBM, etc.).
- Exposure to core data engineering concepts and tools: data warehousing, ETL processes, SQL, and NoSQL databases.
- Strong problem-solving ability over large datasets and the ability to apply algorithms; experience with NLP and LLMs is a plus.
- Willingness to learn and apply new techniques and technologies to extract intelligence from data; prior exposure to Machine Learning and NLP is a significant advantage.
- Sound understanding of algorithms and data structures.
- Ability to write well-crafted, readable, testable, maintainable, and modular code.

Desired profile: a hard-working, humble disposition; a desire to make a strong impact on the lives of millions through your work; the capacity to communicate well with stakeholders and team members and act as an effective interface between Engineering and Product/Business; a quick thinker who can adapt to a fast-paced startup environment and work with minimal supervision.

What we offer: At Trademo, we want our employees to be comfortable with their benefits so they can focus on doing the work they love. Parental leave (maternity and paternity), health insurance, flexible time off, stock options.

How to apply - an easy 3-step process:
1. Click on Apply and register or log in on our portal.
2. Upload an updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview!
About Our Client: Trademo is a global supply chain intelligence SaaS company headquartered in Palo Alto, US. Trademo collects public and private data on global trade transactions, sanctioned parties, trade tariffs, ESG, and other events using its proprietary algorithms. About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help all our talent find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities on the portal apart from this one.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
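The Airflow sketch referenced above: a loose illustration of the kind of daily ETL orchestration this role describes. This is a sketch only, assuming Airflow 2.x and invented file paths and column names; it is not anything from the posting itself.

```python
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator

RAW_PATH = "/data/raw/shipments.csv"          # hypothetical input feed
CLEAN_PATH = "/data/clean/shipments.parquet"  # hypothetical output location

def extract_and_clean() -> None:
    df = pd.read_csv(RAW_PATH)
    # Drop rows missing the primary key before anything downstream sees them.
    df = df.dropna(subset=["shipment_id"])
    df.to_parquet(CLEAN_PATH, index=False)

def validate() -> None:
    df = pd.read_parquet(CLEAN_PATH)
    if df.empty:
        raise ValueError("cleaned dataset is empty; failing the run")

with DAG(
    dag_id="shipments_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # named `schedule_interval` on Airflow versions before 2.4
    catchup=False,
) as dag:
    clean = PythonOperator(task_id="extract_and_clean", python_callable=extract_and_clean)
    check = PythonOperator(task_id="validate", python_callable=validate)
    clean >> check  # run the validation task only after cleaning succeeds
```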

Posted 3 weeks ago

Apply


5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

As an SDE 3, you will be responsible for solving complex problems, elevating engineering and operational excellence, and leading new tech discovery and adoption. You will ensure high standards of code and design quality, mentor junior developers, and proactively manage technical risks to ensure successful project delivery.

Responsibilities:
- Solve complex, ambiguous problems at a team level.
- Raise the bar for engineering excellence and for operational excellence at the team level.
- Drive new tech discovery and adoption within the team.
- Act as custodian of the team's code and design quality.
- Coach SDE1s and SDE2s within the team.
- Proactively identify technical risk and de-risk projects in the team.
- Bring a culture of learning and innovation to the team.
- Build platforms that improve MTTD and MTTR.
- Create solutions to a vision statement.
- Analyze and improve system performance.
- Guide the team on coding patterns, languages, and frameworks.

Requirements:
- B.Tech/M.Tech in CSE from a Tier 1 college.
- Computer science fundamentals: object-oriented programming, design patterns, data structures, and algorithm design.
- Proficiency with the Java stack (Java, Java design patterns).
- Experience building scalable microservices and distributed systems.
- 5+ years of experience contributing to architecture and design in a product setup, with total work experience of 7 to 10 years.
- Technology/tools: Spring, Hibernate, RabbitMQ, Kafka, ZooKeeper, Elasticsearch, REST APIs.
- Databases: Cassandra, MongoDB, Redis, MS-SQL, MySQL.
- Hands-on experience working at large scale.
- Hands-on experience in low- and high-level design (LLD + HLD).
- Proficiency in multiple programming languages and technology stacks.
- Expertise in high-level design and in the CI/CD capabilities required to improve efficiency.

Posted 3 weeks ago

Apply

1.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Software Full Stack Developer

As a Fullstack SDE1 at NxtWave, you:
- Get first-hand experience of building applications and seeing them released quickly to NxtWave learners (within weeks)
- Take ownership of the features you build and work closely with the product team
- Work in a great culture that continuously empowers you to grow in your career
- Enjoy the freedom to experiment and learn from mistakes (fail fast, learn faster)
- Get first-hand experience in scaling the features you build as one of the fastest-growing edtech startups grows rapidly
- Build in a world-class developer environment by applying clean coding principles, sound code architecture, etc.

Responsibilities:
- Design, implement, and ship user-centric features spanning frontend, backend, and database systems under guidance.
- Define and implement RESTful/GraphQL APIs and efficient, scalable database schemas.
- Build reusable, maintainable frontend components using modern state management practices.
- Develop backend services in Node.js or Python, adhering to clean-architecture principles (a minimal FastAPI sketch follows this listing).
- Write and maintain unit, integration, and end-to-end tests to ensure code quality and reliability.
- Containerize applications and configure CI/CD pipelines for automated builds and deployments.
- Enforce secure coding practices, accessibility standards (WCAG), and SEO fundamentals.
- Collaborate effectively with Product, Design, and engineering teams to understand and implement feature requirements.
- Own feature delivery from planning through production, and mentor interns or junior developers.

Qualifications & Skills:
- 1+ years of experience building full-stack web applications.
- Proficiency in JavaScript (ES6+), TypeScript, HTML5, and CSS3 (Flexbox/Grid).
- Advanced experience with React (Hooks, Context, Router) or an equivalent modern UI framework.
- Hands-on experience with state management patterns (Redux, MobX, or custom solutions).
- Strong backend skills in Node.js (Express/Fastify) or Python (Django/Flask/FastAPI).
- Expertise in designing REST and/or GraphQL APIs and integrating with backend services.
- Solid knowledge of MySQL/PostgreSQL and familiarity with NoSQL stores (Elasticsearch, Redis).
- Experience using build tools (Webpack, Vite), package managers (npm/Yarn), and Git workflows.
- Skilled in writing and maintaining tests with Jest, React Testing Library, Pytest, and Cypress.
- Familiarity with Docker, CI/CD tools (GitHub Actions, Jenkins), and basic cloud deployments.
- A product-first thinker with strong problem-solving, debugging, and communication skills.

Qualities we'd love to find in you: the attitude to always strive for the best outcomes and an enthusiasm to deliver high-quality software; strong collaboration abilities and a flexible, friendly approach to working with teams; strong determination with a constant eye on solutions; creative ideas with a problem-solving mindset; openness to receiving objective criticism and improving upon it; eagerness to learn and zeal to grow; strong communication skills are a huge plus.

Work Location: Hyderabad

About NxtWave: NxtWave is one of India's fastest-growing ed-tech startups, revolutionizing the 21st-century job market by transforming youth into highly skilled tech professionals through its CCBP 4.0 programs, regardless of their educational background. NxtWave was founded by Rahul Attuluri (ex-Amazon, IIIT Hyderabad), Sashank Reddy (IIT Bombay), and Anupam Pedarla (IIT Kharagpur).
Supported by Orios Ventures, Better Capital, and Marquee Angels, NxtWave raised $33 million in 2023 from Greater Pacific Capital. As an official partner of NSDC (under the Ministry of Skill Development & Entrepreneurship, Govt. of India) and recognized by NASSCOM, NxtWave has earned a reputation for excellence. Some of its prestigious recognitions include:
- Technology Pioneer 2024 by the World Economic Forum, one of only 100 startups chosen globally
- 'Startup Spotlight Award of the Year' by T-Hub in 2023
- 'Best Tech Skilling EdTech Startup of the Year 2022' by Times Business Awards
- 'The Greatest Brand in Education' in a research-based listing by URS Media
- Founders Anupam Pedarla and Sashank Gujjula were honoured in the 2024 Forbes India 30 Under 30 for their contributions to tech education

NxtWave breaks learning barriers by offering vernacular content for better comprehension and retention. NxtWave now has paid subscribers from 650+ districts across India. Its learners are hired by 2,000+ companies including Amazon, Accenture, IBM, Bank of America, TCS, Deloitte, and more. Know more about NxtWave: https://www.ccbp.in. Read more about us in the news: Economic Times | CNBC | YourStory | VCCircle
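The FastAPI sketch referenced in the responsibilities above: a minimal, hypothetical endpoint of the kind a Python backend in this stack involves. The `Course` model and in-memory store are invented for illustration; a real service would back this with MySQL/PostgreSQL.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Course(BaseModel):
    id: int
    title: str

# In-memory store standing in for a real database table.
COURSES = {1: Course(id=1, title="CCBP 4.0 Fundamentals")}

@app.get("/courses/{course_id}")
def get_course(course_id: int) -> Course:
    course = COURSES.get(course_id)
    if course is None:
        # FastAPI converts this into a JSON 404 response.
        raise HTTPException(status_code=404, detail="Course not found")
    return course
```

Run with `uvicorn main:app` and request `GET /courses/1`; FastAPI validates the path parameter and serializes the Pydantic model automatically.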

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job description: Purpose of the Job

The Manager - DevOps / Release Management is responsible for automating the manual tasks involved in developing and deploying code and data, implementing continuous integration and continuous deployment frameworks, and maintaining high availability of production and non-production environments. This is a hands-on leadership role, requiring the ability to directly contribute to the implementation and optimization of CI/CD pipelines, infrastructure automation, and release management practices.

Job Description:
- Design, implement, and manage robust CI/CD pipelines and infrastructure automation frameworks for applications and data services.
- Oversee and execute release management processes across environments, ensuring governance, repeatability, and quality.
- Lead the development and maintenance of infrastructure as code and configuration management tools (e.g., Terraform, Ansible).
- Improve the efficiency of the release management process, with a focus on quality and minimizing post-production issues.
- Proactively monitor production and non-production systems to ensure high availability, scalability, and performance.
- Identify and resolve deployment issues in real time and implement monitoring, alerting, and rollback mechanisms.
- Collaborate with development, QA, and product teams to support seamless integration and deployment of features.
- Guide and mentor junior DevOps engineers, instilling DevOps best practices and reusable frameworks.
- Drive DevSecOps adoption, integrating security checks and compliance into the release lifecycle.
- Stay current on industry trends and continuously assess tools, frameworks, and approaches that improve operational efficiency.
- Manage hybrid and cloud-native deployments, with a strong focus on AWS, while supporting Azure and on-prem transitions.
- Own technical documentation and process design for release pipelines, deployment architecture, and system automation.
- Help transform the Release Management function from the ground up, including strategy, team development, tooling, and governance.

Job Requirements - Experience and Education:
- Bachelor's degree in Computer Science, Engineering, or a related technical field.
- 10+ years of experience in DevOps, Site Reliability Engineering, or Release Management roles.
- Strong hands-on experience with CI/CD tools such as Jenkins, TeamCity, GitHub Actions, Octopus Deploy, or similar.
- Proven experience with cloud platforms: AWS (preferred), Azure, or OCI.
- Skilled in scripting and automation (Python, Shell, Bash) and infrastructure as code (Terraform, CloudFormation).
- Experience managing release workflows, branching strategies, versioning, and rollbacks.
- Working knowledge of containerization and orchestration (e.g., Docker, Kubernetes, ECS).
- Familiarity with monitoring/logging tools (e.g., Datadog, Prometheus, ELK Stack).
- Strong communication and stakeholder management skills, capable of cross-functional collaboration.
- Experience working with distributed systems and data platforms (e.g., Elasticsearch, Cassandra, Hadoop) is a plus.

Leadership behaviors: building outstanding teams; setting a clear direction; simplification; collaborating and breaking silos; execution and accountability; growth mindset; innovation; inclusion; external focus.
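One concrete slice of the "monitoring, alerting, and rollback mechanisms" bullet above, sketched in Python under assumed names: a post-deployment gate that polls a service's health endpoint and signals rollback if it never stabilizes. The URL is hypothetical, and real pipelines would wire the exit code into the CD tool's rollback step.

```python
import sys
import time

import requests

HEALTH_URL = "https://staging.example.com/health"  # hypothetical endpoint

def wait_for_healthy(url: str, attempts: int = 10, delay: float = 5.0) -> bool:
    """Poll a health endpoint until it returns 200 or attempts run out."""
    for _ in range(attempts):
        try:
            if requests.get(url, timeout=3).status_code == 200:
                return True
        except requests.RequestException:
            pass  # transient failure; retry after the delay
        time.sleep(delay)
    return False

if __name__ == "__main__":
    if not wait_for_healthy(HEALTH_URL):
        print("Deployment unhealthy; triggering rollback")
        sys.exit(1)  # non-zero exit tells the pipeline to roll back
    print("Deployment healthy")
```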

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: DevOps Engineer
Location: Gurugram (On-Site)
Employment Type: Full-Time
Experience: 6+ years
Qualification: B.Tech CSE

About the Role
We are seeking a highly skilled DevOps/MLOps expert to join our rapidly growing AI-based startup building and deploying cutting-edge enterprise AI/ML solutions. This is a critical role that will shape our infrastructure and deployment pipelines and scale our ML operations to serve large-scale enterprise clients. As our DevOps/MLOps expert, you will be responsible for bridging the gap between our AI/ML development teams and production systems, ensuring seamless deployment, monitoring, and scaling of our ML-powered enterprise applications. You'll work at the intersection of DevOps, Machine Learning, and Data Engineering in a fast-paced startup environment with enterprise-grade requirements.

Key Responsibilities

MLOps & Model Deployment
• Design, implement, and maintain end-to-end ML pipelines from model development to production deployment
• Build automated CI/CD pipelines specifically for ML models using tools like MLflow, Kubeflow, and custom solutions
• Implement model versioning, experiment tracking, and model registry systems (a minimal MLflow sketch follows this listing)
• Monitor model performance, detect drift, and implement automated retraining pipelines
• Manage feature stores and data pipelines for real-time and batch inference
• Build scalable ML infrastructure for high-volume data processing and analytics

Enterprise Cloud Infrastructure & DevOps
• Architect and manage cloud-native infrastructure with a focus on scalability, security, and compliance
• Implement Infrastructure as Code (IaC) using Terraform, CloudFormation, or Pulumi
• Design and maintain Kubernetes clusters for containerized ML workloads
• Build and optimize Docker containers for ML applications and microservices
• Implement comprehensive monitoring, logging, and alerting systems
• Manage secrets, security, and enterprise compliance requirements

Data Engineering & Real-time Processing
• Build and maintain large-scale data pipelines using Apache Airflow, Prefect, or similar tools
• Implement real-time data processing and streaming architectures
• Design data storage solutions for structured and unstructured data at scale
• Implement data validation, quality checks, and lineage tracking
• Manage data security, privacy, and enterprise compliance requirements
• Optimize data processing for performance and cost efficiency

Enterprise Platform Operations
• Ensure high availability (99.9%+) and performance of enterprise-grade platforms
• Implement auto-scaling solutions for variable ML workloads
• Manage multi-tenant architecture and data isolation
• Optimize resource utilization and cost management across environments
• Implement disaster recovery and backup strategies
• Build 24x7 monitoring and alerting systems for mission-critical applications

Required Qualifications

Experience & Education
• 4-8 years of experience in DevOps/MLOps with at least 2+ years focused on enterprise ML systems
• Bachelor's/Master's degree in Computer Science, Engineering, or a related technical field
• Proven experience with enterprise-grade platforms or large-scale SaaS applications
• Experience with high-compliance environments and enterprise security requirements
• Strong background in data-intensive applications and real-time processing systems

Technical Skills

Core MLOps Technologies
• ML Frameworks: TensorFlow, PyTorch, Scikit-learn, Keras, XGBoost
• MLOps Tools: MLflow, Kubeflow, Metaflow, DVC, Weights & Biases
• Model Serving: TensorFlow Serving, PyTorch TorchServe, Seldon Core, KFServing
• Experiment Tracking: MLflow, Neptune.ai, Weights & Biases, Comet

DevOps & Cloud Technologies
• Cloud Platforms: AWS, Azure, or GCP with relevant certifications
• Containerization: Docker, Kubernetes (CKA/CKAD preferred)
• CI/CD: Jenkins, GitLab CI, GitHub Actions, CircleCI
• IaC: Terraform, CloudFormation, Pulumi, Ansible
• Monitoring: Prometheus, Grafana, ELK Stack, Datadog, New Relic

Programming & Scripting
• Python (advanced) - primary language for ML operations and automation
• Bash/shell scripting for automation and system administration
• YAML/JSON for configuration management and APIs
• SQL for data operations and analytics
• Basic understanding of Go or Java (an advantage)

Data Technologies
• Data Pipeline Tools: Apache Airflow, Prefect, Dagster, Apache NiFi
• Streaming & Real-time: Apache Kafka, Apache Spark, Apache Flink, Redis
• Databases: PostgreSQL, MongoDB, Elasticsearch, ClickHouse
• Data Warehousing: Snowflake, BigQuery, Redshift, Databricks
• Data Versioning: DVC, LakeFS, Pachyderm

Preferred Qualifications

Advanced Technical Skills
• Enterprise Security: experience with enterprise security frameworks and compliance (SOC 2, ISO 27001)
• High-scale Processing: experience with petabyte-scale data processing and real-time analytics
• Performance Optimization: advanced system optimization, distributed computing, caching strategies
• API Development: REST/GraphQL APIs, microservices architecture, API gateways

Enterprise & Domain Experience
• Previous experience with enterprise clients or B2B SaaS platforms
• Experience with compliance-heavy industries (finance, healthcare, government)
• Understanding of data privacy regulations (GDPR, SOX, HIPAA)
• Experience with multi-tenant enterprise architectures

Leadership & Collaboration
• Experience mentoring junior engineers and technical team leadership
• Strong collaboration with data science teams, product managers, and enterprise clients
• Experience with agile methodologies and enterprise project management
• Understanding of business metrics, SLAs, and enterprise ROI

Growth Opportunities
• Career Path: clear progression to Lead DevOps Engineer or Head of Infrastructure
• Technical Growth: work with cutting-edge enterprise AI/ML technologies
• Leadership: opportunity to build and lead the DevOps/Infrastructure team
• Industry Exposure: work with government and MNC enterprise clients and cutting-edge technology stacks

Success Metrics & KPIs

Technical KPIs
• System Uptime: maintain 99.9%+ availability for enterprise clients
• Deployment Frequency: enable daily deployments with zero downtime
• Performance: ensure optimal response times and system performance
• Cost Optimization: achieve 20-30% annual infrastructure cost reduction
• Security: zero security incidents and full compliance adherence

Business Impact
• Time to Market: reduce deployment cycles and improve development velocity
• Client Satisfaction: maintain 95%+ enterprise client satisfaction scores
• Team Productivity: improve engineering team efficiency by 40%+
• Scalability: support rapid client-base growth without infrastructure constraints

Why Join Us
Be part of a forward-thinking, innovation-driven company with a strong engineering culture. Influence high-impact architectural decisions that shape mission-critical systems. Work with cutting-edge technologies and a passionate team of professionals. Competitive compensation, flexible working environment, and continuous learning opportunities.
How to Apply: Please submit your resume and a cover letter outlining your relevant experience and how you can contribute to Aaizel Tech Labs' success. Send your application to hr@aaizeltech.com, bhavik@aaizeltech.com, or anju@aaizeltech.com.
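The MLflow sketch referenced in the MLOps responsibilities above: a hedged illustration of experiment tracking and model logging. The experiment name and model are placeholders; the calls themselves match MLflow's standard tracking API.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real enterprise dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-enterprise-model")  # hypothetical experiment name

with mlflow.start_run():
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_params(params)           # record hyperparameters for this run
    mlflow.log_metric("accuracy", acc)  # record the evaluation metric
    # Persist the model artifact so it can be versioned and promoted in a registry.
    mlflow.sklearn.log_model(model, "model")
```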

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Company: Highspot
Business Type: Small/Medium Business
Company Type: Product
Business Model: B2B
Funding Stage: Series D+
Industry: Business/Productivity Software

Job Description

About Highspot: Highspot is a leading AI-powered sales enablement platform trusted by global enterprises like DocuSign, Siemens, and FedEx. We unify content management, sales playbooks, training, coaching, and buyer engagement into a single intelligent system, helping go-to-market teams drive consistent execution and measurable revenue growth. With over $645M in funding from top investors including Tiger Global, ICONIQ, B Capital, and Salesforce Ventures, Highspot is redefining how sales teams perform at scale.

About the Role: As a Sr. Software Engineer in Search, you will leverage the latest innovations in search and AI technology to improve search relevancy, ranking, and recommendations. You will collaborate with key stakeholders (including other product teams and customer-facing teams) to analyze search-related issues and requirements, and to architect and develop efficient and scalable solutions. You will review, analyze, maintain, and optimize existing search implementations, working with a mix of traditional keyword search techniques, hybrid techniques leveraging embeddings, and other uses of AI technology to optimize results for users. You will rigorously measure and drive the quality of the results. We're looking for highly motivated individuals who work well in a team-oriented, iterative, and fast-paced environment.

Responsibilities:
- Develop highly available distributed services, including improving existing systems.
- Experiment with different techniques to improve search quality, relevance, and experiences.
- Help design and develop new search features and functionality.
- Partner cross-functionally with UX and Product Management to create end-to-end experiences that customers love.
- Write maintainable and testable code.
- Contribute to internal and external technical documentation.
- Solve problems relating to mission-critical services and build automation to prevent problem recurrence.

Required Qualifications:
- 6+ years of relevant professional experience, not including internships/co-ops.
- Strong understanding of enterprise search and search architecture.
- Solid experience with search and related technologies such as Solr, Elasticsearch, and Lucene.
- Demonstrable experience with schema design, relevancy tuning, boosting, and optimization.
- Experience working with cloud services, preferably AWS or Azure.
- Experience with search ranking and machine learning is a big plus.

Why Join Us? At Highspot, you'll work on cutting-edge technology that transforms the way sales and marketing teams operate globally. You'll be part of a high-growth, high-impact culture that values innovation, autonomy, and collaboration. With our strong product-market fit, industry-leading funding, and a passionate team, this is your chance to be part of something big - and grow with it.
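For flavor, here is the kind of hybrid lexical-plus-vector query this role works on, sketched with the Elasticsearch Python client. The index name, field names, and placeholder embedding are assumptions, and the top-level `knn` search option requires Elasticsearch 8.4 or later; scores from the lexical and vector legs are combined by the engine.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # hypothetical cluster address

# Hybrid retrieval: BM25 keyword match plus approximate kNN over an
# embedding field. Assumes documents were indexed with a dense_vector
# field "title_vector" and that query_vector comes from the same model.
resp = es.search(
    index="documents",
    query={"match": {"title": {"query": "sales playbook", "boost": 0.5}}},
    knn={
        "field": "title_vector",
        "query_vector": [0.1] * 384,  # placeholder embedding
        "k": 10,
        "num_candidates": 100,
    },
    size=10,
)
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("title"))
```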

Posted 3 weeks ago

Apply

0 years

6 - 18 Lacs

India

On-site

We are hiring talented and motivated engineers to join our LLM and Generative AI team. You will contribute across the lifecycle of AI agent development: data engineering, LLM fine-tuning, RAG-based retrieval, evaluation frameworks, and deployment integration. This is an opportunity to work hands-on with open-source models like Llama, integrate them with real-world enterprise systems, and build intelligent, modular agentic AI applications.

Key Responsibilities:
- LLM Model Engineering: Fine-tune and evaluate large language models (e.g., Llama 2/3, Mistral) using curated datasets for specific enterprise tasks.
- RAG & Contextual Memory: Build Retrieval-Augmented Generation (RAG) pipelines using vector databases (e.g., Elasticsearch, FAISS) to enhance factual grounding; a minimal retrieval sketch follows this listing.
- Data Pipeline Development: Collect, clean, and structure domain-specific datasets (structured and unstructured) for training and evaluation.
- Agentic Architecture Integration: Contribute to the design and orchestration of AI agents with modular skills for planning, dialogue, and retrieval.
- Voice & Multimodal Interfaces (optional): Work with TTS/ASR stacks (e.g., Whisper, NeMo) to integrate GenAI into multilingual voice agent pipelines.
- Evaluation & Risk Tracking: Implement evaluation metrics for task performance, drift detection, and hallucination control.
- Collaboration & Review: Work with senior AI engineers, product managers, and domain experts to translate requirements into deployed systems.
- Continuous Learning: Stay current with advancements in GenAI, open-source ecosystems, and governance practices (e.g., NIST AI RMF, EU AI Act).

Qualifications:
- B.Tech/M.Tech in Computer Science, AI/ML, Data Science, or a related field from a reputed institute.
- Strong fundamentals in machine learning, NLP, and Python-based development.
- Familiarity with open-source LLM frameworks (e.g., Hugging Face Transformers, LangChain, LlamaIndex).
- Exposure to vector search and embeddings (e.g., OpenAI, SBERT, or in-house models).
- Bonus: knowledge of Docker, Streamlit, REST APIs, or voice stacks (Whisper, NeMo).

Why Join Us?
- Work on real-world GenAI applications deployed across enterprise systems.
- Contribute to building India's first integrated Agentic AI platform.
- Be part of a fast-growing team with deep expertise in AI engineering, fine-tuning, and voice/document intelligence.
- Opportunity to work on multi-modal AI and governance-aligned AI systems from day one.

Job Type: Full-time
Pay: ₹50,000.00 - ₹150,000.00 per month
Benefits: Health insurance
Work Location: In person
Expected Start Date: 19/08/2025
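The retrieval sketch referenced above: a minimal FAISS-based semantic search, which is the grounding step at the core of a RAG pipeline. The toy corpus and the `all-MiniLM-L6-v2` model choice are illustrative assumptions, not this team's actual stack.

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

# Toy corpus standing in for a curated enterprise document store.
docs = [
    "Invoices must be approved within 30 days.",
    "Llama models can be fine-tuned with LoRA adapters.",
    "The escalation policy requires a manager sign-off.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode(docs, normalize_embeddings=True)

# Inner product on unit vectors is equivalent to cosine similarity.
index = faiss.IndexFlatIP(emb.shape[1])
index.add(np.asarray(emb, dtype="float32"))

query = model.encode(
    ["How long do invoice approvals take?"], normalize_embeddings=True
)
scores, ids = index.search(np.asarray(query, dtype="float32"), 2)

# The retrieved passages would be placed into the LLM prompt as grounding context.
for score, doc_id in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {docs[doc_id]}")
```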

Posted 3 weeks ago

Apply

5.0 years

5 - 10 Lacs

Bengaluru

On-site

DESCRIPTION

The AOP (Analytics Operations and Programs) team is responsible for creating core analytics, insight generation, and science capabilities for ROW Ops. We develop scalable analytics applications, AI/ML products, and research models to optimize operational processes. You will work with Product Managers, Data Engineers, Data Scientists, Research Scientists, Applied Scientists, and Business Intelligence Engineers, using rigorous quantitative approaches to ensure high-quality data/science products for our customers around the world.

We are looking for a Sr. Data Scientist to join our growing Science team. As a Data Scientist, you are able to use a range of science methodologies to solve challenging business problems when the solution is unclear. You will be responsible for building ML models to solve complex business problems and testing them in a production environment. The scope of the role includes defining the charter for the project and proposing solutions that align with the org's priorities and production constraints while still creating impact. You will achieve this by leveraging strong leadership and communication skills, data science skills, and domain knowledge of the delivery operations systems. You will provide ML thought leadership to technical and business leaders, and you possess the ability to think strategically about business, product, and technical challenges. You will also be expected to contribute to the science community by participating in science reviews and publishing at internal or external ML conferences.

Our team solves a broad range of problems that can be scaled across ROW (Rest of the World, including countries like India, Australia, Singapore, MENA, and LATAM). Here is a glimpse of the problems this team deals with on a regular basis:
- Using live package and truck signals to adjust truck capacities in real time
- HOTW models for Last Mile channel allocation
- Using LLMs to automate analytical processes and insight generation
- Operations research to optimize middle-mile truck routes
- Working with global partner science teams on Reinforcement Learning-based pricing models and estimating Shipments Per Route for $MM savings
- Deep learning models to synthesize address attributes
- Abuse detection models to reduce network losses

Key job responsibilities:
1. Use machine learning and analytical techniques to create scalable solutions for business problems; analyze and extract relevant information from large amounts of Amazon's historical business data to help automate and optimize key processes
2. Design, develop, evaluate, and deploy innovative and highly scalable ML/OR models
3. Work closely with other science and engineering teams to drive real-time model implementations
4. Work closely with Ops/Product partners to identify problems and propose machine learning solutions
5. Establish scalable, efficient, automated processes for large-scale data analyses, model development, model validation, and model maintenance
6. Work proactively with engineering teams and product managers to evangelize new algorithms and drive the implementation of large-scale, complex ML models in production
7. Lead projects and mentor other scientists and engineers in the use of ML techniques

BASIC QUALIFICATIONS
- 5+ years of data scientist experience
- Experience with data scripting languages (e.g., SQL, Python, R) or statistical/mathematical software (e.g., R, SAS, or MATLAB)
- Experience with statistical models, e.g., multinomial logistic regression (see the sketch after this listing)
- Experience in data applications using large-scale distributed systems (e.g., EMR, Spark, Elasticsearch, Hadoop, Pig, and Hive)
- Experience working collaboratively with data engineers and business intelligence engineers
- Demonstrated expertise in a wide range of ML techniques

PREFERRED QUALIFICATIONS
- Experience as a leader and mentor on a data science team
- Master's degree in a quantitative field such as statistics, mathematics, data science, business analytics, economics, finance, engineering, or computer science
- Expertise in Reinforcement Learning and Gen AI is preferred

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
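The sketch referenced in the basic qualifications: a standard multinomial logistic regression in scikit-learn. The Iris dataset stands in for real operational data; with the lbfgs solver, multiclass targets are fit with the softmax (multinomial) objective.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

# Scaling helps lbfgs converge; the classifier outputs per-class probabilities.
clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(solver="lbfgs", max_iter=1000),
)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```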

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Company Description

About CyberArk: CyberArk (NASDAQ: CYBR) is the global leader in Identity Security. Centered on privileged access management, CyberArk provides the most comprehensive security offering for any identity, human or machine, across business applications, distributed workforces, hybrid cloud workloads, and throughout the DevOps lifecycle. The world's leading organizations trust CyberArk to help secure their most critical assets. To learn more about CyberArk, visit our CyberArk blogs or follow us on X, LinkedIn, or Facebook.

Job Description

CyberArk is seeking an SRE Cloud Engineering Architect looking to bring their knowledge, excitement, and energy to the team. If you have worked in the cloud solving scale problems, bringing visibility into your platform, and accomplishing true CI/CD pipelines, we want you on the team! We need someone driven and excited to innovate, all while growing professionally and creating strong relationships that will last a lifetime.

Responsibilities:
- Design and implementation of AWS infrastructure components such as VPCs, EC2, EKS, S3, tagging schemes, CloudFormation, etc.
- Architecture of deployment and management automation for cloud-based infrastructure and software
- Architecting the use of configuration management tools in both Windows and Linux: CloudFormation, Helm, Terraform, Salt, Ansible
- Ensuring cloud-based architectures meet availability and recoverability requirements
- Architecture and implementation of cloud-based monitoring, alerting, and reporting: Datadog, Logz.io, InfluxDB, CloudWatch, Catchpoint, ELK, Grafana
- Support and guidance on tooling that enables teams to achieve greater output and reliability
- Maintaining a deep understanding of the latest tech solutions and trends, with the ability to dive into the details of the architecture as needed
- Working with the team leads within the group to identify areas of improvement, prepare architecture roadmaps, and advocate to the Product Management group

Qualifications:
- B.S./B.E. in Computer Science or equivalent experience
- Minimum 4 years of experience managing AWS infrastructure
- Minimum 10 years of experience with systems engineering and software development
- Expert understanding of and experience with containerization services such as Docker/Kubernetes
- Expert in tools such as Datadog, InfluxDB, Grafana, Logstash, Elasticsearch
- Solid understanding of and experience with web services, databases, and related infrastructure/architectures
- Solid understanding of backup/restore best practices
- Strong expertise programming in C#, C++, Java, Python, or an equivalent language
- Excellent troubleshooting skills
- Experience supporting an enterprise-level SaaS environment
- Security experience a plus
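A taste of the monitoring and alerting work the responsibilities above describe, using the official Prometheus Python client. The metric names and simulated workload are invented for illustration; a scraper like Prometheus would pull the exposed metrics and feed dashboards (e.g., Grafana) and alert rules.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["status"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

def handle_request() -> None:
    # Histogram.time() records how long the block takes.
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.2))  # simulated work
    REQUESTS.labels(status="200").inc()

if __name__ == "__main__":
    start_http_server(8000)  # metrics exposed at http://localhost:8000/metrics
    while True:
        handle_request()
```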

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About jhana: jhana is an early-stage, seed-funded homegrown artificial legal intelligence lab, established at Harvard in February 2022 and made in India. jhana builds datasets, agents, and interfaces that help produce legal work faster and with higher fidelity. jhana's suite of product offerings uses AI to streamline workflows across the legal field, including litigation, corporate law, in-house counsel, and the judiciary. jhana was recently declared the most outstanding new legal tech in Asia and Oceania by ALITA, the Asia-Pacific Legal Innovation & Technology Association.

About the role: This role will be part of an agile full-stack engineering team responsible for the maintenance, stability, and improvement of the entire jhana.ai service. The team's primary tasks include integrating new product offerings; rehauling the UI to optimize user experience; expanding the offered databases; closing all bug reports in a timely manner; maintaining the server and the microservices that support it; and more. All members of this team work across diverse repositories and technologies in multiple languages as necessary.

The day-to-day: This is an intermediate role distributed across research, engineering, and product; relatedly, it is a combination of invention and optimization. These are the problem statements this role will likely continue or begin work on:
- Develop, test, and maintain full-stack applications; knowledge of Python and JS/TS-based frameworks is appreciated.
- Apply a low-level understanding of relational databases and SQL or similar technologies.
- Implement scale and optimization for billion-scale Elasticsearch, drawing on experience working with large Elasticsearch systems (a minimal bulk-indexing sketch follows this listing).
- Deploy and maintain containerized and concurrent pipelines on AWS, with familiarity and experience using tools like Docker and Kubernetes.
- Collaborate closely with UX/UI designers to implement MaterialUI-based designs and ensure a seamless user experience.
- Contribute to building and enhancing RAG (Retrieval-Augmented Generation) pipelines and search stacks.
- Continuously optimize applications for performance, scalability, and security.
- Participate in code reviews, troubleshooting, and debugging to ensure high-quality software delivery.

Skills & Qualifications

Required skills:
- Experience: 2+ years
- Frontend: proficiency in JavaScript or TypeScript
- Backend: strong experience with API development and RESTful services, ideally with substantial experience in Django/Python
- Databases: solid knowledge of SQL and database design
- Cloud: hands-on experience with AWS for deploying and managing cloud-based applications
- UI/UX: experience with MaterialUI and collaborating on responsive, user-centric designs

Added bonuses:
- Containerization: proficiency with Docker for containerized development and deployment
- Search technologies: familiarity with Elasticsearch and implementing search-based features
- LLM knowledge: understanding of and hands-on experience working with Large Language Models (LLMs) in applications
- Vector databases: experience with vector databases, especially FAISS or Milvus, and how they integrate with machine learning systems
- Search stacks & agents: previous experience building search stacks, agents, or intelligent information-retrieval systems
- RAG pipelines: knowledge of building and enhancing Retrieval-Augmented Generation (RAG) pipelines
- Design tools: experience creating designs or wireframes using Figma or other design tools
About the Team: We are a public benefit corporation headquartered in Bangalore. We operate in rapidly changing legal systems with awareness of the stakes at hand. Our intention is to influence beneficence and alignment in the technological systems that are augmenting and replacing human institutions. Our team spans diverse identities and training, from physics and mathematics to law and public policy. We are small, fast-moving, horizontally flat, and built on collaboration between lawyers, academics, and engineers. We ship fast, and every line of code our team writes has a >0.9 expectation of making it to production.

What we offer: competitive salary and benefits package; opportunities for growth and professional development; ESOPs and ownership; high potential for vertical and horizontal growth; a collaborative and dynamic work environment; and the chance to work with cutting-edge technologies and make a real impact on a high-stakes industry with a transformative effect on human lives and commercial economies.
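The bulk-indexing sketch referenced in the day-to-day section above, using the Elasticsearch Python client's streaming helper; batching writes this way is the usual first step toward the billion-scale ingest the role describes. The index name and document shape are hypothetical.

```python
from elasticsearch import Elasticsearch
from elasticsearch.helpers import streaming_bulk

es = Elasticsearch("http://localhost:9200")  # hypothetical cluster address

def generate_actions(judgments):
    """Yield bulk actions instead of indexing documents one HTTP call at a time."""
    for i, judgment in enumerate(judgments):
        yield {"_index": "judgments", "_id": i, "_source": judgment}

# Placeholder documents standing in for real legal records.
judgments = [{"title": f"Case {i}", "court": "Hypothetical HC"} for i in range(1000)]

ok = failed = 0
for success, _ in streaming_bulk(es, generate_actions(judgments), chunk_size=500):
    ok += success
    failed += not success
print(f"indexed={ok} failed={failed}")
```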

Posted 3 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Haryana

On-site

The ChatBot Engineer role involves progressing the development and support of the Chatbot platform to create solutions that better serve the business and, ultimately, the customer. You are expected to participate in all areas of the Chatbot team, working with other team members to identify challenges, research solutions, propose approaches for discussion, conduct pre-defined pilots, and support solutions through to implementation across the Chatbot platform. Through paired-programming activities, you will support the evolution of the Chatbot service with an infrastructure-as-code philosophy, ensuring best-practice techniques are understood. Your role will align with industry-standard cloud principles, encouraging and supporting the delivery teams in using the platform while assisting in 2nd- and 3rd-line troubleshooting where necessary. Hands-on engineering skills in at least one programming language are required for this role.

Key accountabilities:
- Ensure that the Chatbot platform remains supportable, flexible, and aligned to the original design operating principles.
- Provide 2nd-line operational and engineering platform delivery support for all Chatbot services and environments, including supporting and troubleshooting key areas of the Chatbot stack; this may involve membership of the 24x7 on-call rota.
- Get involved in all areas of the Chatbot product, such as inceptions, stand-ups, and retrospectives, taking the initiative to own varied stories across the entire Chatbot stack through to completion.
- Work closely with other Chatbot engineers to deliver solutions through paired-programming activities and test-driven development, while taking the initiative to develop further programming skills and best-practice capabilities.
- Engineer innovative solutions to a desired automatic, immutable end state alongside other Chatbot engineers, relying on low-cost, low-maintenance open-source technologies aligned to the Chatbot product roadmap as agreed with the Chatbot Platform Owner.
- Keep up to date with the latest news in the conversational-chat community and in your original area of expertise to further support the Chat Platform Owner in evolving the Chatbot service.

Requirements (Cloud Engineer, 6-9 years of experience, Expert Engineer):
- Experience in software delivery using agile methodologies.
- Deep knowledge of AWS and its various services, primarily EC2, VPC, IAM, serverless offerings, RDS, Route 53, CloudFront, and Elasticsearch.
- A strong grasp of core networking concepts and an understanding of serverless architecture and AWS offerings (a minimal Lambda handler sketch follows this listing).
- Proficiency with Terraform and/or CloudFormation core concepts.
- Hands-on experience with Unix scripting, Python, and CI/CD tools such as Jenkins, CodeBuild, and CodePipeline; good experience with container technologies is advantageous.
- Working knowledge of APIs, caching, and messaging, with strength in at least one of the languages/frameworks Node.js and Python.
- Knowledge of JavaScript/TypeScript, and of TDD and pair-programming best practices with CI/CD pipelines.
- Strong communication skills and an interest in a pair-programming environment.
Industry: IT/Computers-Software
Role: Chatbot Cloud Expert Engineer
Key Skills: Serverless computing, Python, Virtualization, Cloud Services, VPC, Chatbots, Serverless Architecture, JavaScript, Node.js, Cloud, Unix Scripting, Elasticsearch, EC2, Jenkins, CI/CD, Terraform, TypeScript, Chatbot Development, AWS
Education: B.E/B.Tech
Email ID: Resume.Augusta@augustainfotech.com
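The Lambda sketch referenced in the requirements above: a minimal Python handler of the kind an AWS serverless chatbot stack runs behind API Gateway. The payload shape and keyword routing are illustrative assumptions, not the employer's actual design.

```python
import json

def lambda_handler(event, context):
    """Minimal intent-routing handler behind API Gateway (hypothetical payload shape)."""
    body = json.loads(event.get("body") or "{}")
    utterance = body.get("message", "")

    # Trivial keyword routing standing in for a real NLU/intent-classification step.
    if "balance" in utterance.lower():
        reply = "Your balance enquiry has been routed to the accounts flow."
    else:
        reply = "Sorry, I didn't catch that. Could you rephrase?"

    # API Gateway proxy integrations expect this response envelope.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"reply": reply}),
    }
```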

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Lead Techno-Functional Architect for Blue Yonder Warehouse Management Systems (SaaS / on-premise) implementation and optimization, your core responsibility is to design best-in-class solutions using a suite of products in the warehouse management space. You will drive the design, optimization, and implementation of Blue Yonder Warehouse Management software solutions for customers across all verticals. Your role involves being the techno-functional liaison across key stakeholders, including key customer contacts, project managers, technical consultants, and 3rd-party consultants, to ensure the timely and successful delivery of the final solution.

The current technical environment includes software such as Java, Spring Boot, Gradle, Git, Hibernate, REST APIs, and OAuth, with an application architecture that is scalable, resilient, event-driven, and built on secure multi-tenant microservices. Cloud architecture uses MS Azure, along with frameworks and technologies such as Kubernetes, Kafka, Elasticsearch, Spark, NoSQL, RDBMS, Spring Boot, Gradle, Git, and Ignite.

Your responsibilities include leading solution design throughout all phases of the project life cycle, integrating systems with other platforms using API solutions, conducting system integration and user acceptance testing, and ensuring a seamless transition to steady-state associates. You will also be expected to assess business needs, identify revenue opportunities, communicate customer-specific needs to the Product team, and provide the necessary data visibility based on customer requirements.

The ideal candidate has a minimum of 5 years of experience implementing Warehouse Management Systems at Tier 1 warehouses and possesses exceptional communication, organization, and analytical skills. You should demonstrate a strong ability to problem-solve, develop innovative solutions, work in a fast-paced environment, and adapt to changing priorities. Strong business analysis skills, the ability to prepare and deliver presentations, and excellent interpersonal skills are also required. An advanced degree in Industrial Engineering, Supply Chain, Operations, Logistics, or a related field is preferred, although a Bachelor's degree with relevant industry experience may also be considered.

If you align with our values and have a desire to contribute to a diverse and inclusive environment, we welcome you to explore this opportunity to join our team as a Lead Techno-Functional Architect for Blue Yonder Warehouse Management Systems.

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

Greetings from Ubique Systems! We are currently looking to hire an AI/ML Developer with a focus on OCR (Optical Character Recognition). We prefer candidates who can join immediately or within 30 days. This is a remote position requiring a minimum of 5 years of relevant experience in the field of AI/ML.

As an AI/ML Developer, you will be responsible for building, deploying, and enhancing AI/ML models with a special emphasis on OCR, working on AI applications within a challenging supply chain environment. Your key responsibilities will include taking end-to-end ownership of AI/ML model development, translating real business challenges into AI solutions, leading OCR-based projects, and utilizing tools such as TensorFlow, Keras, PyTorch, GluonTS, and Darts. Collaboration with data engineers and integration of solutions with Azure Cloud will also be part of your role.

The ideal candidate should possess strong Python skills; a deep understanding of ML algorithms, NLP, and deep learning; and knowledge of OCR, neural networks, and model deployment. Hands-on experience with AI frameworks like TensorFlow, Keras, and PyTorch is essential, as is exposure to Azure and SQL/Mongo/Elasticsearch, along with good communication skills and the ability to articulate project experiences in detail. Candidates with experience in supply chain or transportation, as well as exposure to C#/.NET, will be given preference.

The interview process will be conducted on a rolling basis, and early applications are encouraged. If you are an immediate joiner with a passion for AI/ML and OCR, please DM us or share your updated resume at nikita.bhattacharya@ubique-systems.com. Join us and be part of our exciting AI/ML journey!
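A minimal OCR pass of the kind this role centers on, sketched with pytesseract. The input file name is hypothetical, the Tesseract binary must be installed separately, and the confidence threshold is an arbitrary illustrative choice.

```python
from PIL import Image
import pytesseract

image = Image.open("invoice_scan.png")  # hypothetical input document

# Basic OCR pass; production pipelines usually add deskewing, binarization,
# and layout analysis before extracting text.
text = pytesseract.image_to_string(image)
print(text)

# Word-level boxes and confidences, useful for downstream field extraction.
data = pytesseract.image_to_data(image, output_type=pytesseract.Output.DICT)
for word, conf in zip(data["text"], data["conf"]):
    if word.strip() and int(conf) > 60:  # keep reasonably confident words only
        print(word, conf)
```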

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Maharashtra

On-site

As a Senior Software Engineer on the Reporting Analytics team at Medallia, you will be responsible for delivering products that enable customers to access advanced analytics reports within the Medallia Experience Cloud platform. You will be an integral part of a full-stack, cross-functional team that focuses on building and maintaining core data analysis tools and APIs. Your main focus will be on backend engineering, where you will design and develop performant, scalable, and robust GraphQL APIs for Medallia's flagship product. This role offers the opportunity to tackle challenging technical and statistical problems at scale, take ownership of your products end-to-end, and contribute significantly to the evolution of our SaaS platform.

At Medallia, we foster a culture of continual learning, collaboration, and innovation. We value smarts, kindness, and a proactive approach to identifying and addressing pain points within our infrastructure and code base. Our global engineering team stays abreast of the latest technologies, encourages experimentation, and builds strong relationships with colleagues worldwide.

Key Responsibilities:
- Design, implement, and drive the technical roadmap for the Reporting Analytics team
- Collaborate closely with other engineering teams to ensure seamless product deliveries and uninterrupted service
- Mentor and coach junior engineers, and contribute to team growth through tech talks and presentations
- Work in an agile environment with daily standups, code reviews, and research spikes
- Take ownership of specific components and become a subject matter expert within the team
- Ship scalable and robust software, maintain its performance, and continuously enhance its functionality

Minimum Qualifications:
- 5+ years of experience in software development
- Proficiency in Java and experience building enterprise-grade, distributed systems
- Strong understanding of REST and GraphQL APIs
- Familiarity with software engineering fundamentals and SQL databases

Preferred Qualifications:
- BS/MS in Computer Science, Engineering, or a related field
- Agile process experience and the ability to work in globally distributed teams
- Background in statistical analysis and experience with NoSQL distributed databases
- Knowledge of Elasticsearch and Lucene, and experience with performance-optimization tools
- Experience implementing automated testing platforms and unit tests
- An analytical mindset and strong problem-solving skills
- A focus on quality and an ownership mentality in all tasks

This role is based remotely in Pune, and candidates are required to reside within the Pune metropolitan area. We do not offer relocation support at this time. Join us at Medallia and be part of a team that empowers exceptional individuals to create extraordinary experiences together.
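Although the team's services are built in Java, a GraphQL API like the one described can be exercised from any HTTP client. Here is a hedged Python sketch with an invented endpoint and schema, shown only to illustrate the request shape: a single POST carrying the query document and its variables.

```python
import requests

GRAPHQL_URL = "https://api.example.com/graphql"  # hypothetical endpoint

# Hypothetical schema: a survey with named, segmented scores.
query = """
query ReportScores($surveyId: ID!, $limit: Int!) {
  survey(id: $surveyId) {
    name
    scores(limit: $limit) { segment value }
  }
}
"""

resp = requests.post(
    GRAPHQL_URL,
    json={"query": query, "variables": {"surveyId": "42", "limit": 10}},
    timeout=10,
)
resp.raise_for_status()
payload = resp.json()
# GraphQL reports field-level failures in an "errors" array, not via HTTP status.
if payload.get("errors"):
    raise RuntimeError(payload["errors"])
print(payload["data"]["survey"]["name"])
```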

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Delhi, India

Remote

About HighLevel: HighLevel is an AI powered, all-in-one white-label sales & marketing platform that empowers agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. We are proud to support a global and growing community of over 2 million businesses, comprised of agencies, consultants, and businesses of all sizes and industries. HighLevel empowers users with all the tools needed to capture, nurture, and close new leads into repeat customers. As of mid 2025, HighLevel processes over 15 billion API hits and handles more than 2.5 billion message events every day. Our platform manages over 470 terabytes of data distributed across five databases, operates with a network of over 250 microservices, and supports over 1 million domain names. Our People With over 1,500 team members across 15+ countries, we operate in a global, remote-first environment. We are building more than software; we are building a global community rooted in creativity, collaboration, and impact. We take pride in cultivating a culture where innovation thrives, ideas are celebrated, and people come first, no matter where they call home. Our Impact As of mid 2025, our platform powers over 1.5 billion messages, helps generate over 200 million leads, and facilitates over 20 million conversations for the more than 2 million businesses we serve each month. Behind those numbers are real people growing their companies, connecting with customers, and making their mark - and we get to help make that happen. About the Role: With product and customer adoption accelerating, we’re unifying three critical pillars—Front-End Platform, Data Platform, and Core Platform/Infrastructure—under a leader to drive consistency, reliability, and velocity. You will define the strategy, mentor the team, and own the platform roadmap from developer experience to production uptime while leading our new Data Residency initiative to keep customer data within required geopolitical boundaries. Ideal Candidate Profile: A Technical Strategist: You have deep, hands-on experience with modern cloud-native ecosystems. You are not expected to code daily, but you can comfortably lead technical discussions on topics like: Cloud Providers: AWS, GCP, or Azure Infrastructure as Code: Terraform, Pulumi, or CloudFormation Containerization: Kubernetes, Docker CI/CD: Jenkins, GitLab CI, GitHub Actions, or similar Data Technologies: Experience with data warehousing (e.g., Snowflake, BigQuery) and data orchestration (e.g., Airflow, dbt, Dagster) Frontend Ecosystem: A solid understanding of the challenges in modern frontend development A Product Thinker: You have a customer-centric approach and experience treating internal platforms as products. You know how to build a roadmap, prioritize effectively, and communicate with your users. An Excellent Communicator: You can articulate a complex technical vision or strategy to both technical and non-technical stakeholders, generating buy-in and excitement across the organization. Responsibilities: Platform Vision & Strategy: Craft and socialize a 12–18-month roadmap that aligns business goals with engineering velocity. 
Prioritize “paved roads” for micro-frontends, micro-services, data pipelines, and infra services, along with multi-region deployments that satisfy data-residency commitments Front-End Platform: Build frameworks to drive consistency with reusable components and quality gates for Vue/TypeScript apps; eliminate repeated boilerplate and cut mean setup time Data Platform & Residency: Standardize data ingestion, governance, lineage, and observability across MongoDB, Firestore, and Elasticsearch; introduce contract testing to guarantee schema compatibility. Roll out a data-residency architecture (e.g., multi-regional clusters, customer-pinning, encryption key isolation) that meets EU, US, and APAC requirements Core Infra & Cloud: Own GKE clusters, networking, WAF/CDN, secrets, and Terraform/IaC, along with cloud-cost optimization DevEx & Reliability: Champion GitHub + Jenkins pipelines, progressive delivery, chaos experiments, and golden-path logging/OpenTelemetry standards Security, Compliance & Data Residency: Partner with Security to embed SOC 2/HIPAA controls, shift-left scanning, and policy-as-code, and build regional compliance playbooks (GDPR, CCPA, PDPB, etc.) together with Legal/Security People Leadership: Coach & grow the team of engineers (platform, SRE, data) to a high-trust, high-ownership culture Stakeholder Communication: Translate platform metrics (lead-time, change-failure-rate, MTTR, cost) into actionable narratives for Engineering, Product, and Exec teams Requirements: 15+ years total engineering experience, 5+ years leading platform/SRE/cloud teams for SaaS at scale Proven success running multi-disciplinary platforms (frontend, data, infra) on a major cloud (GCP preferred) and Kubernetes Hands-on depth with TypeScript/Node, container orchestration, Terraform/Helm, service meshes, and event-driven architectures Demonstrated delivery of data-residency or multi-region architectures—experience with GDPR-compliant EU clusters, US-only deployments, or similar Track record of instituting CI/CD, contract testing, observability (Prometheus/Grafana), and chaos engineering Comfort with regulated environments (SOC 2, HIPAA, or similar) Excellent people-leadership and cross-functional communication skills; able to influence from board-level vision to code-level reviews. Foster a strong, inclusive engineering culture of ownership, collaboration, and operational excellence Bonus Points: Experience managing managers and leading a multi-layered engineering organization Experience with FinOps and driving cloud cost optimization initiatives Familiarity with Vue, Vite, and monorepo tooling EEO Statement: At HighLevel, we value diversity. In fact, we understand it makes our organisation stronger. We are committed to inclusive hiring/promotion practices that evaluate skill sets, abilities, and qualifications without regard to any characteristic unrelated to performing the job at the highest level. Our objective is to foster an environment where really talented employees from all walks of life can be their true and whole selves, cherished and welcomed for their differences while providing excellent service to our clients and learning from one another along the way! Reasonable accommodations may be made to enable individuals with disabilities to perform essential functions.
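The data-residency architecture sketched in the responsibilities above (multi-regional clusters, customer-pinning) boils down to routing every read and write for a customer to that customer's home region. Below is a minimal Python sketch of the idea, assuming a hypothetical region-to-endpoint map and pinning table; all names are illustrative, not HighLevel's implementation.

# Hypothetical customer-pinning lookup: data for a customer must stay in
# the region it is pinned to. Endpoints and pins are illustrative only.
REGION_ENDPOINTS = {
    "eu": "https://db.eu.example.internal",
    "us": "https://db.us.example.internal",
    "apac": "https://db.apac.example.internal",
}

CUSTOMER_HOME_REGION = {"cust-42": "eu", "cust-99": "us"}  # normally a metadata store

def endpoint_for(customer_id: str) -> str:
    """Resolve the regional endpoint a customer's data must stay within."""
    region = CUSTOMER_HOME_REGION.get(customer_id)
    if region is None:
        raise LookupError(f"no residency pin recorded for {customer_id}")
    return REGION_ENDPOINTS[region]

print(endpoint_for("cust-42"))  # -> https://db.eu.example.internal

In a production design the pinning table would live in replicated metadata, and per-region encryption key isolation would ensure data cannot be decrypted outside its home boundary.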

Posted 3 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

As a Site Reliability Engineer Developer - Analyst at Goldman Sachs in Bengaluru, your role encompasses the discipline of Site Reliability Engineering (SRE). SRE combines software and systems engineering to construct and manage large-scale, fault-tolerant systems. In this position, you are entrusted with the critical responsibility of ensuring the availability and reliability of the firm's platform services to meet the needs of both internal and external users. Collaboration with business stakeholders is a key aspect of your work to develop and sustain production systems that can adapt swiftly to the dynamic global business landscape of the organization. The SRE team focuses on the development and maintenance of platforms that facilitate adherence to Observability requirements and SLA Management by GS Engineering Teams. Your responsibilities include the design, development, and operation of distributed systems that offer observability for Goldman's mission-critical applications and platform services across on-premises data centers and various public cloud environments. The team's core functions involve the provision of tools for alerting, metrics and monitoring, log collection and analysis, as well as tracing. These tools are utilized by numerous engineers daily, emphasizing the paramount importance of reliability in system features. In your role, you will collaborate with internal stakeholders, vendors, product owners, and fellow SREs to conceptualize and implement a large-scale distributed system capable of managing alert generation, metrics collection, log collection, and trace events efficiently. Operating in a production environment spanning cloud and on-premises data centers, you will be instrumental in defining observability features and spearheading their execution. Basic qualifications for this role include a minimum of 2 years of relevant work experience and proficiency in languages such as Java, Python, Go, and JavaScript, along with the Spring framework. Additionally, expertise in using Terraform for infrastructure deployment and management, along with strong programming skills encompassing code development, debugging, testing, and optimization, are essential. A solid background in algorithms, data structures, and software design, coupled with experience in distributed systems design, maintenance, and troubleshooting, is highly valued. Preferred experience for this role includes familiarity with cloud-native solutions in AWS or GCP, working knowledge of tools like Prometheus, Grafana, and PagerDuty, and experience with databases such as PostgreSQL, MongoDB, and Elasticsearch. Proficiency in open-source messaging systems like RabbitMQ and/or Kafka, as well as hands-on systems experience in UNIX/Linux and networking, especially in scaling for performance and debugging complex distributed systems, is advantageous.
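The metrics-and-alerting side of the observability platform described above can be illustrated with a minimal Python sketch using the prometheus_client library; the metric names and port are assumptions for illustration.

import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

def handle_request() -> None:
    with LATENCY.time():                  # records the block's duration into the histogram
        time.sleep(random.random() / 10)  # stand-in for real work
    REQUESTS.inc()

if __name__ == "__main__":
    start_http_server(8000)               # metrics exposed at http://localhost:8000/metrics
    while True:
        handle_request()

A collector such as Prometheus then scrapes the /metrics endpoint, and alerting rules fire on the resulting time series.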

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title - Principal / Senior Software Engineer (Java) Job Location - Baner, Pune, Maharashtra Domain - Security About the Role - Are you a passionate Software Engineer who has a proven track record of solving complex problems and being at the forefront of innovation? Pursuing a career at our client will allow you to write code and manipulate data in ways that have never been done before, driving automation of threat detection and response for one of the world’s fastest growing industries. You will lead the creation, testing, and deployment of cutting-edge security technology to enterprise customers across the globe. Above all else, this role will allow you to work and learn from some of the most talented people in the business as well as have a direct contribution to the growth and success of the company. The everyday hustle: Research and develop creative solutions across a wide range of cutting-edge technologies to continuously evolve our client's platform. Create REST APIs and integrations between various products to improve and automate our customers' threat detection. Manage the continuous integration and deployment processes of complex technologies. Perform code reviews to ensure consistent improvement. Proactively automate and improve all stages of the software development lifecycle. Interface closely with various parts of the business, both internally and externally, to ensure all users are leveraging the product with ease and to its full potential. Provide training and support to other team members as well as cultivate a culture of constant collaboration. Do you have what it takes? 5+ years of software development experience in Java, Spring Boot, and microservices. Must be proficient in the English language, both written and verbal Knowledge and ability to apply application security principles to our software process What makes you uncommon? Hands-on experience with one or more of the following technologies (Elasticsearch, Kafka, Apache Spark, Logstash, Hadoop/Hive, TensorFlow, Kibana, Athena/Presto/BigTable, Angular, React). Experience with cloud platforms such as AWS, GCP, or Azure. Solid understanding of unit testing, continuous integration and deployment practices. Experience with Agile Methodology. Higher education/relevant certifications.
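As a hedged illustration of the threat-detection integrations mentioned above, the sketch below queries recent high-severity events from Elasticsearch with the official Python client; the index name, field names, and endpoint are assumptions, not the client's actual schema.

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # illustrative local endpoint

# Hypothetical index and fields: fetch high-severity events from the last 15 minutes.
resp = es.search(
    index="security-events",
    query={
        "bool": {
            "filter": [
                {"term": {"severity": "high"}},
                {"range": {"@timestamp": {"gte": "now-15m"}}},
            ]
        }
    },
    size=20,
)
for hit in resp["hits"]["hits"]:
    print(hit["_source"])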

Posted 3 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

We are seeking a hands-on Infrastructure Engineer to join our team responsible for architecting, engineering, and evolving our firm’s Linux environment. While this role will be focused on the operating system itself, we are also looking for generalists with broad exposure to supporting technologies. The ideal candidate will understand the interaction between the operating system, the virtualization environment, supporting CI/CD tooling such as Git and Jenkins, as well as the network layer in between. We need engineers with an automation-first mindset - those who seek to solve problems with code and try to never deal with the same problem twice! We are a highly tech-focused firm, and developer best practice and tooling impacts the majority of our departments and teams. In this pivotal role, you will be facing every aspect of our business, from internal technologists to trading team members. You will develop deep expertise in the tools we leverage and then help others to find how to efficiently use the right tools in their own workflows. This will include being familiar with our extensive AI/LLM tooling, both for answering user questions and for leveraging APIs to enable AI-powered developer tools and coding assistants within IDEs. You will need to demonstrate and support containerization tooling as part of both local development as well as deployment to Kubernetes and cloud. You will form relationships with technologists and developers across the firm, discover novel use cases, engineer and integrate product enhancements, and help promote best practices internally. Key Responsibilities And Required Skills Architect, engineer, and support Linux environments (ideally with Red Hat-specific knowledge), in a multi-user, multi-server setup leveraging central authentication (Active Directory/LDAP) Act as an escalation point for complex infrastructure and user issues, providing expert troubleshooting and root cause analysis. Automate system administration and operational tasks using Bash, Python, and Ansible Manage and optimize containerization workflows (Docker, Podman, Kubernetes) Integrate and troubleshoot authorization and authentication technologies (Active Directory, Kerberos, LDAP, Microsoft Azure SSO) Support and enhance our Jenkins and Git tooling as part of our GitOps build and deployment CI/CD pipelines Collaborate with global teams to deliver a best-in-class development and research platform, while explaining solutions to both technical and non-technical colleagues Proactively adopt and champion the use of AI/LLM tools for code generation, documentation, troubleshooting, and automation, and then help to demonstrate these best-in-class solutions to our users Qualifications/Skills Desired Familiarity with HashiCorp Packer for image orchestration and lifecycle management Experience with security hardening of Linux operating systems Experience with PowerShell scripting for Windows integration Understanding of monitoring and logging technologies, particularly Prometheus, Grafana, and the ELK stack (Elasticsearch, Logstash, Kibana) Hands-on experience with building infrastructure on cloud platforms (AWS, GCP) Familiarity with network protocols (DNS, NTP/PTP, DHCP, PXE) and a solid understanding of network routing fundamentals, including how traffic flows between offices and across the WAN
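In the automation-first spirit the posting asks for, a repetitive manual check becomes a small script. Below is a minimal Python sketch that verifies a set of systemd services are active; the service names are hypothetical examples.

import subprocess

SERVICES = ["sshd", "chronyd", "node_exporter"]  # hypothetical examples

def is_active(service: str) -> bool:
    # `systemctl is-active --quiet` exits 0 only when the unit is active.
    result = subprocess.run(
        ["systemctl", "is-active", "--quiet", service], check=False
    )
    return result.returncode == 0

failed = [s for s in SERVICES if not is_active(s)]
if failed:
    print(f"inactive services: {', '.join(failed)}")
else:
    print("all services active")

Once a check like this proves useful, the usual next step is to express it as an Ansible task so it runs consistently across every host.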

Posted 3 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Requirements Should possess 6-10 years of development experience, primarily in building products for large enterprises. Expertise with Java programming (collections framework, multithreading, OOP), preferably with the recent versions of Java. Expertise with Data Structures and Algorithms (solving DSA problems). Expertise with Spring, Spring Boot, and microservices. Expertise in databases, preferably on Oracle, SQL Server, or PostgreSQL. Expertise with event-driven architecture concepts and asynchronous messaging, especially with Kafka. Exceptional problem-solving skills. You should be able to solve problems on data structures, object modelling, system design, and even architecture based on your experience. Must have experience working with at least one cloud, preferably AWS. Exposure to Elasticsearch is a plus. Hands-on experience in building concurrent and/or parallelized, highly performant, scalable applications. This job was posted by Mansi Shah from Caizin.
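To make the event-driven, asynchronous-messaging requirement concrete, here is a minimal Python sketch that publishes an event to Kafka using the kafka-python library; the broker address, topic name, and payload are illustrative assumptions.

import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # illustrative broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# send() is asynchronous; flush() blocks until buffered records are delivered.
producer.send("order-events", {"order_id": 123, "status": "CREATED"})
producer.flush()

Consumers subscribed to the same topic react to the event independently, which is what decouples services in an event-driven architecture.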

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

hackajob is collaborating with J.P. Morgan to connect them with exceptional tech professionals for this role. We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within the Consumer and Community Banking - Banking and Wealth Management Team, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives. Job Responsibilities Executes software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture Contributes to software engineering communities of practice and events that explore new and emerging technologies Adds to team culture of diversity, equity, inclusion, and respect Required Qualifications, Capabilities, And Skills Formal training or certification on software engineering concepts and 3+ years applied experience Hands-on experience with cloud-based applications, technologies and tools, deployment, monitoring and operations, such as Kubernetes, Prometheus, Fluentd, Slack, Elasticsearch, Grafana, Kibana, etc. Relational and NoSQL databases; developing and managing operations leveraging key event streaming, messaging and DB services such as Cassandra, MQ/JMS/Kafka, Aurora, RDS, Cloud SQL, BigTable, DynamoDB, MongoDB, Cloud Spanner, Kinesis, Cloud Pub/Sub, etc. Networking (Security, Load Balancing, Network Routing Protocols, etc.) Demonstrated experience in the fields of production engineering and automation. Strong understanding of cloud technology standards and practices. Proficiency in utilizing tools for monitoring, analysis, and troubleshooting, including Splunk, Dynatrace, Datadog, or equivalent. Preferred Qualifications, Capabilities, And Skills Ability to conduct detailed analysis on incidents to identify patterns and trends, thereby enhancing operational stability and efficiency. Familiarity with digital certificate management and automation tools. Knowledge of frameworks such as CI/CD pipelines. Excellent communication and collaboration skills.

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Work Your Magic with us! Ready to explore, break barriers, and discover more? We know you’ve got big plans – so do we! Our colleagues across the globe love innovating with science and technology to enrich people’s lives with our solutions in Healthcare, Life Science, and Electronics. Together, we dream big and are passionate about caring for our rich mix of people, customers, patients, and planet. That's why we are always looking for curious minds that see themselves imagining the unimaginable with us. Your Role Work as part of an Agile development team, taking ownership for one or more services Take ownership of features and epics, working as part of a team – ensuring excellence in design, code, test and operation A willingness to embrace change and foster a Growth and Learning culture on the team Mentoring team members through code review, design reviews Work with product owners to help refine the backlog, breaking down features and epics into executable stories Have a high quality software mindset – making sure that the code you write works Who You Are Bachelor’s/Master’s degree in computer science or equivalent. 5+ years of hands-on software engineering experience. Experience with Java, Spring, Spring Boot Experience in unit test frameworks. Ability to provide solutions based on the business requirements. Ability to collaborate with cross-functional teams. Ability to work with global teams and a flexible work schedule. Must have excellent problem-solving skills and be customer-centric. Experience with Microservices, CI/CD, Event Oriented Architectures and Distributed Systems Experience with cloud environments (e.g., Google Cloud Platform, Azure, Amazon Web Services, etc.) Familiarity with web technologies (e.g., JavaScript, HTML, CSS), data manipulation (e.g., SQL), and version control systems (e.g., GitHub). Familiarity with DevOps practices/principles, Agile/Scrum methodologies, CI/CD pipelines and the product development lifecycle. Familiarity with modern web APIs and full stack frameworks. Experience with Java, Elasticsearch, Kubernetes, Spring, Spring Boot. Experience developing eCommerce systems – especially B2B eCommerce – is a plus. What we offer: We are curious minds that come from a broad range of backgrounds, perspectives, and life experiences. We celebrate all dimensions of diversity and believe that it drives excellence and innovation, strengthening our ability to lead in science and technology. We are committed to creating access and opportunities for all to develop and grow at your own pace. Join us in building a culture of inclusion and belonging that impacts millions and empowers everyone to work their magic and champion human progress! Apply now and become a part of our diverse team!

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Gurgaon

On-site

Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success. Why Join Us? To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time-off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We’re building a more open world. Join us. Introduction to team The Lodging Connectivity Organization at Expedia Group is seeking an experienced backend developer and a seasoned technologist with a passion for building robust, scalable, and high-performance applications that help millions of people plan their travels and create lifelong memories. Join us as we revolutionize the travel industry! The Lodging Connectivity Organization at Expedia Group is at the forefront of innovation in the travel industry. We leverage our cutting-edge, in-house platform to seamlessly integrate with lodging partners worldwide, delivering exceptional travel experiences to millions of users annually. Through APIs, we enable travelers to access top-tier content, enjoy an immersive hotel shopping experience, and add convenient ancillaries to their journeys with ease. We are part of the partner connectivity platform that serves as the backbone of Expedia’s business, powering an efficient two-way marketplace that bridges supply and demand, leveraging diverse API integrations that source our platform with over 1M properties and generate more than 1M transactions per minute. Built with modern technologies like Java, Kotlin, Spring, DynamoDB, Elasticsearch or Redis, our ecosystem supports millions of travelers in finding the best travel products tailored to their needs. In this role, you will: Lead and actively participate in all phases of the software development lifecycle, including the design, analysis, development, and deployment efforts for multiple enterprise applications projects to tackle sophisticated business problems Deconstruct large, sophisticated problems into manageable and achievable breakthroughs driving streamlined execution by the team Provide technical guidance and mentoring to peers and more junior engineers, helping them overcome obstacles and find growth opportunities Collaborate with visionary EG leaders to architect and build robust applications and thoughtfully choose relevant technologies to evolve the EG travel platform Maintain, monitor and improve our solutions and systems with a focus on service excellence; apply industry standards and new technologies to improve efficiency, quality, and system performance Technologies we use: Java, Kotlin, Node.js, GraphQL, Dropwizard, Spring, Docker, Kafka, Cassandra, MongoDB, DataDog, Splunk, AWS cloud Experience and qualifications: Bachelor’s or Master’s degree in Computer Science or related technical field or equivalent related professional experience.
8+ years of professional, post-college software development in an object-oriented language A self-starter who is consistently looking to innovate and find ways to enhance our customers' experience, with the ability to own solutions end to end Strong technical proficiency, capable of problem solving and applying critical thinking A deep understanding of software design principles, algorithms and data structures and commitment to technical excellence Strong communication skills and highly effective collaborator Able to articulate your ideas to teammates, peers, and leaders, providing details and supporting your ideas with data where applicable Incorporate others' input and feedback and strive to find common ground Enjoy and take pride in the work of your team. You focus on the team's success and have a strong desire to lead by example Experience building on cloud infrastructure (AWS preferred) with sound architectural patterns and scalable, cost-efficient production quality systems Embrace change, always open to replacing what you built yesterday with something better today Accommodation requests If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. We are proud to be named as a Best Place to Work on Glassdoor in 2024 and be recognized for award-winning culture by organizations like Forbes, TIME, Disability:IN, and others. Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50 Employment opportunities and job offers at Expedia Group will always come from Expedia Group’s Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you’re confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs. Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.

Posted 3 weeks ago

Apply

3.0 years

4 - 5 Lacs

Mohali

On-site

About Us: We at Primotech are an IT global solutions company. We’ve been delivering software development and IT projects for clients ranging from big enterprises to small and medium-sized businesses. We blend agile methodologies with technical expertise to take businesses to the next level. What We Are Looking for: We are seeking a highly skilled Senior Python Developer to join our dynamic team. The ideal candidate will have extensive experience in Python (3+ years) and the Flask framework, along with expertise in microservices architecture, cloud services, and database management. You will play a key role in designing and optimizing scalable, high-performance distributed systems. Key Responsibilities: · Develop, optimize, and maintain scalable web applications using Python and Flask. · Implement and manage asynchronous tasks using Celery and Redis. · Design, optimize, and maintain PostgreSQL databases, ensuring high performance and scalability. · Work with Elasticsearch to manage indices and enhance search functionality. · Deploy and manage applications on cloud platforms such as Google Cloud and Azure. · Utilize Docker for containerization, ensuring efficient application deployment and scalability. · Architect and develop distributed systems across multiple cloud environments. · Integrate Firebase for real-time database solutions and services. · Design and implement microservices-based architectures. · Collaborate with cross-functional teams to define and implement best practices for development, security and deployment. · Monitor and troubleshoot application performance and system reliability. Required Skills & Qualifications: · Strong proficiency in Python (3+ years) and the Flask framework. · Hands-on experience with Celery & Redis for task management and message brokering. · Proficiency in PostgreSQL for efficient database management. · Experience with Elasticsearch for search indexing and performance optimization. · Strong expertise in cloud platforms like Google Cloud and Azure. · Knowledge of Docker for containerization and deployment. · Experience in designing and optimizing cross-database and cloud architecture. · Familiarity with Firebase for real-time data synchronization and cloud services. · Exposure to microservices architecture and best practices. · Experience with CI/CD pipelines and DevOps practices is a plus. Preferred Qualifications: · Experience with Kubernetes for container orchestration. · Knowledge of GraphQL and RESTful API design. · Familiarity with monitoring tools like Prometheus, Grafana, and the ELK stack. · Understanding of software security best practices. Interested candidates may share their resume at rashi.malhotra@primotech.com Job Type: Full-time Pay: ₹35,000.00 - ₹45,000.00 per month Location Type: In-person Schedule: Day shift Monday to Friday Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Preferred) Experience: Django: 3 years (Required) Python: 3 years (Required) Redis: 3 years (Preferred) Celery: 3 years (Preferred) Work Location: In person
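As a rough sketch of the Celery-plus-Redis pattern this role centers on, the following minimal Python example offloads work from a Flask request to a background worker; the broker URL, route, and task body are illustrative assumptions.

from celery import Celery
from flask import Flask

app = Flask(__name__)
celery_app = Celery("tasks", broker="redis://localhost:6379/0")  # illustrative broker

@celery_app.task
def reindex_document(doc_id: int) -> None:
    # Placeholder for real work, e.g. pushing the document into Elasticsearch.
    print(f"reindexing document {doc_id}")

@app.route("/documents/<int:doc_id>/reindex", methods=["POST"])
def trigger_reindex(doc_id: int):
    reindex_document.delay(doc_id)  # enqueue; a separate Celery worker executes it
    return {"queued": True}, 202

The request returns immediately with 202 Accepted while the worker processes the task, which is what keeps the web tier responsive under load.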

Posted 3 weeks ago

Apply

0 years

5 - 6 Lacs

Noida

On-site

Posted On: 1 Aug 2025 Location: Noida, UP, India Company: Iris Software Why Join Us? Are you inspired to grow your career at one of India’s Top 25 Best Workplaces in the IT industry? Do you want to do the best work of your life at one of the fastest-growing IT services companies? Do you aspire to thrive in an award-winning work culture that values your talent and career aspirations? It’s happening right here at Iris Software. About Iris Software At Iris Software, our vision is to be our client’s most trusted technology partner, and the first choice for the industry’s top professionals to realize their full potential. With over 4,300 associates across India, U.S.A, and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services. Our work covers complex, mission-critical applications with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation. Working at Iris Be valued, be inspired, be your best. At Iris Software, we invest in and create a culture where colleagues feel valued, can explore their potential, and have opportunities to grow. Our employee value proposition (EVP) is about “Being Your Best” – as a professional and person. It is about being challenged by work that inspires us, being empowered to excel and grow in your career, and being part of a culture where talent is valued. We’re a place where everyone can discover and be their best version. Job Description Java Lead Dev Position (Java + AWS) Senior Java professionals with strong experience in Java and AWS cloud platform-related technologies (11+ yrs) Strong experience of designing/architecting enterprise-grade projects in Java and AWS-based ecosystems Team handling experience with a proven track record of delivering quality end products in production with seamless production support as needed Hands-on individual responsible for producing excellent quality code, adhering to expected coding standards and industry best practices. Must have strong experience in Java 8, multithreading, Spring Boot, Oracle/PostgreSQL.
Must have good knowledge of Hibernate, caching frameworks, and memory management AWS - Deployment (Docker and Kubernetes) + Common Services (mainly S3, Lambda, CloudFront, API Gateway, CloudFormation and ALBs) Kafka, building event-driven microservices and streaming applications Good to have MongoDB and Elasticsearch knowledge Excellent problem-solving/troubleshooting skills High levels of ownership and commitment on deliverables Strong Communication Skills - Should be able to interact with client stakeholders comfortably to probe a technical problem or provide a clear progress update or clarify requirement specifications for team/peers Mandatory Competencies Programming Language - Java - Core Java (java 8+) Programming Language - Java - Java Multithreading Middleware - Java Middleware - Spring Boot Database - PostgreSQL - PostgreSQL Beh - Communication Programming Language - Java - Hibernate Cloud - AWS - AWS S3, S3 glacier, AWS EBS Cloud - AWS - AWS Lambda, AWS EventBridge, AWS Fargate Cloud - AWS - Amazon API Gateway Middleware - Message Oriented Middleware - Messaging (JMS, ActiveMQ, RabbitMQ, Kafka, SQS, ASB etc) DevOps/Configuration Mgmt - DevOps/Configuration Mgmt - Containerization (Docker, Kubernetes) Perks and Benefits for Irisians At Iris Software, we offer world-class benefits designed to support the financial, health and well-being needs of our associates to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.
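As a small illustration of the AWS services named above (sketched in Python with boto3 for brevity, though the role itself is Java-centric), the snippet below uploads a build artifact to S3 and invokes a Lambda; the bucket, key, and function names are hypothetical.

import json

import boto3

s3 = boto3.client("s3")
# Hypothetical bucket and key names for illustration.
s3.upload_file("build/app.jar", "example-artifacts-bucket", "releases/app.jar")

lam = boto3.client("lambda")
resp = lam.invoke(
    FunctionName="post-release-hook",  # hypothetical function
    Payload=json.dumps({"artifact": "releases/app.jar"}).encode("utf-8"),
)
print(resp["StatusCode"])  # 200 for a successful synchronous invocation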

Posted 3 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies