
35553 Kubernetes Jobs - Page 2

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

12.0 years

1 - 10 Lacs

Gurgaon

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Manage and mentor a team of data engineers, fostering a culture of innovation and continuous improvement
- Design and maintain robust data architectures, including databases and data warehouses
- Oversee the development and optimization of data pipelines for efficient data processing
- Implement measures to ensure data integrity, including validation, cleansing, and governance practices
- Work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver solutions
- Analyze, synthesize, and interpret data from a variety of sources, investigating, reconciling and explaining data differences to understand the complete data lifecycle
- Architect solutions with a modern technology stack and design public cloud applications on Azure
- Take a basic, structured, standard approach to work
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Undergraduate degree or equivalent experience
- 12+ years of implementation experience on time-critical production projects following key software development practices
- 8+ years of programming experience in Python or any programming language
- 6+ years of hands-on programming experience in Spark using Scala/Python
- 4+ years of hands-on working experience with Azure services such as Azure Databricks, Azure Data Factory, Azure Functions, and Azure App Service
- Good knowledge of writing SQL queries
- Good knowledge of building REST APIs
- Good knowledge of tools like Azure DevOps and GitHub
- Ability to understand the existing application codebase, perform impact analysis and update the code when required, based on the business logic or for optimization
- Ability to learn modern technologies and be part of fast-paced teams
- Proven excellent analytical and communication skills (both verbal and written)
- Proficiency with AI-powered development tools such as GitHub Copilot, AWS CodeWhisperer, or Google's Codey (Duet AI) is expected. Candidates should be adept at integrating these tools into their workflows to accelerate development, improve code quality, and enhance delivery velocity, and are expected to proactively leverage AI tools throughout the software development lifecycle to drive faster iteration, reduce manual effort, and boost overall engineering productivity

Preferred Qualification:
- Good knowledge of Docker and Kubernetes services

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life.
Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
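The responsibilities above mention ensuring data integrity through "validation, cleansing, and governance practices". As a rough, dependency-free sketch of what such a step can look like (the record fields and rules here are invented for illustration, not taken from the listing):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MemberRecord:
    member_id: str
    age: int
    plan: str

def cleanse(raw: dict) -> Optional[MemberRecord]:
    """Validate and normalize one raw record; return None if unusable."""
    member_id = str(raw.get("member_id", "")).strip()
    if not member_id:
        return None  # reject records missing a primary key
    try:
        age = int(raw.get("age", -1))
    except (TypeError, ValueError):
        return None  # reject non-numeric ages
    if not 0 <= age <= 120:
        return None  # simple range check as an integrity rule
    plan = str(raw.get("plan", "unknown")).strip().lower()
    return MemberRecord(member_id, age, plan)

rows = [
    {"member_id": " M1 ", "age": "42", "plan": "Gold"},
    {"member_id": "", "age": 30, "plan": "Silver"},   # rejected: no key
    {"member_id": "M3", "age": 999, "plan": "Gold"},  # rejected: bad age
]
clean = [r for r in (cleanse(x) for x in rows) if r is not None]
```

In a real pipeline, rejected rows would typically be routed to a quarantine table rather than silently dropped, so data differences can be investigated and reconciled.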

Posted 17 hours ago

Apply

0 years

0 Lacs

Gurgaon

On-site

MongoDB's mission is to empower innovators to create, transform, and disrupt industries by unleashing the power of software and data. We enable organizations of all sizes to easily build, scale, and run modern applications by helping them modernize legacy workloads, embrace innovation, and unleash AI. Our industry-leading developer data platform, MongoDB Atlas, is the only globally distributed, multi-cloud database and is available in more than 115 regions across AWS, Google Cloud, and Microsoft Azure. Atlas allows customers to build and run applications anywhere—on premises, or across cloud providers. With offices worldwide and over 175,000 new developers signing up to use MongoDB every month, it's no wonder that leading organizations, like Samsung and Toyota, trust MongoDB to build next-generation, AI-powered applications. MongoDB is growing rapidly and seeking a Software Engineer for the Data Platform team to be a key contributor to the overall internal data platform at MongoDB. The Data Platform team focuses on building reliable, flexible, and high quality data infrastructure such as a streaming platform, ML platform, and experimentation platform to enable all of MongoDB to utilize the power of data. As a Software Engineer, you will design and build a scalable data platform to help drive MongoDB's growth as a product and as a company, while also lending your technical expertise to other engineers as a mentor and trainer. You will tackle complex platform problems with the goal of making our platform more scalable, reliable, and robust. We are looking to speak to candidates who are based in Gurugram for our hybrid working model. 
Who you are:
- You have worked on building production-grade applications and are capable of making backend improvements in languages such as Python and Go
- You have experience building tools and platforms for business users and software developers
- You have experience with, or working knowledge of, cloud platforms and services
- You enjoy working on full-stack applications, including UI/UX, API design, databases and more
- You're looking for a high-impact, high-growth role with a variety of opportunities to drive adoption of data via services and tooling
- You're passionate about developing reliable and high-quality software
- You're curious, collaborative and intellectually honest
- You're a great team player

What you will do:
- Design and build UI, API and other data platform services for data users, including but not limited to analysts, data scientists and software engineers
- Work closely with product design teams to make improvements to the internal data platform services, with an emphasis on UI/UX
- Perform code reviews with peers and make recommendations on how to improve our code and software development processes
- Design boilerplate architecture that can abstract underlying data infrastructure from end users
- Further improve the team's testing and development processes
- Document and educate the larger team on best practices
- Help drive optimization, testing, and tooling to improve data platform quality
- Collaborate with other software engineers, machine learning experts, and stakeholders, taking the learning and leadership opportunities that arise every single day

Bonus points:
- You have experience with a modern JavaScript environment, including frontend frameworks such as React and TypeScript
- You are familiar with data infrastructure and tooling such as Presto, Hive, Spark and BigQuery
- You are familiar with deployment and configuration tools such as Kubernetes, Drone, and Terraform
- You are interested in web design and have experience working directly with product designers
- You have
experience designing and building microservices
- You have experience building a machine learning platform using tools like SparkML, TensorFlow, Seldon Core, etc.

Success measures:
- In three months, you will have familiarized yourself with much of our data platform services, be making regular contributions to our codebase, be collaborating regularly with stakeholders to widen your knowledge, and be helping to resolve incidents and respond to user requests.
- In six months, you will have successfully investigated, scoped, executed, and documented a small-to-medium-sized project and worked with stakeholders to make sure their user experience is vastly enhanced by improvements to our platform services.
- In a year, you will have become the key person for several projects within the team and will have contributed not only to the data platform's roadmap but to MongoDB's data-driven journey. You will have made several sizable contributions to the project and will regularly look for ways to improve the overall stability and scalability of the architecture.

To drive the personal growth and business impact of our employees, we're committed to developing a supportive and enriching culture for everyone. From employee affinity groups to fertility assistance and a generous parental leave policy, we value our employees' wellbeing and want to support them along every step of their professional and personal journeys. Learn more about what it's like to work at MongoDB, and help us make an impact on the world! MongoDB is committed to providing any necessary accommodations for individuals with disabilities within our application and interview process. To request an accommodation due to a disability, please inform your recruiter. MongoDB is an equal opportunities employer. Requisition ID: 2263175955
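One responsibility above is designing "boilerplate architecture that can abstract underlying data infrastructure from end users". A toy Python sketch of that pattern, with all class names invented here: analysts call one facade, and the backend behind it can be swapped without their code changing.

```python
from abc import ABC, abstractmethod

class QueryBackend(ABC):
    """Anything that can answer a query; real ones would wrap a warehouse."""
    @abstractmethod
    def run(self, sql: str) -> list:
        ...

class InMemoryBackend(QueryBackend):
    """Stand-in backend for tests; a production one might wrap BigQuery."""
    def __init__(self, tables: dict):
        self.tables = tables

    def run(self, sql: str) -> list:
        # Toy parser: supports only "SELECT * FROM <table>"
        table = sql.rsplit(" ", 1)[-1]
        return self.tables.get(table, [])

class DataPlatform:
    """The facade end users talk to; it hides the backend entirely."""
    def __init__(self, backend: QueryBackend):
        self._backend = backend

    def query(self, sql: str) -> list:
        return self._backend.run(sql)

platform = DataPlatform(InMemoryBackend({"signups": [{"user": "a"}, {"user": "b"}]}))
rows = platform.query("SELECT * FROM signups")
```

The value of the indirection is that migrating the team from one datastore to another only requires a new `QueryBackend` implementation, not changes to every caller.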

Posted 17 hours ago

Apply

7.0 - 9.0 years

0 Lacs

New Delhi, Delhi, India

On-site

The purpose of this role is to understand, model and facilitate change in a significant area of the business and technology portfolio, whether by line of business, geography or specific architecture domain, whilst building the overall architecture capability and knowledge base of the company.

Job Description:

Role Overview: We are seeking a highly skilled and motivated Cloud Data Engineering Manager to join our team. The role is critical to the development of a cutting-edge reporting platform designed to measure and optimize online marketing campaigns. The GCP Data Engineering Manager will design, implement, and maintain scalable, reliable, and efficient data solutions on Google Cloud Platform (GCP). The role focuses on enabling data-driven decision-making by developing ETL/ELT pipelines, managing large-scale datasets, and optimizing data workflows. The ideal candidate is a proactive problem-solver with strong technical expertise in GCP, a passion for data engineering, and a commitment to delivering high-quality solutions aligned with business needs.

Key Responsibilities:

Data Engineering & Development:
- Design, build, and maintain scalable ETL/ELT pipelines for ingesting, processing, and transforming structured and unstructured data
- Implement enterprise-level data solutions using GCP services such as BigQuery, Dataform, Cloud Storage, Dataflow, Cloud Functions, Cloud Pub/Sub, and Cloud Composer
- Develop and optimize data architectures that support real-time and batch data processing
- Build, optimize, and maintain CI/CD pipelines using tools like Jenkins, GitLab, or Google Cloud Build
- Automate testing, integration, and deployment processes to ensure fast and reliable software delivery

Cloud Infrastructure Management:
- Manage and deploy GCP infrastructure components to enable seamless data workflows
- Ensure data solutions are robust, scalable, and cost-effective, leveraging GCP best practices

Infrastructure Automation and Management:
- Design, deploy, and maintain scalable and secure infrastructure on GCP
- Implement Infrastructure as Code (IaC) using tools like Terraform
- Manage Kubernetes clusters (GKE) for containerized workloads

Collaboration and Stakeholder Engagement:
- Work closely with cross-functional teams, including data analysts, data scientists, DevOps, and business stakeholders, to deliver data projects aligned with business goals
- Translate business requirements into scalable technical solutions while collaborating with team members to ensure successful implementation

Quality Assurance & Optimization:
- Implement best practices for data governance, security, and privacy, ensuring compliance with organizational policies and regulations
- Conduct thorough quality assurance, including testing and validation, to ensure the accuracy and reliability of data pipelines
- Monitor and optimize pipeline performance to meet SLAs and minimize operational costs

Qualifications and Certifications:
- Education: Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field
- Experience: Minimum of 7 to 9 years of experience in data engineering, with at least 4 years working on GCP cloud platforms, and proven experience designing and implementing data workflows using GCP services like BigQuery, Dataform, Cloud Dataflow, Cloud Pub/Sub, and Cloud Composer
- Certifications: Google Cloud Professional Data Engineer certification preferred

Key Skills:

Mandatory Skills:
- Advanced proficiency in Python for data pipelines and automation
- Strong SQL skills for querying, transforming, and analyzing large datasets
- Strong hands-on experience with GCP services, including Cloud Storage, Dataflow, Cloud Pub/Sub, Cloud SQL, BigQuery, Dataform, Compute Engine and Kubernetes Engine (GKE)
- Hands-on experience with CI/CD tools such as Jenkins, GitHub or Bitbucket
- Proficiency in Docker, Kubernetes, and Terraform or Ansible for containerization, orchestration, and infrastructure as code (IaC)
- Familiarity with workflow orchestration tools like Apache Airflow or Cloud Composer
- Strong understanding of Agile/Scrum methodologies

Nice-to-Have Skills:
- Experience with other cloud platforms like AWS or Azure
- Knowledge of data visualization tools (e.g., Power BI, Looker, Tableau)
- Understanding of machine learning workflows and their integration with data pipelines

Soft Skills:
- Strong problem-solving and critical-thinking abilities
- Excellent communication skills to collaborate with technical and non-technical stakeholders
- Proactive attitude towards innovation and learning
- Ability to work independently and as part of a collaborative team

Location: Bengaluru
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
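The core of the role above is ETL/ELT pipelines that ingest raw data, drop malformed rows, and derive fields. A minimal, dependency-free Python sketch of one such transform step (in this role it would run on Dataflow or inside BigQuery; the event schema here is invented for illustration):

```python
import json

def transform(events):
    """Parse raw JSON click events, skip malformed rows, derive a field."""
    out = []
    for line in events:
        try:
            e = json.loads(line)
        except json.JSONDecodeError:
            continue  # a real pipeline would send this to a dead-letter sink
        if "campaign" not in e or "clicks" not in e:
            continue  # schema validation: required fields must be present
        clicks = int(e["clicks"])
        out.append({
            "campaign": e["campaign"],
            "clicks": clicks,
            "engaged": clicks > 0,  # derived field for downstream reporting
        })
    return out

raw = [
    '{"campaign": "c1", "clicks": 3}',
    'not json',                         # malformed: skipped
    '{"campaign": "c2", "clicks": 0}',
]
rows = transform(raw)
```

Keeping the transform a pure function of its input, as here, is what makes batch and streaming execution interchangeable and the step easy to unit-test.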

Posted 17 hours ago

Apply

5.0 years

9 - 13 Lacs

Gurgaon

On-site

Job Title: Senior Software Engineer - AI/ML (Tech Lead)
Experience: 5+ years
Location: Gurugram
Notice Period: Immediate joiners only

Roles & Responsibilities:
- Design, develop, and deploy robust, scalable AI/ML-driven products and features across diverse business verticals
- Provide technical leadership and mentorship to a team of engineers, ensuring delivery excellence and skill development
- Drive end-to-end execution of projects, from architecture and coding to testing, deployment, and post-release support
- Collaborate cross-functionally with Product, Data, and Design teams to align technology efforts with product strategy
- Build and maintain ML infrastructure and model pipelines, ensuring performance, versioning, and reproducibility
- Lead and manage engineering operations, including monitoring, incident response, logging, performance tuning, and uptime SLAs
- Take ownership of CI/CD pipelines, DevOps processes, and release cycles to support rapid, reliable deployments
- Conduct code reviews, enforce engineering best practices, and manage team deliverables and timelines
- Proactively identify bottlenecks or gaps in engineering or operations and implement process improvements
- Stay current with trends in AI/ML, cloud technologies, and MLOps to continuously elevate team capabilities and product quality

Tools & Platforms:
- Languages & Frameworks: Python, FastAPI, PyTorch, TensorFlow, Hugging Face Transformers
- MLOps & Infrastructure: MLflow, DVC, Airflow, Docker, Kubernetes, Terraform, AWS/GCP
- CI/CD & DevOps: GitHub, GitLab CI/CD, Jenkins
- Monitoring & Logging: Prometheus, Grafana, ELK Stack (Elasticsearch, Logstash, Kibana), Sentry
- Project & Team Management: Jira, Notion, Confluence
- Analytics: Mixpanel, Google Analytics
- Collaboration & Prototyping: Slack, Figma, Miro

Job Type: Full-time
Pay: ₹900,000.00 - ₹1,300,000.00 per year

Application Questions:
- How many years of experience do you have developing AI/ML-based tools?
- How many years of experience do you have developing AI/ML projects?
- How many years of experience do you have handling a team?
- Current CTC?
- Expected CTC?
- In how many days can you join if shortlisted?
- Current location?
- Are you OK to work from the office (Gurugram, Sector 54)?
- Rate your English communication skills out of 10 (1 is lowest, 10 is highest)
- Please mention all tech skills that make you a fit for this role
- Have you gone through the JD, and are you OK to perform all roles and responsibilities?

Work Location: In person
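The listing above asks for model pipelines with "versioning, and reproducibility". One common building block (shown here as a hedged, tool-agnostic Python sketch; tools like MLflow or DVC provide richer versions of the same idea) is a deterministic fingerprint of the training configuration, so identical runs share an ID and changed configs get a new one:

```python
import hashlib
import json

def run_fingerprint(config: dict) -> str:
    """Deterministic run ID: hash of the canonically serialized config."""
    # sort_keys makes serialization independent of dict insertion order
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:12]

a = run_fingerprint({"lr": 0.001, "epochs": 10, "model": "bert-base"})
b = run_fingerprint({"model": "bert-base", "epochs": 10, "lr": 0.001})  # same config, reordered
c = run_fingerprint({"lr": 0.01, "epochs": 10, "model": "bert-base"})   # changed lr
```

Tagging model artifacts and metrics with such an ID makes it cheap to detect when a "new" run is actually a byte-for-byte repeat of an old one.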

Posted 17 hours ago

Apply

3.0 years

20 - 25 Lacs

Gurgaon

Remote

About Us: Sun King (Greenlight Planet) is a multinational, for-profit business that designs, distributes, and finances solar-powered home energy products, with an underserved population in mind: the 1.8 billion global consumers for whom the old-fashioned electrical grid is either unavailable or too expensive. Over a decade in business, the company is now a leading global brand in emerging markets across Asia and Sub-Saharan Africa. Greenlight's Sun King™ products provide modern light and energy to 32 million people in more than 60 countries, and the company has sold over 8 million products worldwide. From the company's wide range of trusted Sun King™ solar lamps and home energy systems, to its innovative distribution partnerships, to its EasyBuy™ pay-as-you-go consumer financing model, Greenlight Planet continuously strives to meet the evolving needs of the off-grid market. Greenlight stays in touch with underserved consumers' needs in part by operating its own direct-to-consumer sales network, including thousands of trusted sales agents (called "Sun King Energy Officers") in local communities. For Sun King Energy Officers, this is not only a good source of income and employment; they also become important members of their communities, bringing light and catering to local energy needs. Today, with over 2,700 full-time employees in 15 countries, we remain continuously impressed at how each new team member contributes unique and innovative solutions to the global off-grid challenge, from new product designs, to innovative sales and distribution strategies, to setting up better collection mechanisms, to better training strategies, to more efficient logistical and after-sales service systems. We listen closely to each other to improve our products, our service, and ultimately, the lives of underserved consumers.
Job location: Gurugram (Hybrid)

About the role: Sun King is looking for a self-driven infrastructure engineer who is comfortable working in a fast-paced startup environment and balancing the needs of multiple development teams and systems. You will work on improving our current IaC, observability stack, and incident response processes. You will work with the data science, analytics, and engineering teams to build optimized CI/CD pipelines, scalable AWS infrastructure, and Kubernetes deployments.

What you would be expected to do:
- Work with engineering, automation, and data teams on various infrastructure requirements
- Design modular and efficient GitOps CI/CD pipelines, agnostic to the underlying platform
- Manage AWS services for multiple teams
- Manage custom data store deployments, such as sharded MongoDB clusters, Elasticsearch clusters, and upcoming services
- Deploy and manage Kubernetes resources
- Deploy and manage custom metrics exporters, trace data, and custom application metrics, and design dashboards that query metrics from multiple resources, as an end-to-end observability stack solution
- Set up incident response services and design effective processes
- Deploy and manage critical platform services like OPA and Keycloak for IAM
- Advocate best practices for high availability and scalability when designing AWS infrastructure and observability dashboards, implementing IaC, deploying to Kubernetes, and designing GitOps CI/CD pipelines

You might be a strong candidate if you have/are:
- Hands-on experience with Docker or any other container runtime environment, and Linux experience with the ability to perform basic administrative tasks
- Experience working with web servers (nginx, Apache) and cloud providers (preferably AWS)
- Hands-on scripting and automation experience (Python, Bash), with experience debugging and troubleshooting Linux environments and cloud-native deployments
- Experience building CI/CD pipelines, with familiarity with monitoring and alerting systems (Grafana, Prometheus, and exporters)
- Knowledge of web architecture, distributed systems, and single points of failure
- Familiarity with cloud-native deployments and concepts like high availability, scalability, and bottlenecks
- Good networking fundamentals: SSH, DNS, TCP/IP, HTTP, SSL, load balancing, reverse proxies, and firewalls

Good to have:
- Experience with backend development, setting up databases, and performance tuning using parameter groups
- Working experience in Kubernetes cluster administration and Kubernetes deployments
- Experience working alongside SecOps engineers
- Basic knowledge of Envoy, service mesh (Istio), and SRE concepts like distributed tracing
- Setup and usage of OpenTelemetry, central logging, and monitoring systems

Job Type: Full-time
Pay: ₹2,000,000.00 - ₹2,500,000.00 per year

Benefits:
- Cell phone reimbursement
- Flexible schedule
- Health insurance
- Internet reimbursement
- Provident Fund
- Work from home

Application Questions:
- What's your expected CTC?
- What's your notice period?
- What's your current CTC?

Experience:
- AWS: 3 years (required)
- Linux: 3 years (required)
- Python: 2 years (required)

Work Location: In person
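The role above involves incident response and debugging cloud-native deployments, where transient failures are routine. A small illustrative Python helper of the kind often used when probing a recovering service: retry with exponential backoff (all names here are hypothetical; the demo stubs out sleeping so it runs instantly):

```python
import time

def with_backoff(fn, attempts: int = 4, base_delay: float = 0.01, sleep=time.sleep):
    """Call fn(), retrying on exception with exponential backoff."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: let the caller/alerting see it
            sleep(base_delay * (2 ** i))  # 0.01s, 0.02s, 0.04s, ...

calls = {"n": 0}

def flaky():
    """Simulated endpoint that fails twice before recovering."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service not ready")
    return "ok"

result = with_backoff(flaky, sleep=lambda _: None)  # no real sleeping in the demo
```

Production variants usually add jitter to the delay so many retrying clients don't hammer the service in synchronized waves.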

Posted 17 hours ago

Apply

7.0 years

3 - 6 Lacs

Gurgaon

On-site

Additional Locations: India-Haryana, Gurgaon

Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance

At Boston Scientific, we'll give you the opportunity to harness all that's within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we'll help you in advancing your skills and career. Here, you'll be supported in progressing, whatever your ambitions.

System Administrator - AI/ML Platform: We are looking for a detail-oriented and technically proficient AI/ML cloud platform administrator to manage, monitor, and secure our cloud-based platforms supporting machine learning and data science workloads. This role requires deep familiarity with both AWS and Azure cloud services, and strong experience in platform configuration, resource provisioning, access management, and operational automation. You will work closely with data scientists, MLOps engineers, and cloud security teams to ensure high availability, compliance, and performance of our AI/ML platforms.

Your responsibilities will include:
- Provision, configure, and maintain ML infrastructure on AWS (e.g., SageMaker, Bedrock, EKS, EC2, S3) and Azure (e.g., Azure Foundry, Azure ML, AKS, ADF, Blob Storage)
- Manage cloud resources (VMs, containers, networking, storage) to support distributed ML workflows
- Deploy and manage open-source ML orchestration frameworks such as LangChain and LangGraph
- Implement RBAC, IAM policies, Azure AD, and Key Vault configurations to manage secure access
- Monitor security events, handle vulnerabilities, and ensure data encryption and compliance (e.g., ISO, HIPAA, GDPR)
- Monitor and optimize the performance of ML services, containers, and jobs
- Set up observability stacks using Fiddler, CloudWatch, Azure Monitor, Grafana, Prometheus, or ELK
- Manage and troubleshoot issues related to container orchestration (Docker, Kubernetes: EKS/AKS)
- Use Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Bicep to automate environment provisioning
- Collaborate with MLOps teams to automate deployment pipelines and model operationalization
- Implement lifecycle policies, quotas, and data backups for storage optimization

Required Qualifications:
- Bachelor's/Master's in Computer Science, Engineering, or a related discipline
- 7 years in cloud administration, with 2+ years supporting AI/ML or data platforms
- Proven hands-on experience with both AWS and Azure
- Proficiency in Terraform, Docker, Kubernetes (AKS/EKS), Git, and Python or Bash scripting
- Security practices: IAM, RBAC, encryption standards, VPC/network setup

Requisition ID: 611331

As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most, united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do, as a global business and as a global corporate citizen. So, choosing a career with Boston Scientific (NYSE: BSX) isn't just business, it's personal. And if you're a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!

Posted 17 hours ago

Apply

6.0 years

1 - 1 Lacs

Delhi

Remote

Position: Senior Developer Analyst
Budget: 1.5
Experience: 6+ years
Salary range: 1,00,000-1,20,000
Remote

About the Role: We are seeking a highly skilled Senior Developer Analyst with strong experience in payment processing systems to join our growing technology team. This role involves both technical development and analytical responsibilities, focusing on designing, implementing, and maintaining secure, high-performance payment solutions. You will collaborate with product managers, architects, QA, and other developers to deliver mission-critical features for payment gateways, merchant services, and transaction processing engines.

Key Responsibilities:
- Analyze business requirements related to payment processing and translate them into technical designs and development plans
- Design and develop scalable, secure, and high-performance payment modules and integrations
- Integrate with third-party payment processors (e.g., Stripe, PayPal, Adyen, Worldpay, Razorpay)
- Ensure compliance with PCI-DSS and other regulatory requirements in all payment-related workflows
- Monitor, debug, and improve existing payment flows and handle exception/error scenarios effectively
- Collaborate with cross-functional teams including Product, QA, DevOps, and Support
- Write clean, maintainable, and testable code with appropriate documentation
- Conduct code reviews and mentor junior team members
- Analyze payment transaction data to identify trends, issues, and opportunities for improvement
- Support and optimize recurring billing, refunds, chargebacks, and fraud detection processes

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field
- 5+ years of experience in software development, with at least 3 years in payment processing systems
- Proficiency in programming languages such as Java, Python, Node.js, or C#
- Strong understanding of REST APIs, webhooks, and message queues
- Experience with relational databases (e.g., PostgreSQL, MySQL) and/or NoSQL databases
- Familiarity with PCI-DSS compliance, tokenization, and data encryption best practices
- Deep understanding of the payment lifecycle, transaction statuses, settlement, and reconciliation
- Experience integrating with payment gateways, acquirers, or processors
- Strong debugging and problem-solving skills
- Excellent communication and analytical thinking

Preferred Qualifications:
- Experience with fraud prevention tools (e.g., Riskified, Sift)
- Knowledge of digital wallets, UPI, BNPL, and international payment protocols
- Familiarity with microservices architecture and cloud environments (e.g., AWS, Azure, GCP)
- Exposure to DevOps tools (Docker, Kubernetes, CI/CD pipelines)

Job Types: Full-time, Permanent
Pay: ₹100,000.00 - ₹120,000.00 per year
Work Location: In person
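The listing above asks for a deep understanding of settlement and reconciliation. At its core, reconciliation is a set comparison between internal transaction records and the processor's report; a minimal Python sketch (all field names and amounts here are invented for illustration):

```python
def reconcile(internal, processor):
    """Compare internal transactions against a processor report.

    Returns txn IDs that are missing on either side, plus IDs present on
    both sides whose amounts disagree.
    """
    ours = {t["txn_id"]: t["amount"] for t in internal}
    theirs = {t["txn_id"]: t["amount"] for t in processor}
    return {
        "missing_at_processor": sorted(ours.keys() - theirs.keys()),
        "missing_internally": sorted(theirs.keys() - ours.keys()),
        "amount_mismatch": sorted(
            k for k in ours.keys() & theirs.keys() if ours[k] != theirs[k]
        ),
    }

report = reconcile(
    internal=[{"txn_id": "t1", "amount": 500}, {"txn_id": "t2", "amount": 250}],
    processor=[
        {"txn_id": "t1", "amount": 500},
        {"txn_id": "t2", "amount": 200},  # amount disagrees with ours
        {"txn_id": "t3", "amount": 100},  # we have no record of this one
    ],
)
```

In practice each discrepancy bucket feeds a different workflow: missing transactions trigger investigation, and amount mismatches often trace back to fees, refunds, or currency conversion applied on one side only.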

Posted 17 hours ago

Apply

0 years

6 - 12 Lacs

Delhi

Remote

Position: SRE Developer
Experience: 10+ years
Location: Remote
Budget: 1.20 LPM
Salary range: 1,00,000 INR
Duration: 6 months (C2C)

JD:

Technical Skills:
- Programming: proficiency in languages like Python, Bash, or Java is essential
- Operating Systems: deep understanding of Linux/Windows operating systems and networking concepts
- Cloud Technologies: experience with AWS and Azure, including services, architecture, and best practices
- Containerization and Orchestration: hands-on experience with Docker, Kubernetes, and related tools
- Infrastructure as Code (IaC): familiarity with tools like Terraform, CloudFormation or Azure CLI
- Monitoring and Observability: experience with tools like Splunk, New Relic or Azure Monitor
- CI/CD: experience with continuous integration and continuous delivery pipelines, GitHub, and GitHub Actions
- Knowledge of supporting Azure ML, Databricks and other related SaaS tools

Preferred Qualifications:
- Experience with specific cloud platforms (AWS, Azure)
- Certifications related to cloud engineering or DevOps
- Experience with microservices architecture, including supporting AI/ML solutions
- Experience with large-scale system management and configuration

Job Type: Full-time
Pay: ₹50,000.00 - ₹100,000.00 per month
Work Location: In person

Posted 17 hours ago

Apply

0 years

8 - 10 Lacs

Delhi Cantonment

Remote

ABOUT TIDE At Tide, we are building a business management platform designed to save small businesses time and money. We provide our members with business accounts and related banking services, but also a comprehensive set of connected administrative solutions from invoicing to accounting. Launched in 2017, Tide is now used by over 1 million small businesses across the world and is available to UK, Indian and German SMEs. Headquartered in central London, with offices in Sofia, Hyderabad, Delhi, Berlin and Belgrade, Tide employs over 2,000 employees. Tide is rapidly growing, expanding into new products and markets and always looking for passionate and driven people. Join us in our mission to empower small businesses and help them save time and money. ABOUT THE TEAM Our 40+ engineering teams are working on designing, creating and running the rich product catalogue across our business and enablement areas (e.g. Payments Services, Admin Services, Ongoing Monitoring, etc.). We have a long roadmap ahead of us and always have interesting problems to tackle. We trust and empower our engineers to make real technical decisions that affect multiple teams and shape the future of Tide's Global One Platform. We work in small autonomous teams, grouped under common domains owning the full lifecycle of products and microservices in Tide's service catalogue. Our engineers self-organize, gather together to discuss technical challenges, and set their own guidelines in the different Communities of Practice regardless of where they currently stand in our Growth Framework. ABOUT THE ROLE As a Full Stack Engineer at Tide, you will be a key contributor to our engineering teams, working on designing, creating, and running the rich product catalogue across our business and enablement areas. 
You will have the opportunity to make a real difference by taking ownership of engineering practices and contributing to our event-driven Microservice Architecture, which currently consists of over 200 services owned by more than 40 teams. Design, build, run, and scale the services your team owns globally (you design it, you build it, you run it, you scale it). Work on both new and existing products, tackling interesting and complex problems. Collaborate closely with Product Owners to translate user needs, business opportunities, and regulatory requirements into well-engineered solutions. Define and maintain the services your team owns, exposing and consuming RESTful APIs with a focus on good API design. Learn and share knowledge with fellow engineers, as we believe in experimentation and collaborative learning for career growth. Have the opportunity to join our Backend and Web Communities of Practice, where your input on improving processes and maintaining high quality will be valued. WHAT ARE WE LOOKING FOR A sound knowledge of a backend framework such as Spring/Spring Boot, with experience in writing microservices that expose and consume RESTful APIs. While Java experience is not mandatory, a willingness to learn is essential as most of our services are written in Java. Experience in engineering scalable and reliable solutions in a cloud-native environment, with a strong understanding of CI/CD fundamentals and practical Agile methodologies. Some experience in web development, with a proven track record of building server-side applications, and detailed knowledge of the relevant programming languages for your stack. Strong knowledge of Semantic HTML, CSS3, and JavaScript (ES6). Solid experience with Angular 2+, RxJS, and NgRx. A passion for building great products in small, autonomous, agile teams.
Experience building sleek, high-performance user interfaces and complex web applications that have been successfully shipped to customers. A mindset of delivering secure, well-tested, and well-documented software that integrates with various third-party providers. Solid experience using testing tools such as Jest, Cypress, or similar. A passion for automation tests and experience writing testable code. OUR TECH STACK Java 17 , Spring Boot and JOOQ to build the RESTful APIs of our microservices Event-driven architecture with messages over SNS+SQS and Kafka to make them reliable Primary datastores are MySQL and PostgreSQL via RDS or Aurora (we are heavy AWS users) Angular 15+ (including NgRx and Angular Material) Nrwl Nx to manage them as mono repo Storybook as live components documentation Node.js, NestJs and PostgreSQL to power up the BFF middleware Contentful to provide some dynamic content to the apps Docker, Terraform, EKS/Kubernetes used by the Cloud team to run the platform DataDog, ElasticSearch/Fluentd/Kibana, Semgrep, LaunchDarkly, and Segment to help us safely track, monitor and deploy GitHub with GitHub actions for Sonarcloud, Snyk and solid JUnit/Pact testing to power the CI/CD pipelines WHAT YOU WILL GET IN RETURN Make work, work for you! We are embracing new ways of working and support flexible working arrangements. With our Working Out of Office (WOO) policy our colleagues can work remotely from home or anywhere in their home country. Additionally, you can work from a different country for up to 90 days a year. Plus, you'll get: Competitive salary Self & Family Health Insurance Term & Life Insurance OPD Benefits Mental wellbeing through Plumm Learning & Development Budget WFH Setup allowance 25 Annual leaves Family & Friendly Leaves TIDEAN WAYS OF WORKING At Tide, we're Member First and Data Driven, but above all, we're One Team. Our Working Out of Office (WOO) policy allows you to work from anywhere in the world for up to 90 days a year. 
We are remote first, but when you do want to meet new people, collaborate with your team or simply hang out with your colleagues, our offices are always available and equipped to the highest standard. We offer flexible working hours and trust our employees to do their work well, at times that suit them and their team. TIDE IS A PLACE FOR EVERYONE At Tide, we believe that we can only succeed if we let our differences enrich our culture. Our Tideans come from a variety of backgrounds and experience levels. We consider everyone irrespective of their ethnicity, religion, sexual orientation, gender identity, family or parental status, national origin, veteran, neurodiversity status or disability status. We believe it's what makes us awesome at solving problems! We are One Team and foster a transparent and inclusive environment, where everyone's voice is heard. Tide Website: https://www.tide.co/en-in/ Tide LinkedIn: https://www.linkedin.com/company/tide-banking/mycompany/ Your personal data will be processed by Tide for recruitment purposes and in accordance with Tide's Recruitment Privacy Notice.
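Tide's stack sends events over SNS+SQS and Kafka, and both offer at-least-once delivery, so consumers must tolerate duplicate deliveries. A minimal idempotent-consumer sketch (the event shape and in-memory stores are invented for illustration; a real service would persist seen IDs durably):

```python
# At-least-once delivery (SQS/Kafka style) can redeliver a message, so the
# consumer remembers processed event IDs and skips duplicates.

processed_ids = set()
ledger = []  # stands in for the real side effect (DB write, API call, ...)

def handle_event(event: dict) -> bool:
    """Apply the event exactly once; return False if it was a duplicate."""
    if event["id"] in processed_ids:
        return False              # duplicate delivery, safely ignored
    ledger.append(event)          # the actual side effect happens here
    processed_ids.add(event["id"])
    return True
```

The dedupe set is the simplest form of the idempotent-receiver pattern; production systems usually back it with a unique-key constraint in the datastore instead of process memory.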

Posted 17 hours ago

Apply

0 years

0 Lacs

India

Remote

Company : Mindware Infotech (A Startup Venture from Mindware) Location : Dwarka, New Delhi Job Type : Full-time, On-site Working Hours : 10:00 AM - 7:00 PM (Monday to Saturday) Work Model : On-site only (No Work from Home) About Us Mindware Infotech, a dynamic startup venture backed by Mindware, a trusted name in barcode, RFID, and web solutions for over two decades, is building innovative software and cloud solutions. We are seeking passionate DevOps Developers, including freshers, with expertise in cloud hosting to join our talented, diverse team in Dwarka, New Delhi. Our mission is to deliver cutting-edge Point of Sales, Job Portals, Dating Apps, Warehouse Management, and RFID/IoT solutions. If you are driven by a passion for cloud infrastructure and thrive in a collaborative, fast-paced environment, we invite you to contribute to our vision. Job Summary We are looking for a motivated DevOps Developer with a strong interest in cloud hosting on DigitalOcean and AWS to design, implement, and manage robust cloud infrastructure. This role is critical to ensuring the scalability, security, and performance of our applications. The ideal candidate, including freshers, will have a solid understanding of DevOps practices, cloud architecture, automation, and debugging, with an eagerness to learn and contribute to managing hosting control panels and executing cloud deployments. Key Responsibilities Assist in designing, deploying, and managing scalable, secure, and highly available cloud infrastructure on DigitalOcean and AWS. Support automation of infrastructure provisioning, configuration, and deployment processes using tools like Terraform, Ansible, or similar. Contribute to building and maintaining CI/CD pipelines to streamline application deployment and updates. Participate in optimizing cloud environments for performance, cost-efficiency, and reliability, including load balancing, auto-scaling, and monitoring. 
Assist in migrations of applications and websites from legacy systems to modern cloud platforms (DigitalOcean/AWS). Monitor and maintain cloud infrastructure, ensuring uptime, security, and compliance with best practices. Debug and resolve infrastructure, deployment, and application issues with guidance from senior team members. Collaborate with development teams to integrate DevOps practices into the software development lifecycle. Manage hosting control panels and server configurations to support web applications and databases. Stay updated on emerging cloud technologies and contribute ideas for improving existing infrastructure. Qualifications Mandatory Prerequisite : Proven knowledge and hands-on experience with DigitalOcean or Amazon Web Services (AWS) for hosting, managing, and debugging applications and websites on the cloud. Resumes without this expertise will not be considered. Open to freshers with a Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent coursework/projects). Strong knowledge of cloud platforms (AWS, DigitalOcean), including services like EC2, S3, RDS, Lambda, Droplets, or Spaces. Familiarity with infrastructure-as-code tools (e.g., Terraform, CloudFormation) or configuration management tools (e.g., Ansible, Chef, Puppet) is a plus. Knowledge of scripting languages such as Python, Bash, or PowerShell for automation is desirable. Familiarity with containerization and orchestration tools like Docker or Kubernetes is an advantage. Familiarity with CI/CD tools such as Jenkins, GitLab CI, or GitHub Actions is a plus. Knowledge of database management (PostgreSQL, MySQL) and networking concepts (VPC, DNS, load balancing). Strong problem-solving skills, with the ability to debug cloud infrastructure and application issues under guidance. Good communication skills and a collaborative mindset to work in a team environment. Why Join Us? 
Innovative Environment : Work on groundbreaking projects in a startup backed by the stability of Mindware. Professional Growth : Access continuous learning opportunities and mentorship to kickstart or advance your career in cloud and DevOps. Collaborative Culture : Join a diverse team of professionals from across India, united by a shared passion for innovation and excellence. Walk-in Interview Details We are conducting walk-in interviews for the DevOps Developer position. Bring your resume and a passion for cloud innovation. Location : Mindware Infotech, Dwarka, New Delhi Interview Dates : Monday, August 11, 2025, to Wednesday, August 13, 2025 Interview Time : 10:00 AM - 1:00 PM How to Apply Attend our walk-in interviews with your updated resume highlighting your DigitalOcean or AWS expertise in hosting, managing, and debugging. For inquiries, contact: Email : gulshanmarwah@indianbarcode.com WhatsApp : Varsha at 8527522688 Join Mindware Infotech and help shape the future of cloud-hosted solutions! Job Types: Full-time, Permanent, Fresher Schedule: Day shift Work Location: In person
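The "monitor and maintain cloud infrastructure" and "debug and resolve deployment issues" responsibilities above usually start with health-check probing and retry logic. A hedged sketch, with the probe function injected so it stays network-free (a real check would hit a service URL with urllib or requests, and actually sleep between attempts rather than just tally the delay):

```python
# Probe a deployment with exponential backoff before declaring it down.
# Delays are tallied, not slept, to keep the sketch testable offline.

def wait_until_healthy(probe, attempts: int = 5, base_delay: float = 1.0):
    """Return (ok, total_delay_scheduled) after at most `attempts` probes."""
    delay_total = 0.0
    for attempt in range(attempts):
        if probe():
            return True, delay_total
        delay_total += base_delay * (2 ** attempt)  # 1, 2, 4, 8, ...
    return False, delay_total
```

Exponential backoff is the conventional choice here because it avoids hammering a service that is still coming up after a deploy.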

Posted 17 hours ago

Apply

10.0 years

3 - 3 Lacs

Noida

On-site

We’re Hiring: Project Manager | Noida | 10–15 years | Up to ₹35 LPA
Are you a tech visionary with strong leadership skills and deep full-stack expertise? Join our growing team that’s transforming industries with QR code and data analytics solutions focused on IoT, Blockchain, and Supply Chain Intelligence.
What You'll Do: Lead and mentor a high-performing dev team (10–15 developers). Oversee multiple full-stack projects with cutting-edge tech. Manage project delivery, client interactions, and deployment cycles. Ensure code quality, scalability, security, and performance. Bridge the gap between business needs and tech solutions.
Your Tech Toolkit: Frontend: React.js / Angular / Vue.js, HTML/CSS, Tailwind, Redux. Backend: Node.js / Python / Java / .NET / PHP, REST & GraphQL APIs. Databases: MySQL, PostgreSQL, MongoDB, Redis. DevOps: Docker, Kubernetes, Jenkins, GitHub Actions, AWS/Azure. Specialized: IoT & Blockchain API integration, regulatory compliance (DSCSA/GDPR).
What We’re Looking For: 10+ years of full-stack development experience. 6+ years in technical team leadership. Expertise in delivering scalable, secure enterprise applications. Strong problem-solving and communication skills.
Perks: Work on transformative supply chain projects. Growth & upskilling opportunities. Collaborative and tech-first culture.
If you're ready to lead the future of tech-driven supply chains — Apply now. SR HR UMA +91 9920571936
Job Types: Full-time, Permanent
Pay: ₹340,000.00 - ₹350,000.00 per year
Benefits: Paid sick time, Paid time off, Provident Fund
Work Location: In person
Speak with the employer: +91 9920571936

Posted 17 hours ago

Apply

5.0 years

19 - 39 Lacs

Noida

Remote

Sr Site Reliability Engineer (Location: Bengaluru, India) RACE Consulting is hiring on behalf of one of our esteemed clients! We're looking for a highly skilled SRE professional with deep expertise in modern DevOps tools like Terraform, GitLab, Grafana, and Helm. If you're a self-starter with a strong background in cloud infrastructure, monitoring (Dynatrace), CI/CD, and Python, this could be your next big move. Experience: 5+ years Location: Bengaluru / Remote (India) Key Skills: Terraform, GitLab, Dynatrace, Kubernetes, Python, Helm, Docker Job Type: Full-time Pay: ₹1,950,000.00 - ₹3,900,000.00 per year Benefits: Flexible schedule Health insurance Leave encashment Provident Fund Work Location: In person

Posted 17 hours ago

Apply

0 years

4 - 7 Lacs

Noida

On-site

Ready to build the future with AI? At Genpact, we don’t just keep up with technology—we set the pace. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what’s possible, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions – we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Inviting applications for the role of Lead Consultant – Python developer with GCP and Kubernetes The ideal candidate will have experience working with GCP and have proficiency in Python development. You will play a key role in designing and developing, ensuring quality and integrity in design & development. Responsibilities Demonstrate proficiency as a Python framework developer. Design and develop robust Python frameworks, leveraging experience with Google Cloud Platform (GCP) for cloud-based solutions Apply expertise in Kubernetes for container orchestration and management Exhibit excellent communication skills for effective collaboration Qualifications we seek in you! 
Minimum Qualifications Experience with Python development, including designing frameworks and using Pandas Proficiency with GCP services and tools for cloud-based applications Expertise in Kubernetes for efficient container management Hands-on experience in an agile development environment Strong problem-solving skills and ability to troubleshoot complex issues Excellent communication skills and ability to work effectively in a fast-paced, team-oriented environment Preferred Qualifications/ Skills Possess excellent analytical and problem-solving skills, with keen attention to detail Demonstrate effective communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams and stakeholders Prior experience in a consulting role or client-facing environment is highly desirable Advanced knowledge of Python, GCP, and Kubernetes to drive innovative solutions Why join Genpact? Lead AI-first transformation – Build and scale AI solutions that redefine industries Make an impact – Drive change for global enterprises and solve business challenges that matter Accelerate your career—Gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills Grow with the best – Learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace Committed to ethical AI – Work in an environment where governance, transparency, and security are at the core of everything we build Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together. 
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job Lead Consultant Primary Location India-Noida Schedule Full-time Education Level Bachelor's / Graduation / Equivalent Job Posting Aug 6, 2025, 6:31:43 PM Unposting Date Ongoing Master Skills List Consulting Job Category Full Time
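The Genpact role above centers on "designing and developing robust Python frameworks". Framework work in that sense often begins with a small composable pipeline: each processing step is a plain function and a runner chains them. A minimal sketch (the step names are invented for illustration; a production framework would add logging, typing, and validation):

```python
# Compose plain functions into a single data pipeline, left to right.
from functools import reduce

def pipeline(*steps):
    """Return one callable that threads data through every step in order."""
    return lambda data: reduce(lambda acc, step: step(acc), steps, data)

# Illustrative steps for a tiny string-cleaning pipeline.
strip_nulls = lambda rows: [r for r in rows if r is not None]
to_upper    = lambda rows: [r.upper() for r in rows]

clean = pipeline(strip_nulls, to_upper)
```

The same shape scales up naturally: swap the lambdas for Pandas transformations or GCP client calls and the runner stays unchanged, which is the point of designing the framework around composition.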

Posted 17 hours ago

Apply

5.0 years

2 - 8 Lacs

Noida

On-site

Posted On: 6 Aug 2025 Location: Noida, UP, India Company: Iris Software Why Join Us? Are you inspired to grow your career at one of India’s Top 25 Best Workplaces in the IT industry? Do you want to do the best work of your life at one of the fastest growing IT services companies? Do you aspire to thrive in an award-winning work culture that values your talent and career aspirations? It’s happening right here at Iris Software. About Iris Software At Iris Software, our vision is to be our client’s most trusted technology partner, and the first choice for the industry’s top professionals to realize their full potential. With over 4,300 associates across India, U.S.A, and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services. Our work covers complex, mission-critical applications with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation. Working at Iris Be valued, be inspired, be your best. At Iris Software, we invest in and create a culture where colleagues feel valued, can explore their potential, and have opportunities to grow. Our employee value proposition (EVP) is about “Being Your Best” – as a professional and person. It is about being challenged by work that inspires us, being empowered to excel and grow in your career, and being part of a culture where talent is valued. We’re a place where everyone can discover and be their best version. Job Description 5+ years of enterprise test engineering (both manual and automation) Proficiency in Java, Kotlin, or a similar programming language; a basic understanding of Python is preferable.
Good knowledge of DevOps tooling (Jenkins, Kubernetes, Docker) along with cloud platforms (AWS, Azure) Familiar with tools for API testing, e.g., Postman, Rest Assured, HttpClient Familiar with tools for UI testing, e.g., Selenium, Serenity, Cypress, and similar tools Familiar with project tracking software such as JIRA and testing platforms such as Xray Broad understanding of computer science and Quality engineering principles Good understanding of Linux, Unix based systems and shell scripting Proven track record of delivering test automation for highly complex software systems Experience planning for and executing end-to-end functional and non-functional tests Good communication skills Strong problem solving and analytical skills Comfortable and able to work under pressure Mandatory Competencies QA/QE - QA Automation - Core Java QA/QE - QA Automation - Framework creation for testing QA/QE - QA Automation - Python Beh - Communication QA/QE - QA Manual - API Testing Development Tools and Management - Development Tools and Management - Postman QA/QE - QA Automation - Rest Assured Perks and Benefits for Irisians At Iris Software, we offer world-class benefits designed to support the financial, health and well-being needs of our associates to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.
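Whatever the tool (Postman, Rest Assured, HttpClient), API testing reduces to the same pattern: call an endpoint, assert on the status code and selected body fields. A language-agnostic sketch of that pattern, shown here in Python with a fake client standing in for a live HTTP call (the endpoint and response shape are invented for illustration; Rest Assured itself is a Java library):

```python
# Assert status and body fields on a response dict, Postman/Rest Assured style.

def assert_response(resp: dict, status: int, **expected_fields):
    """Raise AssertionError unless status and selected body fields match."""
    assert resp["status"] == status, f"expected {status}, got {resp['status']}"
    for key, value in expected_fields.items():
        assert resp["body"].get(key) == value, f"mismatch on field {key!r}"

def fake_get_user(user_id: int) -> dict:
    # Stub for GET /users/{id}; a real test would use an HTTP client here.
    return {"status": 200, "body": {"id": user_id, "active": True}}
```

The kwargs-based field check keeps individual test cases to one readable line each, which is what framework-creation competencies like those listed above are assessing.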

Posted 17 hours ago

Apply

3.0 years

3 - 4 Lacs

Lucknow

On-site

Backend Developer – Location: Lucknow Salary: ₹30,000 – ₹40,000 per month Experience: 3–5 years Job Type: Full-Time Qualifications: - B.Tech / BCA / MCA in Computer Science, IT, or a relevant field - Proven experience in backend development with Python and/or Node.js Required Skills: - Strong proficiency in Python (Django, Flask, FastAPI) or Node.js - Database expertise: PostgreSQL, MongoDB, Redis - RESTful API design and integration - Version control systems (Git) - Experience with containerized environments (Docker, Kubernetes is a plus) - Knowledge of cloud platforms (AWS/GCP/Azure is an advantage) - Understanding of security and data protection - Writing scalable, reusable, testable, and efficient code Responsibilities: - Design, implement, and maintain server-side logic - Develop and maintain APIs and microservices - Collaborate with frontend developers and DevOps teams to integrate systems - Optimize applications for speed and scalability - Troubleshoot and debug applications - Implement data storage solutions and manage database performance Job Type: Full-time Pay: ₹30,000.00 - ₹40,000.00 per month Work Location: In person Speak with the employer +91 8143775047
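The "design, implement, and maintain server-side logic" and "develop and maintain APIs" responsibilities above boil down to a mapping from (method, path) to handler functions. A framework-free sketch of that shape (the route, payload, and in-memory store are all illustrative; Django/Flask/FastAPI provide the real versions of `route` and `dispatch`):

```python
# Minimal REST-style dispatch: routes map (method, path) to a handler.

store = {}    # in-memory stand-in for a database
routes = {}

def route(method: str, path: str):
    """Decorator that registers a handler for (method, path)."""
    def register(fn):
        routes[(method, path)] = fn
        return fn
    return register

@route("POST", "/items")
def create_item(payload):
    item_id = len(store) + 1
    store[item_id] = payload
    return {"id": item_id, **payload}

def dispatch(method: str, path: str, payload=None):
    """Look up and invoke the registered handler."""
    return routes[(method, path)](payload)
```

Seeing the decorator-plus-registry pattern laid bare like this makes it easier to reason about what Flask's `@app.route` or FastAPI's `@app.post` are doing under the hood.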

Posted 17 hours ago

Apply

2.0 - 3.0 years

4 - 6 Lacs

Noida

On-site

Join our Team About this opportunity: Join Ericsson as an Oracle Database Administrator and play a key role in managing and optimizing our critical database infrastructure. As an Oracle DBA, you will be responsible for installing, configuring, upgrading and maintaining Oracle databases, ensuring high availability, performance, and security. You’ll work closely with cross-functional teams to support business-critical applications, troubleshoot issues, and implement database upgrades and patches. This role offers a dynamic and collaborative environment where you can leverage your expertise to drive automation, improve efficiency, and contribute to innovative database solutions. What you will do: Oracle, PostgreSQL, MySQL, and/or MariaDB database administration in production environments. Experience with Container Databases (CDBs) and Pluggable Databases (PDBs) for better resource utilization and simplified management. High availability configuration using Oracle Data Guard, PostgreSQL, MySQL replication, and/or MariaDB Galera clusters. Oracle Enterprise Manager administration, including alarm integration. Familiarity with Linux tooling such as iotop, vmstat, nmap, OpenSSL, grep, ping, find, df, ssh, and dnf. Familiarity with Oracle SQL Developer, Oracle Data Modeler, pgAdmin, Toad, phpMyAdmin, and MySQL Workbench is a plus. Familiarity with NoSQL databases such as MongoDB is a plus. Knowledge of middleware such as Oracle GoldenGate, both Oracle-to-Oracle and Oracle-to-BigData. Conduct detailed performance analysis and fine-tuning of SQL queries and stored procedures. Analyze AWR and ADDM reports to identify and resolve performance bottlenecks. Implement and manage backup strategies using RMAN and other industry-standard tools. Perform pre-patch validation using opatch and datapatch. Test patches in a non-production environment to identify potential issues before applying to production.
Apply Oracle quarterly patches and security updates. The skills you bring: Bachelor of Engineering or equivalent experience with at least 2 to 3 years in the field of IT. Must have experience in handling operations in any customer service delivery organization. Thorough understanding of the basic framework of Telecom / IT processes. Willingness to work in a 24x7 operational environment with rotating shifts, including weekends and holidays, to support critical infrastructure and ensure minimal downtime. Strong understanding of Linux systems and networking fundamentals. Knowledge of cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes) is a plus. Oracle Certified Professional (OCP) is preferred. Why join Ericsson? At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world’s toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like. Encouraging a diverse and inclusive organization is core to our values at Ericsson, that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Primary country and city: India (IN) || Noida Req ID: 770689
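The RMAN backup strategies mentioned above are usually governed by a retention policy such as RECOVERY WINDOW OF n DAYS: any backup needed to restore to a point within the last n days must be kept. A simplified date-math sketch of that rule (this models the policy, it is not RMAN itself; the 7-day window is illustrative):

```python
# Which daily backups fall outside a RECOVERY WINDOW retention policy?
from datetime import date, timedelta

def obsolete_backups(backup_dates, today, window_days=7):
    """Backups no longer needed to recover to any point in the window."""
    cutoff = today - timedelta(days=window_days)
    older = sorted(d for d in backup_dates if d < cutoff)
    # The newest backup that predates the cutoff is still required: it is
    # the restore base for recovering to the very start of the window.
    return older[:-1] if older else []
```

This "keep one backup older than the cutoff" subtlety is why RMAN's REPORT OBSOLETE output often retains a backup that naively looks expired.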

Posted 17 hours ago

Apply

4.0 years

19 - 39 Lacs

Noida

On-site

Sr Software Engineer (3 Openings) RACE Consulting is hiring for one of our top clients in the cybersecurity and AI space. If you're passionate about cutting-edge technology and ready to work on next-gen AI-powered log management and security automation, we want to hear from you! Role Highlights: Work on advanced agentic workflows, threat detection, and behavioral analysis. Collaborate with a world-class team of security researchers and data scientists. Tech stack: Scala, Python, Java, Go, Docker, Kubernetes, IaC. Who We're Looking For: 4+ years of experience in backend development. Strong knowledge of microservices, containerization, and cloud-native architecture. Bonus if you’ve worked in cybersecurity or AI-driven analytics. Job Type: Full-time Pay: ₹1,950,000.00 - ₹3,900,000.00 per year Benefits: Flexible schedule Health insurance Leave encashment Provident Fund Work Location: In person

Posted 17 hours ago

Apply

25.0 years

55 Lacs

Ahmedabad

On-site

Join Our Team at Litera: Where Legal Technology Meets Excellence Litera has been at the forefront of legal technology innovation for over 25 years, crafting legal software to amplify impact and maximize efficiency. Developed by the best legal minds in the industry, our comprehensive suite of integrated legal tools is both powerful and user-friendly and simplifies the way modern firms manage core legal workflows, secure collaboration, and organize firm knowledge and experience. Every day, we help more than 2.3 million legal professionals focus on their craft. Litera: Less busy work, more of your life’s work. The Opportunity: We seek a highly skilled and experienced Sr. Software Architect who will demonstrate technical leadership and strong communication and presentation skills to join our dynamic and innovative technology team. As a member of the team, you will be responsible for contributing to architectural designs and implementing technology solutions at Litera. This is a unique opportunity to contribute to technical strategy and architectural decisions within a growing and ever-changing ecosystem. The role works closely with both the Engineering and Product teams and spans both products and the underlying technology platform. Our customers rely on us to deliver innovative, strategic and forward-looking solutions. This is your chance to be a key member of a global software company. Responsibilities Develop innovative architectural solutions in a wide variety of problem sets and domains. Create and share architectural designs, best practices and technology roadmaps with cross-functional teams. Collaborate with senior architects to implement business and product requirements into technical solutions and software architectures. Focus on non-functional requirements, including deployment needs, scalability, performance and reliability when developing architectures and best practices. 
Develop "proof of concept" solutions to help demonstrate and communicate technical designs and desired architectures. Develop architecture and guidelines to move on-prem systems to SaaS-based multi-tenant solutions on Azure Cloud. Support adoption of new technology within assigned teams and projects by delivering Proof of Concepts with supporting design diagrams, technical documentation, business impact analysis and ROI. Optimize cloud-based systems for high availability, fault tolerance, and disaster recovery. Mentor junior developers and contribute to team knowledge sharing to support common platform architecture goals. Qualifications 8-10 years of software development experience with excellent .NET/C#, React and TypeScript coding skills. 5-7 years of experience working as a Software Architect. 3-5 years of experience with Microsoft Azure and/or AWS cloud-native architectures. Outstanding communication and teamwork skills, with a proven ability to collaborate with, and influence others. BS degree in Computer Science, Computer Information Systems, or Engineering (or related experience). Solid understanding of distributed systems architecture and microservices. Experience with building highly scalable and resilient cloud services. Experience developing reference architectures and proof of concept prototypes. Experience implementing modern security solutions including token-based authentication, OAuth 2.0 workflows, SAML authentication and authorization techniques. Preferable experience: Azure OpenAI/GPT/LLMs, Azure Kubernetes Service, Azure Service Bus, Azure Storage, Azure SQL, Azure CosmosDB, Lucene/Elasticsearch, Azure DevOps, CI/CD. Preferable certifications: Microsoft or other industry certifications in architecture a plus. Clearance of standard background check prior to employment with candidate consent. Career Progression Timeline Within 1 month, you will: Learn the functional areas of the products and intended uses.
Establish relationships with key members of the leadership and architecture teams. Participate and contribute to assigned work activities and meetings (planning, daily standups, etc.). Acclimate to the environment and begin to gain insights into the technology and innovation opportunities that exist in the company. Contribute thoughts, ideas, relevant expertise in at least one key strategic initiative. Goal: expand to multiple initiatives over time. Review and learn key architecture team artifacts and ways of working – Product Inventory, Reference Architecture. Within 3 months, you will: Contribute reference architecture and proof of concept improvements regularly. Develop subject matter expertise in more than one product area. Contribute to translating product vision into technical requirements and designs. Support architectural planning initiatives. Active technical contribution – sharing ideas and architectural insights within your project teams. Within 6 months, you will: Collaborate with other development team members to build robust architectures and product integrations. Identify and propose opportunities for product and technology improvement. Be a key technical resource for initiatives you are involved in. Contribute technical expertise and support decision-making. Why Join Litera? 
The company culture: We emphasize helping each other grow, doing the right thing always, and being part of a journey to amplify impact, creating an exciting and fulfilling work environment. Commitment to Employees: Our people commitment is based on what employees love most about being part of the team, focusing on tools that matter to the difference-makers in the legal world and amplifying their impact. Global, Dynamic, and Diverse Team: Ours is a global company with ambitious goals and unlimited opportunities, offering a dynamic and diverse work environment where employees can grow, listen, empathize, and problem-solve together. Comprehensive Benefits Package: Experience peace of mind with our health insurance, retirement savings plans, generous paid time off, and a supportive work-life balance. We invest in your well-being and future, ensuring a rewarding career journey. Career Growth and Development: We provide career paths and opportunities for professional development, allowing employees to progress through various technical and leadership roles. Job Type: Full-time Pay: From ₹5,500,000.00 per year Benefits: Health insurance Paid sick time Paid time off Provident Fund Expected Start Date: 30/08/2025

Posted 17 hours ago

Apply

5.0 years

4 - 5 Lacs

Ahmedabad

On-site

Position - 01 Job Location - Ahmedabad Qualification - Any Graduate Years of Exp - 5+ years About us Bytes Technolab is a full-range web application Development Company, established in 2011, with an international presence in the USA, Australia, and India. Bytes has exhibited excellent craftsmanship in innovative web development, eCommerce solutions, and mobile application development services ever since its inception. Roles & responsibilities Design, implement, and maintain scalable and reliable cloud infrastructure solutions using GCP and AWS services. Deploy, configure, and manage Kubernetes clusters on GCP and AWS, ensuring seamless integration with RabbitMQ. Collaborate with software development teams to optimize application deployment, monitoring, and performance in a cloud environment. Implement and manage RabbitMQ messaging queues for efficient and reliable communication between services. Develop and maintain CI/CD pipelines for automated application deployment and release management, including integration with RabbitMQ. Implement infrastructure as code (IaC) using tools like Terraform or CloudFormation to ensure consistent and repeatable deployments. Monitor and troubleshoot system and application performance, including RabbitMQ queue monitoring and optimization. Conduct regular audits and ensure compliance with industry best practices and security standards, especially regarding messaging queue security. Collaborate with cross-functional teams to identify and resolve infrastructure and deployment-related issues, with a focus on messaging queues. Stay up-to-date with the latest trends and technologies in the DevOps, cloud computing, and messaging queue domains, and evaluate their applicability to the organization. Skills required Bachelor's degree in computer science, engineering, or a related field (or equivalent work experience). Proven experience as a DevOps Engineer or similar role, with a focus on Kubernetes, GCP, AWS, and RabbitMQ. 
Strong knowledge of Kubernetes, including cluster management, deployment, and troubleshooting. Hands-on experience with GCP services, such as Compute Engine, Kubernetes Engine, Cloud Storage, and Cloud Networking. Familiarity with AWS services, including EC2, ECS/EKS, S3, RDS, and CloudFormation. Proficiency in scripting languages such as Python, Bash, or PowerShell for automation and infrastructure management. Experience with configuration management tools like Ansible, Chef, or Puppet. Solid understanding of CI/CD principles and experience with CI/CD tools like Jenkins, GitLab CI/CD, or CircleCI. Knowledge of containerization technologies like Docker and container orchestration platforms like Kubernetes. Knowledge of GoLang is a plus. Strong understanding of RabbitMQ, including setup, configuration, clustering, and message reliability. Strong problem-solving skills and the ability to troubleshoot complex issues in a distributed, cloud-based environment. Excellent communication and collaboration skills to work effectively in cross-functional teams.
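The message-reliability requirement above centers on manual acknowledgements: a broker like RabbitMQ redelivers a message if the consumer dies before acking it. A minimal in-memory sketch of that at-least-once pattern (a toy stand-in, not the RabbitMQ client API):

```python
import collections

class ToyQueue:
    """In-memory stand-in for a broker queue with manual acks (at-least-once delivery)."""
    def __init__(self):
        self._ready = collections.deque()
        self._unacked = {}          # delivery_tag -> message awaiting acknowledgement
        self._next_tag = 0

    def publish(self, message):
        self._ready.append(message)

    def get(self):
        """Deliver one message; it stays 'unacked' until ack() or nack()."""
        if not self._ready:
            return None
        self._next_tag += 1
        msg = self._ready.popleft()
        self._unacked[self._next_tag] = msg
        return self._next_tag, msg

    def ack(self, tag):
        self._unacked.pop(tag)      # consumer finished; broker may now discard

    def nack(self, tag):
        self._ready.appendleft(self._unacked.pop(tag))  # requeue for redelivery

q = ToyQueue()
q.publish("resize-image-17")
tag, msg = q.get()
q.nack(tag)            # consumer crashed mid-task: message is requeued...
tag2, msg2 = q.get()   # ...and delivered again
q.ack(tag2)
print(msg2)            # prints "resize-image-17"
```

With the real client, the same idea maps to `basic_get`/`basic_ack`/`basic_nack`; the sketch just makes the redelivery contract concrete.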

Posted 17 hours ago

Apply

0 years

16 Lacs

Ahmedabad

On-site

Opening for Team Lead - Generative AI / AI-ML Specialist Role Overview: We’re seeking an experienced Data Scientist / Team Lead with deep expertise in Generative AI (GenAI) to design and implement cutting-edge AI models that solve real-world business problems. You’ll work with LLMs, GANs, RAG frameworks, and transformer-based architectures to create production-ready solutions across domains. Key Responsibilities: Design, develop, and fine-tune Generative AI models (LLMs, GANs, Diffusion models, etc.) Work on RAG (Retrieval-Augmented Generation) and transformer-based architectures for contextual responses and document intelligence Customize and fine-tune Large Language Models (LLMs) for domain-specific applications Build and maintain robust ML pipelines and infrastructure for training, evaluation, and deployment Collaborate with engineering teams to integrate models into end-user applications Stay current with the latest GenAI research, open-source tools, and frameworks Analyze model outputs, evaluate performance, and ensure ethical AI practices. 
Required Skills: Strong proficiency in Python and ML/DL libraries: TensorFlow, PyTorch, HuggingFace Transformers Deep understanding of LLMs, RAG, GANs, Autoencoders, and other GenAI architectures Experience with fine-tuning models using LoRA, PEFT, or similar techniques Familiarity with Vector Databases (e.g., FAISS, Pinecone) and embedding generation Experience working with datasets, data preprocessing, and synthetic data generation Good knowledge of NLP, prompt engineering, and language model safety Experience with APIs, model deployment, and cloud platforms (AWS/GCP/Azure) Nice to Have: Prior work with Chatbots, Conversational AI, or AI Assistants Familiarity with LangChain, LLMOps, or Serverless Model Deployment Background in MLOps, containerization (Docker/Kubernetes), and CI/CD pipelines Knowledge of OpenAI, Anthropic, Google Gemini, or Meta LLaMA models What We Offer: An opportunity to work on real-world GenAI products and POCs Collaborative environment with constant learning and innovation Competitive salary and growth opportunities 5-day work week with a focus on work-life balance Work from office Job Types: Full-time, Permanent Pay: Up to ₹1,600,000.00 per year Benefits: Health insurance Life insurance Paid sick time Paid time off Provident Fund Ability to commute/relocate: Ahmedabad, Gujarat: Reliably commute or planning to relocate before starting work (Required) Work Location: In person
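The RAG responsibilities listed above reduce to: embed documents, retrieve the most similar ones to a query, and inject them into the prompt. A toy, dependency-free sketch (bag-of-words vectors stand in for real embeddings; names are illustrative, not from any posting):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list, k: int = 1) -> list:
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "refund policy: refunds are processed within 14 days",
    "shipping policy: orders ship within 2 business days",
]
context = retrieve("how long do refunds take", docs)[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: how long do refunds take"
print(context)  # prints the refund-policy document
```

A production pipeline would swap in learned embeddings and a vector database such as FAISS or Pinecone; the retrieve-then-augment structure is the same.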

Posted 17 hours ago

Apply

6.0 years

8 - 9 Lacs

Surat

On-site

As a DevOps team leader for an IT company developing multiplayer games, mobile applications, web applications, and websites, your role and responsibilities would encompass a wide range of tasks to ensure the efficient development, deployment, and operation of these various software products. Responsibilities: Team Leadership: Lead and manage a team of DevOps engineers, ensuring clear communication, setting goals, and providing mentorship and guidance. DevOps Strategy: Develop and implement a comprehensive DevOps strategy tailored to the specific needs of the company's projects, considering factors such as technology stack, scalability requirements, and deployment environments. Continuous Integration/Continuous Deployment (CI/CD): Implement and maintain CI/CD pipelines for automating the build, testing, and deployment processes across multiple platforms (mobile, web, and desktop). Ensure smooth integration between development and operations teams, fostering collaboration and streamlining the release cycle. Infrastructure Management: Oversee the design, deployment, and management of cloud infrastructure (e.g., AWS, Azure, Google Cloud) to support the company's applications. Optimize infrastructure resources for performance, scalability, and cost-effectiveness, leveraging tools like Kubernetes, Docker, and Terraform. Monitoring and Incident Response: Implement monitoring and logging solutions to track the health and performance of applications and infrastructure components. Establish incident response protocols and lead the team in addressing and resolving production issues in a timely manner. Security and Compliance: Help implement security best practices throughout the development and deployment lifecycle, including code reviews, vulnerability scanning, and access control mechanisms. Provide solutions for security issues identified in vulnerability scanning reports. 
Performance Optimization: Collaborate with development teams to identify performance bottlenecks and implement optimizations to improve the responsiveness and scalability of applications. Documentation and Knowledge Sharing: Maintain comprehensive documentation for infrastructure configurations, deployment processes, and operational procedures. Facilitate knowledge sharing sessions and training programs to empower team members with relevant skills and expertise. Continuous Improvement: Regularly assess the effectiveness of DevOps processes and tools, soliciting feedback from team members and stakeholders. Identify areas for improvement and drive initiatives to enhance the efficiency, reliability, and security of the company's software delivery pipeline. Stakeholder Communication: Serve as a liaison between the DevOps team and other departments, providing updates on project statuses, addressing concerns, and soliciting input on infrastructure requirements. Dedication: The DevOps Team Leader role requires a high level of dedication and commitment to the company's technology vision, strategy, and goals. This means being available to work long hours when necessary. Key Performance Areas Deployment Efficiency: Measure the frequency and speed of deployments across various platforms. Aim to reduce deployment times and increase automation to streamline the release process. System Reliability: Monitor system uptime and availability to ensure a high level of reliability. Set targets for minimizing downtime and responding swiftly to incidents. Scalability and Performance: Evaluate the scalability of infrastructure and applications to handle increasing user loads. Security Compliance: Identify security issues and apply solutions to resolve them. Cost Optimization: Monitor and control infrastructure costs, aiming to maximize cost-effectiveness while meeting performance requirements. 
Implement strategies to optimize resource utilization and minimize unnecessary expenses. Team Productivity: Measure the efficiency of the DevOps team in delivering on projects and resolving issues. Continuous Improvement: Track the implementation of improvements in the development and deployment processes. Measure the impact of changes on system performance, reliability, and team productivity. Customer Satisfaction: Gather feedback from internal and external stakeholders regarding the usability, performance, and reliability of software products. Knowledge Sharing and Collaboration: Assess the effectiveness of knowledge-sharing initiatives within the DevOps team and with other departments. Encourage collaboration between development, operations, and other teams to improve overall productivity and efficiency. Adoption of Best Practices: Monitor adherence to industry best practices in areas such as CI/CD, infrastructure as code, and security. Key Performance Indicators (KPIs): Team Leadership: Number of team goals achieved within set timelines. Employee satisfaction and retention rates within the DevOps team. Frequency and quality of communication within the team. DevOps Strategy: Percentage reduction in deployment failures or rollbacks. Time to deploy new features or updates. Alignment of DevOps strategy with overall business objectives. Continuous Integration/Continuous Deployment (CI/CD): Percentage of automated tests in the CI/CD pipeline. Average duration of build, test, and deployment cycles. Infrastructure Management: Server uptime and availability. Cost savings achieved through optimized infrastructure usage. Scalability of infrastructure to handle increasing workloads. Monitoring and Incident Response: Percentage reduction in critical incidents over time. Mean Time to Detect (MTTD) and Mean Time to Resolve (MTTR) incidents. Effectiveness of incident response processes based on post-incident reviews. 
Security and Compliance: Number of security vulnerabilities identified and remediated. Performance Optimization: Response time and throughput improvements in applications. Scalability of applications under load. User satisfaction ratings related to application performance. Documentation and Knowledge Sharing: Participation rates in knowledge sharing sessions. Completion and maintenance of documentation for infrastructure and processes. Continuous Improvement: Number of process improvements implemented per quarter. Percentage increase in efficiency or reliability of the software delivery pipeline. Stakeholder Communication: Timeliness of responses to stakeholder inquiries or concerns. Feedback from other departments on the usefulness of communication and updates provided. Requirements and Skills At least 6 years of experience building sophisticated and highly automated infrastructure. Good understanding of cloud platforms like AWS, Azure, or Google Cloud Platform (GCP). Configuration management tools such as Ansible, Puppet, or Chef. Containerization and orchestration tools like Docker, Kubernetes, or OpenShift. Continuous Integration/Continuous Deployment (CI/CD) pipelines using tools like Jenkins, GitLab CI/CD, or CircleCI. Good understanding of monitoring systems (Nagios, etc.) and logging solutions (Elasticsearch, etc.). Knowledge of Linux, Windows, and macOS. Scripting languages like Python, Bash, or Ruby. Must have experience in LAMP and MEAN/MERN server configuration. Knowledge of networking, security, and database management. Experience with microservices architecture and serverless computing. Team management and mentorship abilities. Ability to lead cross-functional teams and foster collaboration between development, operations, and QA teams. Strong decision-making skills and the ability to prioritize tasks effectively. Conflict resolution and problem-solving abilities. Excellent communication skills, both written and verbal. 
Ability to effectively communicate technical concepts to non-technical stakeholders. Job Type: Full-time Pay: ₹70,000.00 - ₹80,000.00 per month Benefits: Health insurance Work Location: In person
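The KPI list above includes Mean Time to Detect (MTTD) and Mean Time to Resolve (MTTR). These are simple averages over incident timestamps; a small sketch with hypothetical incident data (here MTTR is taken as detection-to-resolution; some teams measure it from occurrence instead):

```python
from datetime import datetime

# (occurred, detected, resolved) timestamps for three hypothetical incidents
incidents = [
    ("2024-03-01 10:00", "2024-03-01 10:05", "2024-03-01 10:45"),
    ("2024-03-08 22:10", "2024-03-08 22:30", "2024-03-09 00:10"),
    ("2024-03-15 03:00", "2024-03-15 03:02", "2024-03-15 03:32"),
]

def parse(ts: str) -> datetime:
    return datetime.strptime(ts, "%Y-%m-%d %H:%M")

def mean_minutes(pairs) -> float:
    """Average gap, in minutes, between each (earlier, later) timestamp pair."""
    deltas = [(parse(b) - parse(a)).total_seconds() / 60 for a, b in pairs]
    return sum(deltas) / len(deltas)

mttd = mean_minutes((occ, det) for occ, det, _ in incidents)   # occurrence -> detection
mttr = mean_minutes((det, res) for _, det, res in incidents)   # detection  -> resolution
print(f"MTTD: {mttd:.1f} min, MTTR: {mttr:.1f} min")  # prints "MTTD: 9.0 min, MTTR: 56.7 min"
```

In practice these timestamps would come from the monitoring and incident-tracking systems the posting describes, not a hard-coded list.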

Posted 17 hours ago

Apply

3.0 - 6.0 years

3 - 5 Lacs

Ahmedabad

On-site

Your Role This hybrid role combines MLOps and data engineering to build robust, reproducible, and automated pipelines that support the end-to-end lifecycle of machine learning systems. What You’ll Be Doing Build ETL/ELT pipelines to ingest and transform data for ML training Automate model retraining, deployment, and monitoring processes Enable data and model version control Integrate model logging, exception handling, and alerting systems Maintain reproducibility, data lineage, and audit trails What We’d Love To See 3–6 years in data engineering or MLOps workflows Airflow, DBT, Apache Kafka/Spark, SQL, Python Docker, Kubernetes, MLflow, Prometheus/Grafana It’d Be Great If You Had GCP BigQuery, S3, Redshift, Snowflake What You Can Expect Opportunity to work with a diverse and well-experienced team. Be part of a team that creates phenomenal growth stories for the world's renowned brands. Professional Growth Roadmap. Real-time mentorship and guidance from the leaders. A workplace that invests in your career, cares for you and is fun & engaging. You can be yourself and do amazing work. Benefits Interested in joining our team of artists, geeks, strategizers, and writers? If you’re a passionate, talented individual, we want to hear from you. Competitive salary Flexible work-life balance with a 5-day working Policy Paid time off Learning & Development bonus Health coverage Rewards & Recognitions Event & Festivals celebrations Ongoing training programs Onsite opportunities Recognition opportunities for open-source contributions
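The data/model version control and lineage duties above are often implemented by content-addressing: hash the exact data and parameters that produced an artifact, and record those hashes alongside the model. A minimal stdlib sketch (all names hypothetical, shown only to make the reproducibility idea concrete):

```python
import hashlib
import json

def content_version(obj) -> str:
    """Version an artifact by hashing its canonical JSON form (key order does not matter)."""
    blob = json.dumps(obj, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]

# Hypothetical lineage record tying a model to the exact data and params that produced it
dataset = [{"x": 1.0, "y": 2.0}, {"x": 3.0, "y": 4.0}]
params = {"lr": 0.01, "epochs": 10}
lineage = {
    "data_version": content_version(dataset),
    "params_version": content_version(params),
}

# Identical inputs always reproduce identical versions; any change yields a new one
assert content_version(dataset) == lineage["data_version"]
assert content_version({"lr": 0.02, "epochs": 10}) != lineage["params_version"]
print(lineage["data_version"])
```

Tools such as MLflow and DVC apply the same principle at scale, storing these fingerprints with each run so that training is auditable and repeatable.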

Posted 17 hours ago

Apply

3.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Introduction: About Quranium In a world where rapid innovation demands uncompromising security, Quranium stands as the uncrackable foundation of the digital future. With its quantum-proof hybrid DLT infrastructure, Quranium is redefining what's possible, ensuring data safety and resilience against current and future threats, today. No other blockchain can promise this level of protection and continuous evolution. Quranium is more than a technology—it's a movement. Empowering developers and enterprises to build with confidence, it bridges the gaps between Web2 and Web3, making digital adoption seamless, accessible, and secure for all. As the digital superhighway for a better future, Quranium is setting the standard for progress in an ever-evolving landscape. Role Overview We are hiring a DevOps Engineer to architect and maintain the infrastructure supporting our blockchain nodes and Web3 applications. The ideal candidate has deep experience working with GCP, Azure, AWS, and modern hosting platforms like Vercel, and is capable of deploying, monitoring, and scaling blockchain-based systems with a security-first mindset. 
Key Responsibilities Blockchain Infrastructure Deploy, configure, and maintain core blockchain infrastructure such as full nodes, validator nodes, and indexers (e.g., Ethereum, Solana, Bitcoin) Monitor node uptime, sync health, disk usage, and networking performance Set up scalable RPC endpoints and archive nodes for dApps and internal use Automate blockchain client upgrades and manage multi-region redundancy Web3 Application DevOps Manage the deployment and hosting of Web3 frontends, smart contract APIs, and supporting services Create and maintain CI/CD pipelines for frontend apps, smart contracts, and backend services Integrate deployment workflows with Vercel, GCP Cloud Run, AWS Lambda, or Azure App Services Securely handle smart contract deployment keys and environment configurations Cloud Infrastructure Design and manage infrastructure across AWS, GCP, and Azure based on performance, cost, and scalability considerations Use infrastructure-as-code (e.g., Terraform, Pulumi, CDK) to manage provisioning and automation Implement cloud-native observability solutions: logging, tracing, metrics, and alerts Ensure high availability and disaster recovery for critical blockchain and app services Security, Automation, and Compliance Implement DevSecOps best practices across cloud, containers, and CI/CD Set up secrets management and credential rotation workflows Automate backup, restoration, and failover for all critical systems Ensure infrastructure meets required security and compliance standards Preferred Skills And Experience Experience running validators or RPC services for Proof-of-Stake networks (Ethereum 2.0, Solana, Avalanche, etc.) 
Familiarity with decentralized storage systems like IPFS, Filecoin, or Arweave Understanding of indexing protocols such as The Graph or custom off-chain data fetchers Hands-on experience with Docker, Kubernetes, Helm, or similar container orchestration tools Working knowledge of EVM-compatible toolkits like Foundry, Hardhat, or Truffle Experience with secrets management (Vault, AWS SSM, GCP Secret Manager) Previous exposure to Web3 infrastructure providers (e.g., Alchemy, Infura, QuickNode) Tools and Technologies Cloud Providers: AWS, GCP, Azure, Vercel DevOps Stack: Docker, Kubernetes, Terraform, GitHub Actions, CircleCI Monitoring: Prometheus, Grafana, CloudWatch, Datadog Blockchain Clients: Geth, Nethermind, Solana, Erigon, Bitcoin Core Web3 APIs: Alchemy, Infura, Chainlink, custom RPC providers Smart Contracts: Solidity, EVM, Hardhat, Foundry Requirements 3+ years in DevOps or Site Reliability Engineering Experience with deploying and maintaining Web3 infrastructure or smart contract systems Strong grasp of CI/CD pipelines, container management, and security practices Demonstrated ability to work with multi-cloud architectures and optimize for performance, cost, and reliability Strong communication and collaboration skills What You'll Get The opportunity to work at the intersection of blockchain infrastructure and modern cloud engineering A collaborative environment where your ideas impact architecture from day one Exposure to leading decentralized technologies and smart contract systems Flexible work setup and a focus on continuous learning and experimentation
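Monitoring node "sync health", as the role above requires, typically means polling the node's JSON-RPC endpoint. A sketch of the `eth_syncing` check for an EVM node, using canned responses in place of a live node (the response shapes follow the common Geth-style convention: `false` once synced, otherwise an object of hex block numbers):

```python
import json

def syncing_request(request_id: int = 1) -> bytes:
    """Build the JSON-RPC body an EVM node expects for the eth_syncing call."""
    return json.dumps({"jsonrpc": "2.0", "method": "eth_syncing",
                       "params": [], "id": request_id}).encode()

def blocks_behind(response_body: bytes) -> int:
    """0 when the node reports it is fully synced, else the remaining block gap."""
    result = json.loads(response_body)["result"]
    if result is False:  # eth_syncing returns false once sync is complete
        return 0
    return int(result["highestBlock"], 16) - int(result["currentBlock"], 16)

# Canned responses standing in for real node replies (no network involved)
synced = json.dumps({"jsonrpc": "2.0", "id": 1, "result": False}).encode()
behind = json.dumps({"jsonrpc": "2.0", "id": 1,
                     "result": {"currentBlock": "0x1000", "highestBlock": "0x1064"}}).encode()

print(blocks_behind(synced))  # prints 0
print(blocks_behind(behind))  # prints 100
```

In a real setup the request body would be POSTed to the node's RPC URL and the resulting gap exported as a Prometheus metric, feeding the alerting stack the posting lists.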

Posted 17 hours ago

Apply

0 years

0 Lacs

Andhra Pradesh

On-site

Job Description: He/She will play a crucial role in designing, developing, maintaining and testing high-performance applications. He/She will work closely with cross-functional teams to deliver scalable and robust solutions. His/Her expertise in Core Java, Spring Boot, and cloud technologies will be essential. Key Responsibilities: Design, develop, and maintain applications using Core Java and Spring Boot. Deploy and manage applications on Kubernetes. Implement messaging solutions using Kafka. Work with Oracle databases to design and optimize data storage solutions. Participate in the CI/CD process, utilizing Jenkins for automated deployments. Collaborate with the team using SCM tools like Bitbucket/Git. Document processes and solutions using JIRA and Confluence. Engage in new and challenging work, demonstrating a proactive and adaptive approach. Present technical solutions and project updates to stakeholders effectively. Leverage cloud technologies to enhance application performance and scalability. Apply Test Driven Development (TDD) methodologies using frameworks like Cucumber and BDD. Technical Experience: Proven experience in Core Java and Spring Boot. Hands-on experience with Kubernetes and Kafka. Strong knowledge of Oracle databases. Familiarity with CI/CD processes and tools such as Jenkins. Proficient in using SCM tools like Bitbucket/Git. Experience with JIRA and Confluence for documentation and project management. Strong presentation skills and the ability to communicate technical concepts to stakeholders. Knowledge of cloud technologies and their application in software development. Experience with Test Driven Development frameworks like Cucumber and BDD. A proactive attitude and willingness to take on new challenges. About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. 
When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
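The Kafka responsibility in the posting above hinges on consumer offsets: commit only after a message is processed, so a crash causes replay rather than loss. A language-agnostic sketch in Python (a toy in-memory model, not the Kafka client API):

```python
class OffsetTracker:
    """Toy consumer-offset store: commit after processing for at-least-once semantics."""
    def __init__(self):
        self.committed = {}                 # (topic, partition) -> next offset to read

    def next_offset(self, topic, partition):
        return self.committed.get((topic, partition), 0)

    def commit(self, topic, partition, offset):
        self.committed[(topic, partition)] = offset + 1

log = ["order-1", "order-2", "order-3"]     # messages in one partition of a topic
tracker = OffsetTracker()
processed = []

start = tracker.next_offset("orders", 0)
for offset in range(start, len(log)):
    processed.append(log[offset])           # process the message...
    tracker.commit("orders", 0, offset)     # ...then commit, so a crash replays it

print(tracker.next_offset("orders", 0))     # prints 3
```

Committing before processing would flip this to at-most-once delivery; choosing between the two is exactly the kind of design decision the role's event-driven work involves.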

Posted 17 hours ago

Apply

8.0 years

0 Lacs

Andhra Pradesh

On-site

Test automation JD: 8+ years of hands-on experience in Thought Machine Vault, Kubernetes, Terraform, GCP/AWS, PostgreSQL, CI/CD, REST APIs, Docker, and microservices. Architect and manage enterprise-level databases with 24/7 availability Lead efforts on optimization, backup, and disaster recovery planning Ensure compliance, implement monitoring and automation Guide developers on schema design and query optimization Conduct DB health audits and capacity planning. Collaborate with cross-functional teams to define, design, and ship new features. Work on the entire software development lifecycle, from concept and design to testing and deployment. Implement and maintain AWS cloud-based solutions, ensuring high performance, security, and scalability. Integrate microservices with Kafka for real-time data streaming and event-driven architecture. Troubleshoot and resolve issues in a timely manner, ensuring optimal system performance. Keep up-to-date with industry trends and advancements, incorporating best practices into our development processes. Bachelor's or Master's degree in Computer Science or related field. Solid understanding of AWS services, including but not limited to EC2, Lambda, S3, and RDS. Experience with Kafka for building event-driven architectures. Strong database skills, including SQL and NoSQL databases. Familiarity with containerization and orchestration tools (Docker, Kubernetes). Excellent problem-solving and troubleshooting skills. Good to have: TM Vault core banking knowledge. Strong communication and collaboration skills.

Posted 17 hours ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies