15.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Description

Job Title: Senior DevOps Engineer
Experience: 15+ years overall software industry experience; 5+ years hands-on DevOps (Docker, Kubernetes, ELK)

Job Summary: We are seeking a highly experienced Senior DevOps Engineer to join our dynamic team. The ideal candidate has a broad and deep background in the software industry, with a proven track record in designing, implementing, and managing modern DevOps solutions, especially using Docker, Kubernetes, and the ELK stack. You’ll be responsible for architecting, automating, and optimizing our applications and infrastructure to drive continuous integration, continuous delivery, and high reliability.

Key Responsibilities:
- Design, implement, and manage scalable, secure, and highly available container orchestration platforms using Docker and Kubernetes.
- Develop and manage CI/CD pipelines, version control systems, and automation frameworks.
- Deploy, configure, and maintain monitoring/logging solutions leveraging the ELK stack (Elasticsearch, Logstash, Kibana).
- Collaborate closely with development, QA, and operations teams to establish best practices for infrastructure as code, configuration management, and release engineering.
- Drive efforts toward system reliability, scalability, and performance optimization.
- Troubleshoot and resolve issues in development, test, and production environments.
- Mentor and guide junior team members; contribute to DevOps process improvements and strategy.
- Ensure security, compliance, and governance are adhered to in all DevOps operations.
- Participate in on-call rotations for production support when necessary.

Required Skills and Qualifications:
- 15+ years overall experience in the software industry, with strong exposure to large-scale enterprise environments.
- At least 5 years of hands-on experience with:
  - Docker: containerization, image management, best practices
  - Kubernetes: architecture, deployment, scaling, upgrades, troubleshooting
  - ELK stack: design, deployment, maintenance, performance tuning, dashboard creation
- Extensive experience with CI/CD tools (Jenkins, GitLab CI/CD, etc.)
- Proficiency with one or more scripting/programming languages (Python, Bash, Go, etc.)
- Strong background in infrastructure automation and configuration management (Ansible, Terraform, etc.)
- Solid understanding of networking, load balancing, firewalls, and security best practices.
- Experience with public cloud platforms (OCI, AWS, Azure, or GCP) is a plus.
- Strong analytical, problem-solving, and communication skills.

Preferred Qualifications:
- Relevant certifications (CKA, CKAD, AWS DevOps Engineer, etc.)
- Experience with microservices architecture and service mesh solutions.
- Exposure to application performance monitoring (APM) tools

Qualifications
Career Level - IC3

About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity.
We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
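The ELK-stack responsibilities above pair Logstash-style parsing with Kibana-style aggregation. That flow can be sketched in miniature with pure Python; the log format, paths, and status codes here are hypothetical, and a real pipeline would use a grok filter and a terms aggregation instead:

```python
import re
from collections import Counter

# Hypothetical access-log format; a Logstash grok filter would express the
# same structure as %{IP:client} %{WORD:method} %{URIPATH:path} %{NUMBER:status}
LOG_PATTERN = re.compile(
    r"(?P<client>\d+\.\d+\.\d+\.\d+) (?P<method>[A-Z]+) (?P<path>\S+) (?P<status>\d{3})"
)

def parse_line(line):
    """Parse one raw line into a structured event, like a Logstash filter stage."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

def error_counts_by_path(lines):
    """Count 5xx errors per path, like a Kibana terms aggregation."""
    counts = Counter()
    for line in lines:
        event = parse_line(line)
        if event and event["status"].startswith("5"):
            counts[event["path"]] += 1
    return counts

logs = [
    "10.0.0.1 GET /api/orders 200",
    "10.0.0.2 GET /api/orders 503",
    "10.0.0.3 POST /api/pay 500",
    "10.0.0.2 GET /api/orders 502",
]
print(error_counts_by_path(logs).most_common(1))  # → [('/api/orders', 2)]
```

The same two stages (parse into fields, then aggregate on a field) are what Logstash and Elasticsearch do at scale.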
Posted 1 day ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description

We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within Consumer and Community Banking - Data Technology, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.

Job Responsibilities
- Executes software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
- Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems
- Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development
- Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems
- Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture
- Contributes to software engineering communities of practice and events that explore new and emerging technologies
- Adds to team culture of diversity, opportunity, inclusion, and respect

Required Qualifications, Capabilities, and Skills
- Formal training or certification on software engineering concepts and 3+ years applied experience
- Hands-on practical experience in system design, application development, testing, and operational stability
- Hands-on practical experience in AWS cloud with services like Lambda, ECS, EKS, S3, DynamoDB, PostgreSQL, etc.
- Good understanding of search capabilities like Elasticsearch, OpenSearch, or GraphQL
- Strong subject matter expertise in Python development; should be able to build frameworks leveraging design patterns
- Solid understanding of agile methodologies such as CI/CD, Application Resiliency, and Security, along with AWS and Terraform
- Demonstrated knowledge of software applications and technical processes within a technical discipline
- Good understanding of Data Catalog or Metadata Management concepts

Preferred Qualifications, Capabilities, and Skills
- Familiarity with modern front-end technologies
- Exposure to cloud technologies
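The search-capability requirement above ultimately rests on the inverted index that engines like Elasticsearch and OpenSearch build under the hood: each term maps to the documents containing it. A toy pure-Python sketch (document texts invented for illustration):

```python
from collections import defaultdict

class InvertedIndex:
    """Toy illustration of the inverted index behind Elasticsearch/OpenSearch:
    each term maps to the set of document ids that contain it."""
    def __init__(self):
        self.postings = defaultdict(set)
        self.docs = {}

    def index(self, doc_id, text):
        self.docs[doc_id] = text
        for term in text.lower().split():
            self.postings[term].add(doc_id)

    def search(self, query):
        """AND-query: intersect the posting sets of every query term."""
        terms = query.lower().split()
        if not terms:
            return set()
        result = set(self.postings.get(terms[0], set()))
        for term in terms[1:]:
            result &= self.postings.get(term, set())
        return result

idx = InvertedIndex()
idx.index(1, "customer data pipeline on AWS")
idx.index(2, "search pipeline with Elasticsearch")
idx.index(3, "AWS Lambda search handler")
print(idx.search("aws search"))  # → {3}
```

Real engines add analyzers, relevance scoring, and sharding on top, but the intersection of posting lists is the core lookup.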
Posted 1 day ago
4.0 - 9.0 years
15 - 30 Lacs
Bangalore/Bengaluru
Work from Office
Total Experience: 4+ years
Mode of Hire: Permanent

Required Skills (Mandatory): Object-oriented programming, data structures, algorithms, software design, and database systems.
Desired Skills (Good if you have): Spring Boot, Elasticsearch, MongoDB, MySQL, Redis, crawling/web scraping, programming languages (Java, Python, C, C++)

Job Responsibilities
- Collaborate with managers and other engineers to help define, scope, and implement high-quality features that solve critical user needs.
- Break down requirements into architecture and deliver code, while keeping operational issues in mind.
- Own end-to-end responsibility, right from requirement to release.
- Write clear documentation so that other engineers can jump in and get things done.
- Actively participate in design and code reviews.
- Help take Tracxn to the next level as a world-class engineering team.

Job Requirements
- Experience with building backend services.
- Strong algorithm and CS skills.
- 4+ years of experience designing and implementing large-scale distributed systems.
- Experience with multiple programming languages (Java, Python) and data stores (MongoDB, MySQL, Redis, etc.)
- Proven ability to work in a fast-paced, agile, ownership-driven, and results-oriented culture.
- Strong problem-solving and analytical skills.

Culture
- Work with performance-oriented teams driven by ownership and passion.
- Learn to design systems for high accuracy, efficiency, and scalability.
- No strict deadlines; the focus is on delivering quality work.
- Meritocracy-driven, candid culture. No politics.
- Very high visibility regarding which startups and markets are exciting globally.

About Tracxn
Tracxn (Tracxn.com) is a Bangalore-based product company providing a research and deal sourcing platform for Venture Capital, Private Equity, Corp Dev, and professionals working around the startup ecosystem. We are a team of 800+ working professionals serving customers across the globe. Our clients include funds like Andreessen Horowitz, Matrix Partners, and GGV Capital, and large corporations such as Citi, Embraer & Ferrero.

Founders
- Neha Singh (ex-Sequoia, BCG | MBA - Stanford GSB)
- Abhishek Goyal (ex-Accel Partners, Amazon | BTech - IIT Kanpur)

About the Technology Team
Tracxn's technology team is 50+ members strong and growing. The technology team is subdivided into multiple smaller teams, each of which owns one or more services/components of the technology platform. Ours is a young team of motivated engineers with a minimal management structure where almost everyone is actively involved in technical development and design activities. We have a team-centric culture where the ownership and responsibility of a feature or module lie with a team rather than an individual. We work on an array of technologies, including but not limited to Spring, Elastic Stack, Kafka, MongoDB, MySQL, Redis, ReactJS, Next.js, Node, AWS Lambda, and Ansible. We value ownership, continuous learning, consistency, and discipline as a team.
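The crawling/web-scraping skill listed above centres on one repeated step: pulling links out of a fetched page and resolving them against the page's URL to grow the crawl frontier. A minimal sketch using only the Python standard library (the HTML snippet and URLs are invented):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags, resolved against a base URL —
    the frontier-expansion step at the heart of a crawler."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # urljoin resolves relative paths and passes absolute URLs through
                    self.links.append(urljoin(self.base_url, value))

html = '<p>See <a href="/funds">funds</a> and <a href="https://tracxn.com/about">about</a>.</p>'
extractor = LinkExtractor("https://tracxn.com")
extractor.feed(html)
print(extractor.links)  # → ['https://tracxn.com/funds', 'https://tracxn.com/about']
```

A production crawler adds fetching, politeness (robots.txt, rate limits), and deduplication around this core.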
Posted 1 day ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Hello eager tech expert! To create a better future, you need to think outside the box. That’s why we at Siemens need innovators who aren’t afraid to push boundaries to join our diverse team of tech gurus. Got what it takes? Then help us create lasting, positive impact!

You’ll break new ground by:
- Preparing firewall rulesets based upon input from application owners.
- Defining needed AD groups and assignment of users.
- Defining processes with application management to maintain user access groups.
- Defining and analyzing firewall rulesets to support logical WAN/LAN location separation in the carve-out projects.
- Analyzing and implementing user-based firewalling using Check Point Identity Awareness.
- Working with firewall management tools for documentation and optimization of the rule base.
- Coordinating firewall changes, policies, and routing to support migrations.
- Troubleshooting firewall connectivity.
- Reporting on firewall performance data and log analysis.
- Supporting troubleshooting of the Identity Awareness infrastructure.
- Operating the firewall policy management infrastructure.

You’re excited to build on your existing expertise, including:
- High-level knowledge of Active Directory.
- TCP/IP networking (ports and protocols).
- Fast understanding of applications and the necessary communications.
- Check Point R81.20.
- Azure-based firewalls.
- Check Point Identity Awareness.
- Tufin for policy management.
- Linux Server RHEL 8 operations.
- Elasticsearch/Splunk.
- Zscaler Private Access.

Create a better #TomorrowWithUs! We value your unique identity and perspective and are fully committed to providing equitable opportunities and building a workplace that reflects the diversity of society. Come bring your authentic self and create a better tomorrow with us. Protecting the environment, conserving our natural resources, fostering the health and performance of our people as well as safeguarding their working conditions are core to our social and business commitment at Siemens. This role is based in Pune. You’ll also get to visit other locations in India and beyond, so you’ll need to go where this journey takes you. In return, you’ll get the chance to work with an international team on global topics.
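First-match-wins evaluation is the core semantics behind the firewall ruleset work described above: an ordered policy is scanned top to bottom and the first matching rule decides the verdict, with a cleanup rule as implicit deny. A pure-Python sketch (the rules and addresses are hypothetical and this is not Check Point syntax):

```python
import ipaddress

# Hypothetical ordered ruleset; real Check Point policies are likewise
# evaluated first-match-wins, ending in a cleanup (deny-all) rule.
RULES = [
    {"src": "10.10.0.0/16", "port": 443,  "action": "accept"},  # app tier → HTTPS
    {"src": "10.10.0.0/16", "port": 22,   "action": "drop"},    # no SSH from app tier
    {"src": "0.0.0.0/0",    "port": None, "action": "drop"},    # cleanup rule
]

def evaluate(src_ip, port):
    """Return the action of the first rule matching source IP and port."""
    ip = ipaddress.ip_address(src_ip)
    for rule in RULES:
        if ip in ipaddress.ip_network(rule["src"]) and rule["port"] in (None, port):
            return rule["action"]
    return "drop"  # implicit deny if nothing matched

print(evaluate("10.10.3.7", 443))  # → accept
print(evaluate("10.10.3.7", 22))   # → drop
```

Because order decides the outcome, rule-base optimization tools (such as the Tufin work mentioned above) focus on shadowed and unused rules.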
Posted 1 day ago
3.0 - 5.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
BhaiFi Networks Private Limited is a leading provider of AI-powered cybersecurity solutions, designed to safeguard businesses, especially small and medium-sized enterprises (SMEs), against evolving digital threats. Founded in 2017, our mission is to democratize cybersecurity by making enterprise-grade protection accessible and easy to use for businesses of all sizes, even those with lean or non-technical teams.

We offer two flagship products:
- BhaiFi – AI-Powered Guest WiFi: https://bhaifi.ai
- FirewallX – An AI-First Network Security and Management Platform: https://firewallx.ai

With features like advanced firewall protection, intrusion detection, secure VPN, real-time threat intelligence, and more, we help our customers secure their networks with ease. We’re a lean, high-impact team redefining how modern cybersecurity is built.

Requirements

Job Description: The Data Engineer will be responsible for designing, implementing, and maintaining the data infrastructure that powers our network security machine learning project. This role involves creating efficient data pipelines, ensuring data quality, and collaborating closely with the Data Scientist Lead and ML/LLM Developer to support model deployment and operation.

Key Responsibilities:
- Design and implement scalable data pipelines for ingesting and processing network security data
- Perform data preprocessing and feature engineering to prepare data for machine learning models
- Set up and manage data storage solutions, including Elasticsearch
- Handle model deployment and implement DevOps practices
- Develop comprehensive testing strategies for data pipelines and deployed models
- Ensure data quality, integrity, and availability for machine learning models
- Collaborate with the team to optimize data flow and model performance

Required Skills:
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field
- 3-5 years of experience in data engineering
- Strong programming skills in Python
- Expertise in big data technologies (Hadoop, Spark, Hive)
- Proficiency in SQL and experience with various database systems (PostgreSQL, MySQL, MongoDB)
- Experience with data pipeline tools (Apache Airflow)
- Familiarity with Elasticsearch for efficient data storage and retrieval
- Experience with stream processing frameworks (Apache Kafka, Apache Flink)
- Proficiency in version control systems (Git)
- Understanding of data modelling and ETL processes
- Experience with real-time data processing and analytics
- Knowledge of machine learning deployment processes
- Familiarity with network protocols and security concepts
- Experience with containerization and orchestration (Docker, Kubernetes)
- Experience with CI/CD tools (Jenkins, GitLab CI)
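The stream-processing requirement above (Kafka, Flink) typically boils down to windowed aggregation over an event stream. Here is a pure-Python stand-in for a tumbling-window count of network flow events; the event tuples and field choices are invented for illustration:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Count events per (window, source_ip): a toy stand-in for a Flink
    tumbling-window aggregation over a Kafka stream of flow records."""
    counts = defaultdict(int)
    for ts, src_ip in events:
        # Align each timestamp to the start of its window
        window_start = ts - (ts % window_seconds)
        counts[(window_start, src_ip)] += 1
    return dict(counts)

# Hypothetical (timestamp_seconds, source_ip) flow events
events = [(5, "10.0.0.1"), (30, "10.0.0.1"), (61, "10.0.0.1"), (70, "10.0.0.2")]
print(tumbling_window_counts(events))
# → {(0, '10.0.0.1'): 2, (60, '10.0.0.1'): 1, (60, '10.0.0.2'): 1}
```

Per-window, per-source counts like these are a common feature-engineering input for network anomaly models.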
Posted 1 day ago
1.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Responsibilities:
· Build, train, and deploy ML models using Python on Azure/AWS
· 1+ years of experience in building machine learning and deep learning models in Python
· Experience working on Azure ML / AWS SageMaker
· Ability to deploy ML models with REST-based APIs
· Proficient in distributed computing environments / big data platforms (Hadoop, Elasticsearch, etc.) as well as common database systems and value stores (SQL, Hive, HBase, etc.)
· Ability to work directly with customers, with good communication skills
· Ability to analyze datasets using SQL and Pandas
· Experience working on Azure Data Factory and Power BI
· Experience with PySpark, Airflow, etc.
· Experience working on Docker/Kubernetes

Mandatory skill sets: Data Science, Machine Learning
Preferred skill sets: Data Science, Machine Learning
Years of experience required: 4 - 8
Education qualification: B.Tech / M.Tech / MBA / MCA

Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Master of Engineering, Master of Business Administration, Bachelor of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)

Required Skills: Data Science
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}

Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
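The "build, train, and deploy ML models" responsibility can be illustrated at its smallest scale with a hand-rolled gradient-descent fit. This is a sketch only, not the Azure ML/SageMaker workflow itself; the toy data and learning rate are invented:

```python
def train_linear(xs, ys, lr=0.05, epochs=2000):
    """Fit y = w*x + b by batch gradient descent, a minimal stand-in for
    the 'train' step that would normally run on a managed ML platform."""
    w, b, n = 0.0, 0.0, len(xs)
    for _ in range(epochs):
        # Gradients of mean squared error with respect to w and b
        grad_w = sum((w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum((w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy data following y = 2x + 1; real features would come from the data lake
xs, ys = [1.0, 2.0, 3.0, 4.0], [3.0, 5.0, 7.0, 9.0]
w, b = train_linear(xs, ys)
print(round(w, 2), round(b, 2))  # ≈ 2.0 and 1.0
```

Deployment then wraps the fitted parameters behind a REST endpoint that applies `w * x + b` to incoming requests.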
Posted 1 day ago
6.0 years
60 - 65 Lacs
Gurugram, Haryana, India
Remote
Experience: 6.00+ years
Salary: INR 6000000-6500000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (payroll and compliance to be managed by Crop.Photo)

(*Note: This is a requirement for one of Uplers' clients - Crop.Photo)

What do you need for this opportunity?
Must-have skills: MLOps, Python, Scalability, VectorDBs (Pinecone/Weaviate/FAISS/ChromaDB), Elasticsearch, OpenSearch

Crop.Photo is looking for: Technical Lead for Evolphin AI-Driven MAM

At Evolphin, we build powerful media asset management solutions used by some of the world’s largest broadcasters, creative agencies, and global brands. Our flagship platform, Zoom, helps teams manage high-volume media workflows—from ingest to archive—with precision, performance, and AI-powered search. We’re now entering a major modernization phase, and we’re looking for an exceptional Technical Lead to own and drive the next-generation database layer powering Evolphin Zoom. This is a rare opportunity to take a critical backend system that serves high-throughput media operations and evolve it to meet the scale, speed, and intelligence today’s content teams demand.
What you’ll own:
- Leading the re-architecture of Zoom’s database foundation with a focus on scalability, query performance, and vector-based search support
- Replacing or refactoring our current in-house object store and metadata database with a modern, high-performance elastic solution
- Collaborating closely with our core platform engineers and AI/search teams to ensure seamless integration and zero disruption to existing media workflows
- Designing an extensible system that supports object-style relationships across millions of assets, including LLM-generated digital asset summaries, time-coded video metadata, AI-generated tags, and semantic vectors
- Driving end-to-end implementation: schema design, migration tooling, performance benchmarking, and production rollout, all with aggressive timelines

Skills & Experience We Expect
We’re looking for candidates with 7-10 years of hands-on engineering experience, including 3+ years in a technical leadership role. Your experience should span the following core areas:

System Design & Architecture (3-4 yrs)
- Strong hands-on experience with the Java/JVM stack (GC tuning) and Python in production environments
- Led system-level design for scalable, modular AWS microservices architectures
- Designed high-throughput, low-latency media pipelines capable of scaling to billions of media records
- Familiar with multitenant SaaS patterns, service decomposition, and elastic scale-out/in models
- Deep understanding of infrastructure observability, failure handling, and graceful degradation

Database & Metadata Layer Design (3-5 yrs)
- Experience redesigning or implementing object-style metadata stores used in MAM/DAM systems
- Strong grasp of schema-less models for asset relationships, time-coded metadata, and versioned updates
- Practical experience with DynamoDB, Aurora, PostgreSQL, or similar high-scale databases
- Comfortable evaluating trade-offs between memory, query latency, and write throughput

Semantic Search & Vectors (1-3 yrs)
- Implemented vector search using systems like Weaviate, Pinecone, Qdrant, or Faiss
- Able to design hybrid (structured + semantic) search pipelines for similarity and natural-language use cases
- Experience tuning vector indexes for performance, memory footprint, and recall
- Familiar with the basics of embedding generation pipelines and how they are used for semantic search and similarity-based retrieval
- Worked with MLOps teams to deploy ML inference services (e.g., FastAPI/Docker + GPU-based EC2 or SageMaker endpoints)
- Understands the limitations of recognition models (e.g., OCR, face/object detection, logo recognition), even if not directly building them

Media Asset Workflow (2-4 yrs)
- Deep familiarity with broadcast and OTT formats: MXF, IMF, DNxHD, ProRes, H.264, HEVC
- Understanding of proxy workflows in video post-production
- Experience with the digital asset lifecycle: ingest, AI metadata enrichment, media transformation, S3 cloud archiving
- Hands-on experience with time-coded metadata (e.g., subtitles, AI tags, shot changes) management in media archives

Cloud-Native Architecture (AWS) (3-5 yrs)
- Strong hands-on experience with ECS, Fargate, Lambda, S3, DynamoDB, Aurora, SQS, EventBridge
- Experience building serverless or service-based compute models for elastic scaling
- Familiarity with managing multi-region deployments, failover, and IAM configuration
- Built cloud-native CI/CD deployment pipelines with event-driven microservices and queue-based workflows

Frontend Collaboration & React App Integration (2-3 yrs)
- Worked closely with React-based frontend teams, especially on desktop-style web applications
- Familiar with component-based design systems, REST/GraphQL API integration, and optimizing media-heavy UI workflows
- Able to guide frontend teams on data modeling, caching, and efficient rendering of large asset libraries
- Experience with Electron for desktop apps

How to apply for this opportunity?
Step 1: Click on Apply, and register or log in on our portal.
Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
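In the exact (non-approximate) case, the vector-search skills this role asks for (FAISS, Pinecone, Weaviate) reduce to brute-force similarity scoring, which is what a flat index does before ANN structures like HNSW or IVF are layered on for scale. A pure-Python sketch with invented asset embeddings (real ones would come from an embedding model):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(index, query, k=1):
    """Exact nearest-neighbour search: score every stored vector against
    the query and return the k best asset ids."""
    scored = sorted(index.items(), key=lambda kv: cosine(kv[1], query), reverse=True)
    return [asset_id for asset_id, _ in scored[:k]]

# Hypothetical 3-dim embeddings keyed by media asset id
index = {
    "clip_0001": [0.9, 0.1, 0.0],
    "clip_0002": [0.1, 0.9, 0.1],
    "clip_0003": [0.8, 0.2, 0.1],
}
print(search(index, [1.0, 0.0, 0.0], k=2))  # → ['clip_0001', 'clip_0003']
```

A hybrid pipeline would intersect these semantic hits with structured filters (format, date range, tags) from the metadata store.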
Posted 1 day ago
2.5 years
0 Lacs
Hyderabad
On-site
Senior Solutions Manager | Hyderabad | 2.5-5 Years | INDIA
Job Family: Engineering

Job Description (Posting)
Moogsoft JD: Who We Are Looking For and Requirements
- Previous experience providing technical support and consulting for Moogsoft or a similar tool.
- Customer service skills: you are personable and communicative via voice and text with technical and non-technical people, and you enjoy solving customer issues.
- You can analyse complex customer issues, providing guidance and step-by-step documentation to resolve them, and escalate issues if needed.
- Attend meetings with customers to review the status of open or escalated issues, and provide status as needed.
- You have solid experience with RHEL/CentOS and are not afraid of the command line or basic system administration.
- Familiar with MySQL and able to construct basic SQL queries.
- Knowledge of some of the following: Nginx, Apache Tomcat, Elasticsearch, RabbitMQ, REST, JSON, TCP/IP networking, Java/JVM.
- Self-starter who is motivated by success and can manage time effectively.
- Will be required to work outside normal business hours and on weekends as required.

Requirements
- 5+ years' experience in tools/infrastructure
- Solid scripting skills (e.g., JavaScript (must), shell script, Perl)
- Previous work on event management tools like TrueSight or Netcool is preferable
- RabbitMQ, Apache Tomcat, MySQL, and Elasticsearch knowledge is preferable
- Strong working knowledge of elementary tools and ITSM tools along with CMDB
- Good understanding of alert notification tools such as xMatters and PagerDuty, and dashboards like Grafana
- Continuous Integration/Delivery/Deployment
- Sound knowledge and experience in creating HLDs and LLDs
- Experience with security best practices in server configuration, tool development, and access controls
- Development/testing background would be an added advantage

Qualification: B-Tech
No. of Positions: 1
Skill (Primary): Cloud Services-Platform Engineering-EMS Platform
Auto req ID: 1591010BR
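Event managers like Moogsoft and Netcool begin by deduplicating repeated alerts into single records with a count before any correlation happens. A simplified pure-Python sketch of that step; the alert fields, keys, and time window are hypothetical, not the product's actual data model:

```python
def deduplicate(alerts, window=300):
    """Collapse repeated alerts with the same (host, check) key into one
    record with a count, provided they fall within `window` seconds of the
    previous occurrence: the basic dedup step of an event manager."""
    merged = {}
    for alert in alerts:
        key = (alert["host"], alert["check"])
        if key in merged and alert["ts"] - merged[key]["last_ts"] <= window:
            merged[key]["count"] += 1
            merged[key]["last_ts"] = alert["ts"]
        else:
            merged[key] = {**alert, "count": 1, "last_ts": alert["ts"]}
    return list(merged.values())

alerts = [
    {"host": "db01", "check": "disk", "ts": 0},
    {"host": "db01", "check": "disk", "ts": 60},   # repeat within window
    {"host": "web01", "check": "cpu", "ts": 90},
]
result = deduplicate(alerts)
print(len(result), result[0]["count"])  # → 2 2
```

Correlation then groups the deduplicated alerts into situations (by topology, time, or text similarity) so operators see incidents rather than raw event noise.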
Posted 1 day ago
5.0 years
3 - 4 Lacs
Hyderabad
On-site
Arcadis is the world's leading company delivering sustainable design, engineering, and consultancy solutions for natural and built assets. We are more than 36,000 people, in over 70 countries, dedicated to improving quality of life. Everyone has an important role to play. With the power of many curious minds, together we can solve the world’s most complex challenges and deliver more impact together.

Role description:
Arcadis Development teams within our Intelligence division deliver complex solutions and push the limits of technology solutions. Our talented groups of systems professionals do more than just write code and debug – they make a significant impact on the design and development of state-of-the-art projects. We are looking for a DevOps Engineer to join our growing and dynamic product team.

Role accountabilities:
- Proficiency in various operating systems: Windows, Linux, Hyper-V, VMware, and Unix.
- Secure infrastructure management, including firewall security and compliance.
- Network devices and appliances installation and maintenance.
- Collaborate on security requirements and perform vulnerability testing.
- Develop disaster recovery plans and oversee network backups.
- Build and maintain on-premises and cloud infrastructures, ensuring system availability and performance.
- Troubleshoot system issues with Agile methodologies expertise.
- Implement CI/CD pipelines, automate configuration management with Ansible, and enforce DevOps practices.
- Automate system alerts and enhance development/release processes.
- Lead automation efforts, maintain security standards, and manage infrastructure code with Puppet.

Qualifications & Experience:
- Experience: 5+ years of hands-on DevOps experience in Linux-based systems.
- Technologies: Familiarity with network technologies (Cisco, Juniper, HPE), cloud platforms (AWS, Azure, GCP, VMware, OpenStack), and CI/CD tools (Jenkins, Ansible, GitHub).
- Containerization & Infrastructure as Code: Strong knowledge of Docker; experience with Ansible, docker-compose, and implementing CI/CD pipelines.
- AWS Expertise: Proficiency in AWS services such as EC2, EBS, VPC, ELB, SES, Elastic IP, and Route53.
- Monitoring & Security: Experience with monitoring tools (Prometheus, Grafana, Sentry), routine security scanning, SSL certificate setup, and network performance troubleshooting.
- Databases & Logging: Expertise in high-availability databases (Postgres, Elasticsearch), log analysis in distributed applications, and managing on-premises infrastructure (e.g., Xen Server).
- Agile Processes: Experience in Change Management, Release Management, and DNS Management within Agile methodology.
- Orchestration & Scripting: Proficiency in orchestration technologies (Docker Swarm/Kubernetes) and scripting (Bash, Python).
- Additional Skills: Knowledge of web applications/protocols, BI tools (e.g., Tableau), and load testing with JMeter.

Why Arcadis?
We can only achieve our goals when everyone is empowered to be their best. We believe everyone's contribution matters. It’s why we are pioneering a skills-based approach, where you can harness your unique experience and expertise to carve your career path and maximize the impact we can make together. You’ll do meaningful work, and no matter what role, you’ll be helping to deliver sustainable solutions for a more prosperous planet. Make your mark, on your career, your colleagues, your clients, your life and the world around you. Together, we can create a lasting legacy. Join Arcadis. Create a Legacy.

Our Commitment to Equality, Diversity, Inclusion & Belonging
We want you to be able to bring your best self to work every day, which is why equality and inclusion are at the forefront of all our activities. Our ambition is to be an employer of choice and provide a great place to work for all our people.
We are an equal opportunity employer; women, minorities, and people with disabilities are strongly encouraged to apply. We are dedicated to a policy of non-discrimination in employment on any basis including race, caste, creed, colour, religion, sex, age, disability, marital status, sexual orientation, and gender identity.
Posted 1 day ago
8.0 years
0 Lacs
mohali district, india
On-site
Job Title: Senior DevOps Engineer – Travel Domain
Location: Mohali
Employment Type: Full-time
Experience: 8+ years (minimum 2 years in travel domain preferred)

Key Responsibilities:
● Architect, deploy, and manage highly available AWS infrastructure using services like EC2, ECS/EKS, Lambda, S3, RDS, DynamoDB, VPC, CloudFront, CloudFormation/Terraform, and more.
● Manage and integrate Active Directory and AWS Directory Service for centralized user authentication and role-based access controls.
● Design and implement network security architectures, including firewall rules, security groups, AWS Network Firewall, and WAF.
● Build and maintain centralized logging, monitoring, and alerting solutions using the ELK Stack (Elasticsearch, Logstash, Kibana), Amazon OpenSearch, CloudWatch, or similar tools.
● Implement and maintain CI/CD pipelines (Jenkins, GitLab CI, AWS CodePipeline) for automated deployments and infrastructure updates.
● Conduct security assessments and vulnerability management, and implement controls aligned with PCI-DSS, CIS benchmarks, and AWS security best practices.
● Configure encryption (at rest and in transit); manage IAM policies, roles, MFA, KMS, and secrets management.
● Perform firewall rule management, configuration, and auditing in cloud and hybrid environments.
● Automate routine infrastructure tasks using scripts (Python, Bash, Go) and configuration management tools (Ansible, Chef, or Puppet).
● Design and maintain disaster recovery plans, backups, and cross-region failover strategies.

Good to have:
● AWS Certifications: DevOps Engineer, Solutions Architect, Security Specialty.
● Experience participating in PCI-DSS audits or implementing PCI controls.
● Familiarity with zero-trust security architectures and microsegmentation.
● Hands-on with cloud cost management tools (AWS Cost Explorer, CloudHealth, etc.).
● Experience in travel domain projects: OBT, GDS integrations, transportation APIs.
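The firewall-rule auditing responsibility above can be sketched as a small check over security-group data in the shape returned by EC2's `describe_security_groups` (the sample group below is made up):

```python
def open_ingress_findings(security_groups):
    """Flag ingress rules open to the whole internet (0.0.0.0/0)."""
    findings = []
    for sg in security_groups:
        for perm in sg.get("IpPermissions", []):
            for ip_range in perm.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    findings.append((sg["GroupId"], perm.get("FromPort"), perm.get("ToPort")))
    return findings

# In practice the input would come from boto3:
#   groups = boto3.client("ec2").describe_security_groups()["SecurityGroups"]
sample = [{
    "GroupId": "sg-123",
    "IpPermissions": [
        {"FromPort": 22, "ToPort": 22, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"FromPort": 443, "ToPort": 443, "IpRanges": [{"CidrIp": "10.0.0.0/8"}]},
    ],
}]
print(open_ingress_findings(sample))  # [('sg-123', 22, 22)]
```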
Posted 1 day ago
0 years
0 Lacs
india
On-site
About Application
At Application, we're building a powerful platform to connect exceptional talent with top-tier employers across industries. Our mission is to revolutionize recruitment through a modern tech stack, seamless user experience, intelligent search, and secure cloud infrastructure.

Role Overview
We are looking for a passionate Full Stack Developer with hands-on experience in building dynamic and scalable web applications using React.js, Node.js, GraphQL, and MySQL. You'll work closely with our UI/UX designers, backend engineers, and product managers to deliver high-quality features and ensure performance, reliability, and security of the platform.

Key Responsibilities
* Develop, test, and deploy scalable full-stack features for our Job Portal platform.
* Build clean, responsive UI components using React.js, Tailwind CSS, and Framer Motion.
* Design efficient GraphQL APIs with Node.js, Express.js, and Apollo Server.
* Write optimized MySQL queries and manage relational data effectively.
* Integrate with external APIs (e.g. Razorpay, SendGrid, Google Maps, reCAPTCHA).
* Implement JWT-based authentication and role-based access control.
* Deploy on AWS (EC2, S3, RDS); manage servers with Nginx and PM2.
* Use Postman, Insomnia, Jest, and Supertest for testing and debugging.
* Maintain documentation, API references, and code comments for developer collaboration.
* Collaborate in Agile sprints using Jira, Trello, or similar tools.

Tech Stack You'll Use
* Frontend: HTML, CSS, JavaScript, React.js, Tailwind CSS, Framer Motion
* Backend: Node.js, Express.js, GraphQL (Apollo Server)
* Database: MySQL, Sequelize ORM, Redis, Elasticsearch
* Security: JWT, Bcrypt, HTTPS, reCAPTCHA v3
* Testing: Postman, Jest, Supertest

Requirements
* Bachelor's degree in Computer Science, IT, or related field (or equivalent work experience)
* 0 months of experience in Full Stack Development (React.js + Node.js)
* Solid grasp of GraphQL, API development, and query design
* Experience with MySQL, joins, indexing, and query optimization
* Working knowledge of Redis and Elasticsearch is a plus
* Familiarity with AWS services and cloud deployment
* Git version control and collaborative coding workflow experience
* Strong debugging and testing skills using tools like Postman

Job Types: Full-time, Fresher, Internship
Contract length: 6 months
Pay: ₹3,000.00 - ₹7,000.00 per month
Benefits: Health insurance
Work Location: In person
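The JWT-based authentication requirement above can be illustrated with a dependency-free sketch of the HS256-style signing scheme. This is a conceptual sketch only; a real service would use a maintained library such as jsonwebtoken (Node) or PyJWT rather than hand-rolling tokens:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(payload: dict, secret: bytes) -> str:
    """Build a minimal JWT-style token: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_token(token: str, secret: bytes) -> bool:
    """Recompute the HMAC over header.payload and compare in constant time."""
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)
```

A production setup would additionally validate registered claims such as `exp` and `sub` before trusting the payload.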
Posted 1 day ago
8.0 years
20 - 50 Lacs
india
Remote
Required Qualifications
8+ years designing, building, and operating distributed backend systems with Java & Spring Boot
Proven experience leading or mentoring engineers; direct people-management a plus
Expert knowledge of AWS services and cloud-native design patterns
Hands-on mastery of Elasticsearch, PostgreSQL/MySQL, and Redis for high-volume, low-latency workloads
Demonstrated success scaling systems to millions of users or billions of events
Strong grasp of DevOps practices: containerization (Docker), CI/CD (GitHub Actions), observability stacks
Excellent communication and stakeholder-management skills in a remote-first environment

Skills: aws, java, spring boot, elasticsearch, postgresql
Posted 1 day ago
1.0 years
0 Lacs
bengaluru, karnataka, india
On-site
Job Description- Software Engineer (Backend) About PhonePe PhonePe is India’s leading digital payments platform with 500 Million+ registered users. Using PhonePe, users can send and receive money, recharge mobile, DTH, data cards, pay at stores, make utility payments, buy gold, and make investments. PhonePe went live for customers in August 2016 and was the first non-banking UPI app and offered money transfer to individuals and merchants, recharges and bill payments to begin with. In 2017, PhonePe forayed into financial services with the launch of digital gold, providing users with a safe and convenient option to buy 24-karat gold securely on its platform. PhonePe has since launched Mutual Funds and Insurance products like tax-saving funds, liquid funds, international travel insurance, Corona Care, a dedicated insurance product for the COVID-19 pandemic among others. PhonePe launched its Switch platform in 2018, and today its customers can place orders on over 300 apps including Ola, Myntra, IRCTC, Goibibo, RedBus, Oyo etc. directly from within the PhonePe mobile app. PhonePe is accepted at over 18 million merchant outlets across 500 cities nationally. Culture At PhonePe, we take extra care to make sure you give your best at work, Everyday! And creating the right environment for you is just one of the things we do. We empower people and trust them to do the right thing. Here, you own your work from start to finish, right from day one. Being enthusiastic about tech is a big part of being at PhonePe. If you like building technology that impacts millions, ideating with some of the best minds in the country and executing on your dreams with purpose and speed, join us! Challenges Building for Scale, Rapid Iterative Development, and Customer-centric Product Thinking at each step defines every day for a developer at PhonePe. Though we engineer for a 50 million+ strong user base, we code with every individual user in mind. 
While we are quick to adopt the latest in engineering, we care utmost for security, stability, and automation. Apply if you want to experience the best combination of passionate application development and product-driven thinking.

Role & Responsibilities
● Build robust and scalable web-based applications. You will need to think of platforms & reuse.
● Build abstractions and contracts with separation of concerns for a larger scope.
● Drive problem-solving skills for high-level business and technical problems.
● Do high-level design with guidance; functional modeling, break-down of a module.
● Do incremental changes to architecture: impact analysis of the same.
● Do performance tuning and improvements in large-scale distributed systems.
● Mentor young minds and foster team spirit; break down execution into phases to bring predictability to overall execution.
● Work closely with the Product Manager to derive capability views from features/solutions; lead execution of medium-sized projects.
● Work with broader stakeholders to track the impact of projects/features and proactively iterate to improve them.

Requirements
● Strong experience in the art of writing code and solving problems on a large scale (FinTech experience preferred).
● B.Tech, M.Tech, or Ph.D. in Computer Science or related technical discipline (or equivalent).
● Excellent coding skills – should be able to convert the design into code fluently. Experience in at least one general programming language (e.g. Java, C, C++) & tech stack to write maintainable, scalable, unit-tested code.
● Experience with multi-threading, concurrency programming, object-oriented design skills, knowledge of design patterns, a huge passion and ability to design intuitive modules and class-level interfaces, and knowledge of test-driven development.
● Good understanding of databases (e.g. MySQL) and NoSQL stores (e.g. HBase, Elasticsearch, Aerospike, etc.).
● Experience in full life cycle development in any programming language on a Linux platform, building highly scalable business applications that involve implementing large, complex business flows and dealing with huge amounts of data.
● Strong desire for solving complex and interesting real-world problems.
● Go-getter attitude that reflects in energy and intent behind assigned tasks.
● An open communicator who shares thoughts and opinions frequently, listens intently, and takes constructive feedback.
● Ability to drive the design and architecture of multiple subsystems.
● Ability to break down larger/fuzzier problems into smaller ones in the scope of the product.
● Understanding of the industry's coding standards and an ability to create appropriate technical documentation.
● 1 to 3 years of experience as a software engineer.
Posted 1 day ago
7.0 years
0 Lacs
hyderabad, telangana, india
On-site
Job Title: API Tester
Experience: 5–7 Years
Location: Hyderabad
Skills: CI/CD, Kubernetes, API Testing, Automation Testing, Kafka-based workflows

Key Responsibilities:
Design, develop, and execute automated functional and performance tests for RESTful APIs.
Ensure seamless integration testing with Kafka messaging systems.
Validate end-to-end Kafka-based workflows, including message production and consumption across distributed services.
Work in a containerized OpenShift environment, supporting microservices-based architectures.
Maintain and enhance automated test scripts, test cases, and test data.
Use tools such as Postman, JMeter, and custom automation frameworks to validate APIs.
Collaborate with DevOps teams to integrate automated testing into CI/CD pipelines.
Monitor and analyze logs using the ELK stack (Elasticsearch, Logstash, Kibana) for debugging and validation.
Report and track defects, and participate in root cause analysis and test plan reviews.
Contribute to improving overall QA strategy and test coverage.
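Validating an end-to-end Kafka workflow often reduces to checking that every produced message was consumed exactly once. A minimal sketch of that check (the message shape and `event_id` key are assumptions for illustration, not from the actual system):

```python
def validate_roundtrip(produced, consumed, key="event_id"):
    """Check every produced message was consumed exactly once, in any order."""
    sent = {m[key] for m in produced}
    seen = [m[key] for m in consumed]
    missing = sent - set(seen)
    duplicates = {k for k in seen if seen.count(k) > 1}
    unexpected = set(seen) - sent
    return {
        "missing": missing,
        "duplicates": duplicates,
        "unexpected": unexpected,
        "ok": not (missing or duplicates or unexpected),
    }
```

In a real test, `produced` would be captured at the producer side and `consumed` read back with a test consumer (e.g., kafka-python or confluent-kafka) before running this comparison.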
Posted 1 day ago
1.0 years
0 Lacs
hyderabad, telangana, india
On-site
As a Fullstack SDE1 at NxtWave, you
Get first-hand experience of building applications and see them released quickly to the NxtWave learners (within weeks)
Get to take ownership of the features you build and work closely with the product team
Work in a great culture that continuously empowers you to grow in your career
Enjoy freedom to experiment & learn from mistakes (Fail Fast, Learn Faster)
NxtWave is one of the fastest growing edtech startups. Get first-hand experience in scaling the features you build as the company grows rapidly
Build in a world-class developer environment by applying clean coding principles, code architecture, etc.

Responsibilities
Design, implement, and ship user-centric features spanning frontend, backend, and database systems under guidance.
Define and implement RESTful/GraphQL APIs and efficient, scalable database schemas.
Build reusable, maintainable frontend components using modern state management practices.
Develop backend services in Node.js or Python, adhering to clean-architecture principles.
Write and maintain unit, integration, and end-to-end tests to ensure code quality and reliability.
Containerize applications and configure CI/CD pipelines for automated builds and deployments.
Enforce secure coding practices, accessibility standards (WCAG), and SEO fundamentals.
Collaborate effectively with Product, Design, and engineering teams to understand and implement feature requirements.
Own feature delivery from planning through production, and mentor interns or junior developers.

Qualifications & Skills
1+ years of experience building full-stack web applications.
Proficiency in JavaScript (ES6+), TypeScript, HTML5, and CSS3 (Flexbox/Grid).
Advanced experience with React (Hooks, Context, Router) or equivalent modern UI framework.
Hands-on with state management patterns (Redux, MobX, or custom solutions).
Strong backend skills in Node.js (Express/Fastify) or Python (Django/Flask/FastAPI).
Expertise in designing REST and/or GraphQL APIs and integrating with backend services.
Solid knowledge of MySQL/PostgreSQL and familiarity with NoSQL stores (Elasticsearch, Redis).
Experience using build tools (Webpack, Vite), package managers (npm/Yarn), and Git workflows.
Skilled in writing and maintaining tests with Jest, React Testing Library, Pytest, and Cypress.
Familiar with Docker, CI/CD tools (GitHub Actions, Jenkins), and basic cloud deployments.
Product-first thinker with strong problem-solving, debugging, and communication skills.

Qualities we'd love to find in you:
The attitude to always strive for the best outcomes and an enthusiasm to deliver high-quality software
Strong collaboration abilities and a flexible & friendly approach to working with teams
Strong determination with a constant eye on solutions
Creative ideas with a problem-solving mind-set
Openness to receiving objective criticism and improving upon it
Eagerness to learn and zeal to grow
Strong communication skills are a huge plus

Work Location: Hyderabad
Posted 1 day ago
5.0 years
0 Lacs
india
Remote
About Us
At SentinelOne, we're redefining cybersecurity by pushing the limits of what's possible—leveraging AI-powered, data-driven innovation to stay ahead of tomorrow's threats. From building industry-leading products to cultivating an exceptional company culture, our core values guide everything we do. We're looking for passionate individuals who thrive in collaborative environments and are eager to drive impact. If you're excited about solving complex challenges in bold, innovative ways, we'd love to connect with you.

What are we looking for?
We are looking for a seasoned Senior Software Engineering Leader with a strong background in building cloud-based products using Java technology. In this role, you will collaborate closely with internal teams to implement new features, own the end-to-end development of server-side components, and ensure high-performance and scalable deployments. You will work closely with QA teams to deliver high-quality products and interface with customer-facing teams to implement feature requirements effectively. This role demands a high level of ownership, a strong learning quotient, and meticulous attention to detail.

What will you do?
Build next-generation cloud-based products using Java technology.
Actively collaborate with internal teams to implement new and exciting product features.
Own the end-to-end development of server-side components, from design to deployment.
Identify performance and scalability requirements and optimize for large deployments.
Work closely with the Quality Assurance team to ensure high-quality deliverables to customers.
Interface with customer-facing teams to understand and implement feature requirements.

What experience or knowledge should you bring?
5 to 10 years of experience in core Java development.
Total 15+ years of industry experience.
Proficiency in Java programming with a deep understanding of the Java language.
Experience deploying Java-based applications in Jetty or any popular web application server.
Strong knowledge of algorithm design and data structures.

Advantages:
Experience in microservices architecture using Spring is desirable.
Familiarity with database technologies like Elasticsearch is desirable.
Experience in developing and deploying cloud-native applications in AWS is desirable.
Strong networking fundamentals and sound TCP/IP knowledge is a plus.

Why us?
You will be joining a cutting-edge company, where you will tackle extraordinary challenges and work with the very best in the industry, along with competitive compensation.
Flexible working hours and hybrid/remote work model.
Flexible Time Off.
Flexible Paid Sick Days.
Global gender-neutral Parental Leave (16 weeks, beyond the leave provided by local laws)
Generous employee stock plan in the form of RSUs (restricted stock units)
On top of RSUs, you can benefit from our attractive ESPP (employee stock purchase plan)
Gym membership/sports gear by Cultfit.
Wellness Coach app, with 3,000+ on-demand sessions, daily interactive classes, audiobooks, and unlimited private coaching.
Private medical insurance plan for you and your family.
Life insurance covered by S1 (for employees)
Telemedical app consultation (Practo)
Global Employee Assistance Program (confidential counseling related to both personal and work-life matters)
High-end MacBook or Windows laptop.
Home-office setup allowance (one time) and maintenance allowance.
Internet allowance.
Provident Fund and Gratuity (as per govt clause)
NPS contribution (employee contribution)
Half-yearly bonus program depending on individual and company performance.
Above-standard referral bonus as per policy.
Udemy Business platform for hard/soft skills training & support for your further educational activities/trainings.
Sodexo food coupons.

SentinelOne is proud to be an Equal Employment Opportunity and Affirmative Action employer.
We do not discriminate based upon race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. SentinelOne participates in the E-Verify Program for all U.S. based roles.
Posted 1 day ago
2.0 years
0 Lacs
noida, uttar pradesh, india
On-site
Position Summary:
Pentair is currently seeking a Managed Services CloudOps engineer for IoT projects in the Smart Products & IoT Strategic Innovation Centre in India team. This role is responsible for supporting managed services & application/product operations for IoT projects.

Duties & Responsibilities:
Apply best practices and strategies regarding production application and infrastructure maintenance (provisioning/alerting/monitoring, etc.).
Knowledge & purpose of the various environments: QA, UAT/Staging, Prod.
Understanding of Git and AWS CodeCommit; understanding of repositories.
Follow the hotfix & sequential configuration process.
Understanding & use of CI/CD pipelines.
AWS CLI use & implementation.
Ensure proactive monitoring of applications & AWS infrastructure: real-time monitoring of AWS services, CloudWatch alert configurations, and alert configuration in third-party tools like New Relic, Datadog, Splunk, etc.
Awareness of pre- & post-deployment changesets in AWS infrastructure.
Manage cloud environments in accordance with company security guidelines.
Config register management.
Daily data monitoring of deployed services.
Apply best security practices for deployed infrastructure.
Suggest regular optimization of infrastructure by upscaling & downscaling.
Troubleshoot incidents, identify root causes, fix and document problems, and implement preventive measures.
Lambda logs configuration and API logs configuration; good understanding of CloudWatch Logs Insights.
Educate teams on the implementation of new cloud-based initiatives, providing associated training as required.
Employ exceptional problem-solving skills, with the ability to see and solve issues before they affect business productivity.
Experience in CloudOps processes.
Participate in all aspects of the software development life cycle for AWS solutions, including planning, requirements, development, testing, and quality assurance.
Billing management/analysis across various AWS accounts, with alert configurations per the defined thresholds.
Understanding of the AWS Billing console; able to analyze daily/monthly costs of on-demand services.
Python & Bash scripting is a must, to automate regular tasks such as data fetches from S3/DynamoDB and job deployments.

Qualifications and Experience:
Mandatory
Bachelor's degree in Electrical Engineering, Software Engineering, Computer Science, Computer Engineering, or a related engineering discipline.
2+ years of experience in cloud operations & monitoring of AWS serverless services.
1+ years of experience in the Smart/Connected Products & IoT workflow.
Hands-on experience troubleshooting mobile or web app issues.
AWS platform experience, or AWS certification (SysOps or DevOps).
Serverless/headless architecture: Lambda, API Gateway, Kinesis, Elasticsearch, ElastiCache, DynamoDB, Athena, AWS IoT, CodeCommit, CloudTrail, CodeBuild.
CloudFormation template understanding for configuration changes.
NoSQL database (DynamoDB preferred).
Trouble-ticketing tools (Jira Software & Jira Service Desk preferred).
Good hands-on experience in scripting: Python, Bash, Node, Git Bash, CodeCommit.
Experience with impact analysis for infrastructure configuration changes.

Preferred
Hands-on experience with New Relic/Kibana/Splunk and AWS CloudWatch tools.
Prior experience in operations support for IoT projects (50,000+ live devices) will be an added advantage.
Experience with the AWS IoT Core platform.
L2 support experience in addition to CloudOps.

Skills and Abilities Required:
Willingness to work in a 24x7 shift environment.
Flexibility to take short-term travel on short notice to facilitate field trials & soft launches of products.
Excellent troubleshooting & analytical skills.
Highly customer-focused and always eager to find ways to enhance the customer experience.
Able to pinpoint business needs and deliver innovative solutions.
Can-do positive attitude, always looking to accelerate development.
Self-driven & committed to high standards of performance; demonstrates personal ownership for getting the job done.
Innovative and entrepreneurial attitude; stays up to speed on all the latest technologies and industry trends; healthy curiosity to evaluate, understand and utilize new technologies. Excellent verbal & written communication skills
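The Lambda log monitoring described in this role is commonly automated with small scripts. A sketch that scans Lambda `REPORT` lines for slow invocations (the threshold and sample request IDs are illustrative only):

```python
import re

# Lambda emits a REPORT line per invocation, tab-separated, e.g.:
# "REPORT RequestId: abc-1\tDuration: 2345.67 ms\tBilled Duration: 2400 ms\t..."
REPORT = re.compile(r"REPORT RequestId: (?P<rid>\S+)\s+Duration: (?P<ms>[\d.]+) ms")

def slow_invocations(log_lines, threshold_ms=1000.0):
    """Return (request_id, duration_ms) for invocations slower than threshold."""
    hits = []
    for line in log_lines:
        m = REPORT.search(line)
        if m and float(m.group("ms")) > threshold_ms:
            hits.append((m.group("rid"), float(m.group("ms"))))
    return hits
```

In practice the lines would be pulled from CloudWatch Logs (via the AWS CLI or boto3) or expressed directly as a CloudWatch Logs Insights query; this pure-Python version is handy in ad-hoc triage scripts.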
Posted 1 day ago
1.0 years
0 Lacs
hyderabad, telangana, india
On-site
About the job

About NxtWave
NxtWave is one of India's fastest-growing ed-tech startups, revolutionizing the 21st-century job market. NxtWave is transforming youth into highly skilled tech professionals through its CCBP 4.0 programs, regardless of their educational background. NxtWave is founded by Rahul Attuluri (Ex Amazon, IIIT Hyderabad), Sashank Reddy (IIT Bombay) and Anupam Pedarla (IIT Kharagpur). Supported by Orios Ventures, Better Capital, and Marquee Angels, NxtWave raised $33 million in 2023 from Greater Pacific Capital.

As an official partner for NSDC (under the Ministry of Skill Development & Entrepreneurship, Govt. of India) and recognized by NASSCOM, NxtWave has earned a reputation for excellence. Some of its prestigious recognitions include:
● Technology Pioneer 2024 by the World Economic Forum, one of only 100 startups chosen globally
● 'Startup Spotlight Award of the Year' by T-Hub in 2023
● 'Best Tech Skilling EdTech Startup of the Year 2022' by Times Business Awards
● 'The Greatest Brand in Education' in a research-based listing by URS Media
● NxtWave founders Anupam Pedarla and Sashank Gujjula were honoured in the 2024 Forbes India 30 Under 30 for their contributions to tech education

NxtWave breaks learning barriers by offering vernacular content for better comprehension and retention. NxtWave now has paid subscribers from 650+ districts across India. Its learners are hired by over 2000+ companies including Amazon, Accenture, IBM, Bank of America, TCS, Deloitte and more.

Know more about NxtWave: https://www.ccbp.in
Read more about us in the news – Economic Times | CNBC | YourStory | VCCircle

Why NxtWave
As a Fullstack SDE1 at NxtWave, you
● Get first-hand experience of building applications and see them released quickly to the NxtWave learners (within weeks)
● Get to take ownership of the features you build and work closely with the product team
● Work in a great culture that continuously empowers you to grow in your career
● Enjoy freedom to experiment & learn from mistakes (Fail Fast, Learn Faster)
● NxtWave is one of the fastest growing edtech startups. Get first-hand experience in scaling the features you build as the company grows rapidly
● Build in a world-class developer environment by applying clean coding principles, code architecture, etc.

Responsibilities
● Design, implement, and ship user-centric features spanning frontend, backend, and database systems under guidance.
● Define and implement RESTful/GraphQL APIs and efficient, scalable database schemas.
● Build reusable, maintainable frontend components using modern state management practices.
● Develop backend services in Node.js or Python, adhering to clean-architecture principles.
● Write and maintain unit, integration, and end-to-end tests to ensure code quality and reliability.
● Containerize applications and configure CI/CD pipelines for automated builds and deployments.
● Enforce secure coding practices, accessibility standards (WCAG), and SEO fundamentals.
● Collaborate effectively with Product, Design, and engineering teams to understand and implement feature requirements.
● Own feature delivery from planning through production, and mentor interns or junior developers.

Qualifications & Skills
● 1+ years of experience building full-stack web applications.
● Proficiency in JavaScript (ES6+), TypeScript, HTML5, and CSS3 (Flexbox/Grid).
● Advanced experience with React (Hooks, Context, Router) or equivalent modern UI framework.
● Hands-on with state management patterns (Redux, MobX, or custom solutions).
● Strong backend skills in Node.js (Express/Fastify) or Python (Django/Flask/FastAPI).
● Expertise in designing REST and/or GraphQL APIs and integrating with backend services.
● Solid knowledge of MySQL/PostgreSQL and familiarity with NoSQL stores (Elasticsearch, Redis).
● Experience using build tools (Webpack, Vite), package managers (npm/Yarn), and Git workflows.
● Skilled in writing and maintaining tests with Jest, React Testing Library, Pytest, and Cypress.
● Familiar with Docker, CI/CD tools (GitHub Actions, Jenkins), and basic cloud deployments.
● Product-first thinker with strong problem-solving, debugging, and communication skills.

Qualities we'd love to find in you:
● The attitude to always strive for the best outcomes and an enthusiasm to deliver high-quality software
● Strong collaboration abilities and a flexible & friendly approach to working with teams
● Strong determination with a constant eye on solutions
● Creative ideas with a problem-solving mind-set
● Openness to receiving objective criticism and improving upon it
● Eagerness to learn and zeal to grow
● Strong communication skills are a huge plus

Work Location: Hyderabad
Posted 1 day ago
1.0 years
0 Lacs
hyderabad, telangana, india
On-site
As a Fullstack SDE1 at NxtWave, you
Get first-hand experience of building applications and see them released quickly to the NxtWave learners (within weeks)
Get to take ownership of the features you build and work closely with the product team
Work in a great culture that continuously empowers you to grow in your career
Enjoy freedom to experiment & learn from mistakes (Fail Fast, Learn Faster)
NxtWave is one of the fastest growing edtech startups. Get first-hand experience in scaling the features you build as the company grows rapidly
Build in a world-class developer environment by applying clean coding principles, code architecture, etc.

Responsibilities
Design, implement, and ship user-centric features spanning frontend, backend, and database systems under guidance.
Define and implement RESTful/GraphQL APIs and efficient, scalable database schemas.
Build reusable, maintainable frontend components using modern state management practices.
Develop backend services in Node.js or Python, adhering to clean-architecture principles.
Write and maintain unit, integration, and end-to-end tests to ensure code quality and reliability.
Containerize applications and configure CI/CD pipelines for automated builds and deployments.
Enforce secure coding practices, accessibility standards (WCAG), and SEO fundamentals.
Collaborate effectively with Product, Design, and engineering teams to understand and implement feature requirements.
Own feature delivery from planning through production, and mentor interns or junior developers.

Qualifications & Skills
1+ years of experience building full-stack web applications.
Proficiency in JavaScript (ES6+), TypeScript, HTML5, and CSS3 (Flexbox/Grid).
Advanced experience with React (Hooks, Context, Router) or equivalent modern UI framework.
Hands-on with state management patterns (Redux, MobX, or custom solutions).
Strong backend skills in Node.js (Express/Fastify) or Python (Django/Flask/FastAPI).
Expertise in designing REST and/or GraphQL APIs and integrating with backend services.
Solid knowledge of MySQL/PostgreSQL and familiarity with NoSQL stores (Elasticsearch, Redis).
Experience using build tools (Webpack, Vite), package managers (npm/Yarn), and Git workflows.
Skilled in writing and maintaining tests with Jest, React Testing Library, Pytest, and Cypress.
Familiar with Docker, CI/CD tools (GitHub Actions, Jenkins), and basic cloud deployments.
Product-first thinker with strong problem-solving, debugging, and communication skills.

Qualities we'd love to find in you:
The attitude to always strive for the best outcomes and an enthusiasm to deliver high-quality software
Strong collaboration abilities and a flexible & friendly approach to working with teams
Strong determination with a constant eye on solutions
Creative ideas with a problem-solving mind-set
Openness to receiving objective criticism and improving upon it
Eagerness to learn and zeal to grow
Strong communication skills are a huge plus

Work Location: Hyderabad
Posted 1 day ago
5.0 years
0 Lacs
india
Remote
Role: Lead Engineer – Databases
Type: Individual Contributor
Work Type: Remote
Required experience in two databases: a) Elasticsearch + ClickHouse, or b) Elasticsearch + MongoDB
Reporting to: Sr. EM (Data & Platform team)

About the Role: We are seeking an experienced Lead Engineer – Databases with deep expertise in Elasticsearch, ClickHouse, and other modern data systems. You will lead the design, development, and optimization of scalable, high-throughput database solutions that power key products and internal systems. This role emphasizes performance tuning, efficient data modeling, and the deployment of cloud-native database architectures that can handle both real-time and batch processing at scale.

Responsibilities:
- Design and optimize usage of Elasticsearch (e.g., Amazon OpenSearch) and ClickHouse for operational and analytical workloads
- Develop efficient, scalable data models for document, search, and columnar storage patterns
- Continuously monitor and improve query performance, indexing strategies, and cost-effectiveness of managed services
- Collaborate closely with engineering, data, and DevOps teams to integrate database best practices into applications and pipelines
- Ensure high availability and performance of structured, semi-structured, and unstructured data workloads
- Define and enforce best practices for schema governance, access control, and data lifecycle management
- Evaluate and benchmark new managed database services to support evolving product needs
- Automate performance monitoring and alerting using tools such as Datadog, CloudWatch, or equivalents

Requirements:
- 5+ years of experience working with cloud-managed database platforms
- Deep expertise in Elasticsearch/OpenSearch (indexing, search relevance, aggregation) and ClickHouse (columnar storage, high-throughput queries)
- Strong experience with SQL and NoSQL systems in production environments at scale
- Proven ability to optimize query performance and storage in high-scale environments
- Experience integrating databases with application stacks and streaming pipelines (e.g., Kafka)
- Proficiency in at least two programming languages (e.g., NodeJS, Go, Rust) for automation and tooling
- Understanding of cloud-native security, RBAC, and compliance requirements
- Experience with database observability, monitoring, and alerting systems

Preferred Skills:
- Experience with managed databases such as Firestore, MongoDB, or Redis
- Exposure to other managed data platforms like BigQuery, DynamoDB, Snowflake, etc.
- Familiarity with distributed systems, multi-region data patterns, and high-availability strategies
- Background in building or supporting analytics, observability, or search platforms.
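The Elasticsearch/OpenSearch work described above centers on the JSON query DSL (indexing, search relevance, aggregation). As a minimal sketch of an aggregation request body, assuming a hypothetical log index with `status` and `service.keyword` fields (these names are illustrative, not from the posting):

```python
import json

def build_top_errors_query(status_field="status", size=5):
    """Build an Elasticsearch/OpenSearch query body (a plain dict) that
    filters to error documents and aggregates the top offending services.
    Field names here are hypothetical examples."""
    return {
        "size": 0,  # only the aggregation is wanted, not individual hits
        "query": {"term": {status_field: "error"}},
        "aggs": {
            "by_service": {
                "terms": {"field": "service.keyword", "size": size}
            }
        },
    }

# The dict serializes directly to the request body a client would POST
# to /<index>/_search.
print(json.dumps(build_top_errors_query(), indent=2))
```

In a real deployment this body would be sent through an official client; the sketch only shows the DSL shape that the "indexing, search relevance, aggregation" expertise in the listing refers to.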
5.0 years
0 Lacs
chennai, tamil nadu, india
On-site
C1X AdTech Pvt Ltd is a fast-growing product and engineering-based AdTech company building next-generation advertising and marketing technology platforms. Our mission is to empower enterprise clients with the smartest marketing solutions, enabling seamless integration with personalization engines and delivering cross-channel marketing capabilities. We focus on enhancing customer engagement and experiences while driving measurable growth in Lifetime Value (LTV). As part of our world-class engineering team, you will work across front end (UI), back end (API/Java), Big Data, and Real-Time Bidding (RTB) systems to deliver high-performance, scalable products.

Role Overview: As an RTB Engineer, you will be a key contributor in building and scaling our RTB servers, integrating with top global exchanges, ensuring ultra-low-latency performance, and continuously optimizing for business needs. This role demands strong AdTech domain knowledge, expertise in RTB protocols, and a passion for solving complex engineering problems.

Objectives:
- Design and maintain scalable, efficient backend code with seamless third-party integrations.
- Build robust, high-performance, and resilient back-end systems for RTB and AdTech platforms.
- Manage integrations with leading SSP and DSP partners across the industry.

Responsibilities:
- Design, develop, and maintain RTB systems using Java and Spring Boot.
- Integrate with global ad exchanges via OpenRTB and other protocols.
- Conduct code and design reviews to ensure best practices and high-quality deliverables.
- Monitor, analyze, and optimize RTB server performance (latency, throughput, GC tuning).
- Collaborate with product teams to translate business requirements into scalable solutions.
- Implement, manage, and monitor ad campaigns while troubleshooting technical issues.
- Work with large-scale datasets to identify optimization opportunities and improve system efficiency.
- Drive continuous improvements in reliability, scalability, and fault tolerance.

Qualifications:
- Bachelor’s degree in Computer Science/Engineering or equivalent.
- Strong knowledge of data structures, algorithms, and distributed system design.
- 3–5 years’ experience in digital advertising (display, video, mobile).
- 2+ years of hands-on experience with RTB systems.
- Deep understanding of the OpenRTB protocol.
- Expertise in Java, Java EE/J2EE, and Spring Boot.
- Experience with Hibernate, NIO, and microservice architectures.
- Strong debugging skills (memory leaks, thread starvation, heap/thread dumps).
- Understanding of Garbage Collection policies and JVM tuning.
- Experience with Kafka and the ELK stack (Elasticsearch, Logstash, Kibana) is a plus.
- Knowledge of server-side scripting and performance profiling tools is desirable.
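The OpenRTB integration at the heart of this role exchanges JSON bid requests and responses. A heavily simplified sketch of the message shape (the production systems here are Java/Spring Boot; this Python version with a flat CPM and placeholder ad markup is illustrative only, covering a small subset of OpenRTB 2.x fields):

```python
import json

def handle_bid_request(raw: str, cpm: float = 1.25) -> str:
    """Echo the request id and bid a flat CPM on every impression.
    Real bidders apply targeting, pacing, and price models under a
    strict latency budget; this only demonstrates the message shape."""
    req = json.loads(raw)
    bids = [
        {"impid": imp["id"], "price": cpm, "adm": "<ad markup here>"}
        for imp in req.get("imp", [])  # one bid object per impression
    ]
    resp = {"id": req["id"], "seatbid": [{"bid": bids}]}
    return json.dumps(resp)

sample = '{"id": "req-1", "imp": [{"id": "1"}, {"id": "2"}]}'
print(handle_bid_request(sample))
```

The latency, throughput, and GC-tuning responsibilities above exist because this request/response cycle typically must complete within a double-digit-millisecond exchange timeout.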
8.0 years
0 Lacs
chennai, tamil nadu, india
On-site
C1X AdTech Pvt Ltd is a fast-growing product and engineering-driven AdTech company building next-generation advertising and marketing technology platforms. Our mission is to empower enterprise clients with the smartest marketing solutions, enabling seamless integration with personalization engines and delivering cross-channel marketing capabilities. We focus on enhancing customer engagement and system reliability while delivering scalable, high-performance products that increase customer Lifetime Value (LTV). Our engineering team covers front end (UI), back end (Java/Node.js APIs), Big Data, and DevOps/SRE, working together to build world-class platforms for digital advertising and marketing.

Role Overview: As a DevOps Engineer, you will be responsible for managing cloud-native infrastructure, supporting CI/CD pipelines, and ensuring system reliability and scalability. You will automate deployment processes, maintain server environments, monitor performance, and collaborate with development and QA teams across the product lifecycle.

Objectives:
- Design and manage scalable, cloud-native infrastructure using GCP, Kubernetes, and Argo CD.
- Implement observability and monitoring tools (ELK, Prometheus, Grafana) for complete system visibility.
- Enable real-time data pipelines using Apache Kafka and GCP Dataproc.
- Automate CI/CD pipelines using GitHub Actions with GitOps best practices.

Responsibilities:
- Build, manage, and monitor Kubernetes (GKE) clusters and containerized workloads with Argo CD.
- Design and maintain CI/CD pipelines using GitHub Actions and GitOps.
- Configure and manage real-time streaming pipelines with Apache Kafka and GCP Dataproc.
- Manage logging/observability infrastructure with Elasticsearch, Logstash, and Kibana (ELK stack).
- Secure and optimize GCP services including Artifact Registry, Compute Engine, Cloud Storage, VPC, and IAM.
- Implement caching and session stores using Redis for scalability.
- Monitor system health, availability, and performance with Prometheus, Grafana, and the ELK stack.
- Collaborate with engineering and QA teams to streamline deployments.
- Automate infrastructure provisioning using Terraform, Bash, or Python.
- Maintain backup, failover, and disaster recovery strategies for production.

Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or a related technical field.
- 4–8 years’ experience in DevOps, Cloud Infrastructure, or Site Reliability Engineering.
- Strong experience with Google Cloud Platform (GCP), including GKE, IAM, VPC, Artifact Registry, and Dataproc.
- Hands-on with Kubernetes, Argo CD, and GitHub Actions for CI/CD workflows.
- Proficiency with Apache Kafka for real-time data streaming.
- Experience managing the ELK stack (Elasticsearch, Logstash, Kibana) in production.
- Working knowledge of Redis for distributed caching/session management.
- Skilled in scripting/automation with Bash, Python, and Terraform.
- Strong understanding of containerization, infrastructure-as-code, and monitoring.
- Familiarity with cloud security, IAM policies, and compliance best practices.
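The "monitor system health, availability, and performance" responsibility above usually reduces to tracking latency percentiles (p95/p99) against an SLO. Monitoring stacks like Prometheus derive these from histogram buckets; as a hedged sketch of the underlying idea, here is a nearest-rank percentile over raw samples (the sample data is invented for illustration):

```python
def percentile(samples, pct):
    """Nearest-rank percentile over raw latency samples (ms): the
    smallest value with at least pct% of samples at or below it."""
    ordered = sorted(samples)
    # ceil(len * pct / 100) without importing math
    rank = max(1, -(-len(ordered) * pct // 100))
    return ordered[rank - 1]

# A handful of request latencies with two slow outliers (hypothetical data).
latencies_ms = [12, 15, 11, 230, 14, 13, 16, 12, 18, 400]
print("p50:", percentile(latencies_ms, 50))  # p50: 14
print("p95:", percentile(latencies_ms, 95))  # p95: 400
```

Percentiles, unlike averages, expose exactly the tail behavior that alerting rules in Grafana/Prometheus are written against: the mean of this sample hides the fact that 1 in 10 requests takes 400 ms.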
5.0 years
0 Lacs
chennai, tamil nadu, india
On-site
C1X AdTech Pvt Ltd is a fast-growing product and engineering-driven AdTech company building next-generation advertising and marketing technology platforms. Our mission is to empower enterprise clients with the smartest marketing solutions, enabling seamless integration with personalization engines and delivering cross-channel marketing capabilities. We are dedicated to enhancing customer engagement and experiences while focusing on increasing Lifetime Value (LTV) through consistent messaging across all channels. Our world-class engineering team spans front end (UI), back end (Java/Node.js APIs), Big Data, and DevOps, working together to deliver high-performance and scalable AdTech products.

Role Overview: As a QA Engineer, you will ensure the quality, reliability, and performance of our platforms by designing and executing test strategies, identifying defects, and collaborating with cross-functional teams. You’ll validate RTB systems, integrations with SSP/DSP partners, and campaign delivery workflows, ensuring seamless experiences across our advertising platforms.

Objectives:
- Design and maintain scalable QA test suites (manual + automated) for complex backend systems and integrations.
- Ensure the robustness, performance, and reliability of RTB and AdTech platforms.
- Validate standards and integrations with top SSPs and DSPs to guarantee seamless delivery.

Responsibilities:
- Design and execute comprehensive test plans and test cases for backend systems, microservices, and integrations.
- Perform functional, regression, performance, and integration testing across backend and frontend services.
- Validate data integrity and accuracy for integrations using OpenRTB and related protocols.
- Own QA aspects of campaign implementation, monitoring, and reporting.
- Work closely with product and engineering teams to translate business requirements into test scenarios.
- Analyze large-scale logs and datasets to identify root causes and quality improvements.
- Troubleshoot campaign delivery and system issues in collaboration with internal and external partners.
- Conduct peer reviews of test strategies, automation scripts, and QA deliverables.

Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or a related field (demonstrated problem-solving skills on LeetCode/HackerRank may substitute).
- Solid foundation in QA principles, testing methodologies, and defect lifecycle management.
- 3–5 years’ QA experience in digital advertising or the B2B digital marketing industry.
- Familiarity with web technologies, RESTful APIs, and distributed systems.
- Hands-on with automation frameworks/tools such as Playwright, Selenium, TestNG, JUnit, and REST Assured.
- Knowledge of performance testing tools (JMeter, Gatling) is a plus.
- Exposure to Kafka, the ELK stack (Elasticsearch, Logstash, Kibana), and scripting in Java/JavaScript is advantageous.
- Understanding of microservices architecture and QA strategies for distributed systems is a plus.
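The campaign-reporting QA described above boils down to automated checks on data integrity. A minimal sketch using Python's built-in `unittest` (the listing names Java-ecosystem tools like TestNG/JUnit; the validator and its payload fields are hypothetical, chosen only to show the pattern of asserting invariants on reporting data):

```python
import unittest

def validate_campaign_payload(payload: dict) -> list:
    """Return validation errors for a hypothetical campaign reporting
    payload; an empty list means the payload passes."""
    errors = []
    for field in ("campaign_id", "impressions", "clicks"):
        if field not in payload:
            errors.append(f"missing field: {field}")
    # A basic data-integrity invariant: CTR cannot exceed 100%.
    if not errors and payload["clicks"] > payload["impressions"]:
        errors.append("clicks cannot exceed impressions")
    return errors

class CampaignPayloadTest(unittest.TestCase):
    def test_valid_payload(self):
        self.assertEqual(validate_campaign_payload(
            {"campaign_id": "c1", "impressions": 100, "clicks": 4}), [])

    def test_click_inflation_is_flagged(self):
        errs = validate_campaign_payload(
            {"campaign_id": "c1", "impressions": 10, "clicks": 50})
        self.assertIn("clicks cannot exceed impressions", errs)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(CampaignPayloadTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The same invariant-checking pattern scales from unit tests like these to regression suites run against live OpenRTB integration logs.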
6.0 years
0 Lacs
chandigarh, india
On-site
Job Title: Technical Project Manager – Legal ERP & Research Operations
Experience: 6–7 Years

Role Overview: We are looking for a Project Manager with deep technical expertise in Legal ERP platforms, workflow automation, and legal operations technology. This role requires hands-on experience with ERP system implementations, API integrations, legal tech solutions, and structured project management frameworks (Agile/Waterfall/Hybrid). The candidate will be responsible for managing a team of legal researchers while driving technology adoption, process optimization, and ERP-driven reporting frameworks to ensure operational efficiency in litigation, arbitration, and legal research functions.

Key Responsibilities

ERP & Technology Integration:
- Lead the implementation, customization, and optimization of Legal ERP systems for case and research management.
- Define system architecture, data models, and integration frameworks with existing legal databases and internal tools.
- Collaborate with IT and vendors to ensure seamless API/ETL integrations between ERP platforms, document management systems (DMS), and research databases (e.g., LexisNexis, SCC Online, Westlaw).
- Manage ERP workflows, automation scripts, and rule-based triggers to streamline documentation, case updates, and compliance checks.

Project Management:
- Apply Agile methodologies (Scrum/Kanban) for sprint-based deliverables while ensuring compliance-driven processes.
- Utilize tools such as Jira, MS Project, or Azure DevOps for backlog management, sprint planning, and real-time reporting.
- Perform risk assessment, RACI mapping, and dependency management to ensure project milestones are met.

Data-Driven Legal Operations:
- Establish data pipelines and dashboards for case tracking, SLA monitoring, and research productivity.
- Define and track KPIs such as documentation accuracy, turnaround time (TAT), and case resolution cycle time.
- Implement analytics-driven insights to optimize research output and improve team efficiency.
- Ensure version control, audit trails, and metadata tagging across all legal documents for compliance and traceability.

Team & Quality Management:
- Manage legal research analysts, ensuring adherence to technical and quality standards.
- Conduct code/logic reviews of research automation workflows and ERP configurations.
- Drive adoption of knowledge management systems (KMS) with structured taxonomies and ontologies for legal data.
- Provide technical training on ERP modules, legal research tools, and process automation frameworks.

Requirements

Education: Bachelor's degree in Computer Science, Information Technology, or Engineering (mandatory). Certifications: PMP / PRINCE2 / Agile Scrum Master / ITIL preferred.

Experience:
- 6–7 years in project management, with at least 3+ years in Legal ERP / Legal Tech implementations.
- Strong track record in workflow automation, ERP customizations, and system integrations.
- Experience working within legal operations, arbitration, or litigation support environments.

Technical Skills & Tools:
- ERP Platforms: SAP Legal, Oracle Legal ERP, Mitratech, or equivalent legal ERP systems.
- Databases & Research Tools: SQL, Elasticsearch, LexisNexis, SCC Online, Westlaw.
- Project Management Tools: Jira, Confluence, MS Project, Trello, Asana.
- Automation & Scripting: Python, VBA, Power Automate, or low-code/no-code tools.
- Reporting & Analytics: Power BI, Tableau, or Qlik for dashboard creation.
- Collaboration & Compliance: SharePoint, DMS tools, ISO 27001/GDPR compliance practices.
(ref:hirist.tech)
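One of the KPIs named above, turnaround time (TAT), is simple to compute once case timestamps flow out of the ERP. A hedged sketch in Python (the posting lists Python among its scripting tools; the `(opened, closed)` pair format is a hypothetical data shape, not any specific ERP schema):

```python
from datetime import datetime

def average_tat_hours(cases):
    """Average turnaround time (TAT) in hours across closed cases.
    Each case is an (opened, closed) pair of ISO-8601 timestamps."""
    durations = [
        (datetime.fromisoformat(closed) - datetime.fromisoformat(opened)).total_seconds() / 3600
        for opened, closed in cases
    ]
    return sum(durations) / len(durations)

# Two illustrative cases: one resolved in 24 hours, one in 12.
cases = [
    ("2024-05-01T09:00:00", "2024-05-02T09:00:00"),
    ("2024-05-03T10:00:00", "2024-05-03T22:00:00"),
]
print(f"avg TAT: {average_tat_hours(cases):.1f} h")  # avg TAT: 18.0 h
```

In practice a metric like this would feed the Power BI/Tableau dashboards mentioned in the tooling list rather than be printed directly.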
10.0 years
0 Lacs
chandigarh, india
On-site
Job Title: Senior Data Architect
Location: Bangalore/Chandigarh
Job Type: Full-time
Experience: 10+ years

Job Summary: We are looking for an experienced Data Architect to lead the design, development, and optimization of our modern data infrastructure. The ideal candidate will have deep expertise in big data platforms, data lakes, and lakehouse architectures, plus hands-on experience with modern tools such as Spark clusters, PySpark, Apache Iceberg, the Nessie catalog, and Apache Airflow. This role will be pivotal in evolving our data platform, including database migrations, scalable data pipelines, and governance-ready architectures for both analytical and operational use cases.

Key Responsibilities:
- Design and implement scalable and reliable data architectures for real-time and batch processing systems
- Evaluate and recommend data tools, frameworks, and infrastructure aligned with company goals
- Develop and optimize complex ETL/ELT pipelines using PySpark and Apache Airflow
- Architect and manage data lakes using Spark on Apache Iceberg and the Nessie catalog for versioned and governed data workflows
- Perform data analysis, data profiling, data quality improvements, and data modeling
- Lead database migration efforts, including planning, execution, and optimization
- Define and enforce data engineering best practices, data governance standards, and schema evolution strategies
- Collaborate cross-functionally with data scientists, analysts, platform engineers, and business teams

Skills & Qualifications:
- 10+ years of experience in data architecture, data engineering, data security, data governance, and big data platforms
- Deep understanding of trade-offs between managed services and open-source data stack tools, including cost, scalability, operational overhead, flexibility, and vendor lock-in
- Strong hands-on experience with PySpark for writing data pipelines and distributed data processing
- Proven expertise with Apache Iceberg, Apache Hudi, and the Nessie catalog for modern table formats and versioned data catalogs
- Experience in scaling and managing Elasticsearch and PostgreSQL clusters
- Strong experience with Apache Airflow for workflow orchestration (or equivalent tools)
- Demonstrated success in database migration projects across multiple cloud providers
- Ability to perform deep data analysis and compare datasets between systems
- Experience handling hundreds of terabytes of data or more
- Proficiency in SQL, data modeling, and performance tuning
- Excellent communication and presentation skills, with the ability to lead technical conversations

Nice to Have:
- Experience in Sales, Marketing, and CRM domains, especially with Accounts and Contacts data
- Knowledge of AI and vector databases
- Exposure to streaming data frameworks (Kafka, Flink, etc.)
- Ability to support analytics and reporting initiatives

Why Join Us:
- Work on cutting-edge data architectures using modern open-source technologies
- Be part of a team transforming data operations and analytics at scale
- Opportunity to architect high-impact systems from the ground up
- Join a collaborative, innovation-driven culture
(ref:hirist.tech)
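"Compare datasets between systems" at hundreds-of-terabytes scale usually means comparing fingerprints rather than rows, since sorting and diffing full extracts is impractical during a migration. A hedged sketch of one common approach, an order-independent row-hash fingerprint (the row serialization here is deliberately simplified; production pipelines normalize types, encodings, and NULLs first, typically inside Spark rather than plain Python):

```python
import hashlib

def table_fingerprint(rows):
    """Order-independent fingerprint of a table extract: row count plus
    an XOR-fold of per-row hashes. Two systems can then be compared by
    exchanging one small (count, checksum) pair each."""
    count, acc = 0, 0
    for row in rows:
        digest = hashlib.sha256("|".join(map(str, row)).encode()).digest()
        acc ^= int.from_bytes(digest[:8], "big")  # XOR makes row order irrelevant
        count += 1
    return count, acc

source = [(1, "alice"), (2, "bob")]
target = [(2, "bob"), (1, "alice")]          # same rows, different order
assert table_fingerprint(source) == table_fingerprint(target)
drifted = [(1, "alice"), (2, "bobby")]       # one changed value is detected
assert table_fingerprint(source) != table_fingerprint(drifted)
print("fingerprints match for reordered extracts")
```

When a fingerprint mismatch is found, the comparison is rerun per partition or key range to localize the drifted rows, which keeps the deep-dive analysis bounded.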