15.0 years
0 Lacs
India
Remote
About Us
MyRemoteTeam, Inc is a fast-growing distributed workforce enabler, helping companies scale with top global talent. We empower businesses by providing world-class software engineers, operations support, and infrastructure to help them grow faster and better.

Job Title: AWS Cloud Architect
Experience: 15+ Years
Location: Any Infosys DC

We are looking for Cloud Application Principal Engineers with a skill set spanning database development, architectural design, implementation, and performance tuning across both on-premise and cloud technologies.

Mandatory Skills
✔ 15+ years in Java Full Stack (Spring Boot, Microservices, ReactJS)
✔ Cloud Architecture: AWS EKS, Kubernetes, API Gateway (APIGEE/Tyk)
✔ Event Streaming: Kafka, RabbitMQ
✔ Database Mastery: PostgreSQL (performance tuning, scaling)
✔ DevOps: GitLab CI/CD, Terraform, Grafana/Prometheus
✔ Leadership: Technical mentoring, decision-making

About the Role
We are seeking a highly experienced AWS Cloud Architect with 15+ years of expertise in full-stack Java development, cloud-native architecture, and large-scale distributed systems. The ideal candidate will be a technical leader capable of designing, implementing, and optimizing high-performance cloud applications across on-premise and multi-cloud environments (AWS). This role requires deep hands-on skills in Java, Microservices, Kubernetes, Kafka, and observability tools, along with a strong architectural mindset to drive innovation and mentor engineering teams.

Key Responsibilities
Cloud-Native Architecture & Leadership: Lead the design, development, and deployment of scalable, fault-tolerant cloud applications (AWS EKS, Kubernetes, Serverless). Define best practices for microservices, event-driven architecture (Kafka), and API management (APIGEE/Tyk). Architect hybrid cloud solutions (on-premise + AWS/GCP) with security, cost optimization, and high availability.
Full-Stack Development: Develop backend services using Java, Spring Boot, and PostgreSQL (performance tuning, indexing, replication). Build modern frontends with ReactJS (state management, performance optimization). Design REST/gRPC APIs and event-driven systems (Kafka, SQS).
DevOps & Observability: Manage Kubernetes (EKS) clusters, Helm charts, and GitLab CI/CD pipelines. Implement Infrastructure as Code (IaC) using Terraform/CloudFormation. Set up monitoring (Grafana, Prometheus), logging (ELK), and alerting for production systems.
Database & Performance Engineering: Optimize PostgreSQL for high throughput, replication, and low-latency queries. Troubleshoot database bottlenecks, caching (Redis), and connection pooling. Design data migration strategies (on-premise → cloud).
Mentorship & Innovation: Mentor junior engineers and conduct architecture reviews. Drive POCs on emerging tech (Service Mesh, Serverless, AI/ML integrations). Collaborate with CTO/Architects on long-term technical roadmaps.
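The responsibilities above call out connection pooling and Redis caching as areas for troubleshooting PostgreSQL bottlenecks. A minimal Python sketch of that read-through-cache pattern is shown below, assuming a local PostgreSQL instance and Redis server; the DSN, table, and cache TTL are illustrative placeholders rather than details from the posting.

```python
# Minimal read-through cache in front of PostgreSQL, using a connection pool.
# Assumes psycopg2 and redis-py are installed; connection details are placeholders.
import json

import psycopg2.pool
import redis

PG_POOL = psycopg2.pool.SimpleConnectionPool(
    minconn=1, maxconn=10,
    dsn="dbname=orders user=app password=secret host=localhost",
)
CACHE = redis.Redis(host="localhost", port=6379, decode_responses=True)
CACHE_TTL_SECONDS = 60


def get_order(order_id: int) -> dict | None:
    """Return an order row, serving from Redis when possible."""
    cache_key = f"order:{order_id}"
    cached = CACHE.get(cache_key)
    if cached is not None:
        return json.loads(cached)

    conn = PG_POOL.getconn()          # borrow a pooled connection
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT id, status, total FROM orders WHERE id = %s", (order_id,)
            )
            row = cur.fetchone()
    finally:
        PG_POOL.putconn(conn)         # always return the connection to the pool

    if row is None:
        return None
    order = {"id": row[0], "status": row[1], "total": float(row[2])}
    CACHE.setex(cache_key, CACHE_TTL_SECONDS, json.dumps(order))
    return order
```

The pool keeps connection counts bounded under load, while the short TTL keeps cached reads reasonably fresh; both limits would be tuned per workload.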
Posted 1 day ago
7.0 years
0 Lacs
India
Remote
Senior DevOps (Azure, Terraform, Kubernetes) Engineer
Location: Remote (initial 2–3 months in the Abu Dhabi office, then remote from India)
Type: Full-time | Long-term | Direct Client Hire
Client: Abu Dhabi Government

About The Role
Our client, the UAE (Abu Dhabi) Government, is seeking a highly skilled Senior DevOps Engineer (with skills in Azure, Terraform, Kubernetes, and Argo) to join their growing cloud and AI engineering team. This role is ideal for candidates with a strong foundation in Azure cloud and DevOps practices.

Key Responsibilities
Design, implement, and manage CI/CD pipelines using tools such as Jenkins, GitHub Actions, or Azure DevOps, deploying to AKS
Develop and maintain Infrastructure-as-Code using Terraform
Manage container orchestration environments using Kubernetes
Ensure cloud infrastructure is optimized, secure, and monitored effectively
Collaborate with data science teams to support ML model deployment and operationalization
Implement MLOps best practices, including model versioning, deployment strategies (e.g., blue-green), monitoring (data drift, concept drift), and experiment tracking (e.g., MLflow)
Build and maintain automated ML pipelines to streamline model lifecycle management

Required Skills
7+ years of experience in DevOps and/or MLOps roles
Proficient in CI/CD tools: Jenkins, GitHub Actions, Azure DevOps
Strong expertise in Terraform and cloud-native infrastructure (AWS preferred)
Hands-on experience with Kubernetes, Docker, and microservices
Solid understanding of cloud networking, security, and monitoring
Scripting proficiency in Bash and Python

Preferred Skills
Experience with MLflow, TFX, Kubeflow, or SageMaker Pipelines
Knowledge of model performance monitoring and ML system reliability
Familiarity with the AWS MLOps stack or equivalent tools on Azure/GCP

Skills: Argo, Terraform, Kubernetes, Azure
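The MLOps bullet above mentions experiment tracking with MLflow. A minimal sketch of how a training run might be logged follows; the experiment name, parameters, and metric values are illustrative assumptions, not details from the posting.

```python
# Minimal MLflow experiment-tracking sketch (assumes a reachable tracking server
# or a local ./mlruns directory; names and values below are placeholders).
import mlflow

mlflow.set_experiment("churn-model")            # hypothetical experiment name

with mlflow.start_run(run_name="baseline"):
    # Log the hyperparameters used for this run.
    mlflow.log_param("learning_rate", 0.05)
    mlflow.log_param("max_depth", 6)

    # ... train the model here ...
    validation_auc = 0.91                        # placeholder result

    # Log evaluation metrics so runs can be compared in the MLflow UI.
    mlflow.log_metric("val_auc", validation_auc)

    # Tag the run to support model-versioning / promotion workflows.
    mlflow.set_tag("deployment_candidate", "true")
```

Logged parameters, metrics, and tags like these are what later feed model-registry promotion and blue-green rollout decisions.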
Posted 1 day ago
15.0 years
0 Lacs
India
Remote
About Us
MyRemoteTeam, Inc is a fast-growing distributed workforce enabler, helping companies scale with top global talent. We empower businesses by providing world-class software engineers, operations support, and infrastructure to help them grow faster and better.

Job Title: AWS Cloud Architect
Experience: 15+ Years
Location: PAN India (hybrid work model)

Mandatory Skills
✔ 15+ years in Java Full Stack (Spring Boot, Microservices, ReactJS)
✔ Cloud Architecture: AWS EKS, Kubernetes, API Gateway (APIGEE/Tyk)
✔ Event Streaming: Kafka, RabbitMQ
✔ Database Mastery: PostgreSQL (performance tuning, scaling)
✔ DevOps: GitLab CI/CD, Terraform, Grafana/Prometheus
✔ Leadership: Technical mentoring, decision-making

About the Role
We are seeking a highly experienced AWS Cloud Architect with 15+ years of expertise in full-stack Java development, cloud-native architecture, and large-scale distributed systems. The ideal candidate will be a technical leader capable of designing, implementing, and optimizing high-performance cloud applications across on-premise and multi-cloud environments (AWS). This role requires deep hands-on skills in Java, Microservices, Kubernetes, Kafka, and observability tools, along with a strong architectural mindset to drive innovation and mentor engineering teams.

Key Responsibilities
✅ Cloud-Native Architecture & Leadership: Lead the design, development, and deployment of scalable, fault-tolerant cloud applications (AWS EKS, Kubernetes, Serverless). Define best practices for microservices, event-driven architecture (Kafka), and API management (APIGEE/Tyk). Architect hybrid cloud solutions (on-premise + AWS/GCP) with security, cost optimization, and high availability.
✅ Full-Stack Development: Develop backend services using Java, Spring Boot, and PostgreSQL (performance tuning, indexing, replication). Build modern frontends with ReactJS (state management, performance optimization). Design REST/gRPC APIs and event-driven systems (Kafka, SQS).
✅ DevOps & Observability: Manage Kubernetes (EKS) clusters, Helm charts, and GitLab CI/CD pipelines. Implement Infrastructure as Code (IaC) using Terraform/CloudFormation. Set up monitoring (Grafana, Prometheus), logging (ELK), and alerting for production systems.
✅ Database & Performance Engineering: Optimize PostgreSQL for high throughput, replication, and low-latency queries. Troubleshoot database bottlenecks, caching (Redis), and connection pooling. Design data migration strategies (on-premise → cloud).
✅ Mentorship & Innovation: Mentor junior engineers and conduct architecture reviews. Drive POCs on emerging tech (Service Mesh, Serverless, AI/ML integrations). Collaborate with CTO/Architects on long-term technical roadmaps.
Posted 1 day ago
4.0 - 7.0 years
7 - 11 Lacs
Noida
Work from Office
Design, implement, and maintain data pipelines for processing large datasets, ensuring data availability, quality, and efficiency for machine learning model training and inference.
Collaborate with data scientists to streamline the deployment of machine learning models, ensuring scalability, performance, and reliability in production environments.
Develop and optimize ETL (Extract, Transform, Load) processes, ensuring data flow from various sources into structured data storage systems.
Automate ML workflows using MLOps tools and frameworks (e.g., Kubeflow, MLflow, TensorFlow Extended (TFX)).
Ensure effective model monitoring, versioning, and logging to track performance and metrics in a production setting.
Collaborate with cross-functional teams to improve data architectures and facilitate the continuous integration and deployment of ML models.
Work on data storage solutions, including databases, data lakes, and cloud-based storage systems (e.g., AWS, GCP, Azure).
Ensure data security, integrity, and compliance with data governance policies.
Perform troubleshooting and root cause analysis on production-level machine learning systems.

Skills: AWS Glue, PySpark, AWS services, strong SQL. Nice to have: Redshift, knowledge of SAS datasets.

Mandatory Competencies
DevOps - Cloud - AWS
DevOps/Configuration Mgmt - Docker
ETL - AWS Glue
Big Data - PySpark
Database - Other Databases - Redshift
Data Science and Machine Learning - Azure ML
Behavioral - Communication and collaboration
DevOps/Configuration Mgmt - Containerization (Docker, Kubernetes)
Database - SQL Server - SQL Packages
Cloud - Azure - Azure Data Factory (ADF), Azure Databricks, Azure Data Lake Storage, Event Hubs, HDInsight
DevOps/Configuration Mgmt - Cloud Platforms - AWS
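Since the role centres on PySpark-based ETL for Glue-style pipelines, here is a minimal PySpark sketch of the read-transform-write pattern; the S3 paths, column names, and filter condition are illustrative assumptions, not details from the posting.

```python
# Minimal PySpark ETL sketch: read raw data, apply simple transforms, write out.
# Paths and column names are placeholders; run with spark-submit or inside a Glue job.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw CSV data (header row, inferred schema).
raw = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("s3://example-bucket/raw/orders/")
)

# Transform: drop incomplete rows, normalise types, add a load date.
cleaned = (
    raw.dropna(subset=["order_id", "amount"])
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("load_date", F.current_date())
    .filter(F.col("amount") > 0)
)

# Load: write partitioned Parquet for downstream training/inference jobs.
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("load_date")
    .parquet("s3://example-bucket/curated/orders/")
)

spark.stop()
```

Partitioning the curated output by load date keeps incremental training jobs cheap, since they can read only the partitions they need.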
Posted 1 day ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Location: Hyderabad and Chennai (immediate joiners)
Experience: 3 to 5 years
Mandatory skills: MLOps, model lifecycle, Python, PySpark, GCP (BigQuery, Dataproc & Airflow), and CI/CD

Required Skills and Experience:
Strong programming skills: proficiency in languages like Python, with experience in libraries like TensorFlow, PyTorch, or scikit-learn.
Cloud Computing: deep understanding of GCP services relevant to ML, such as Vertex AI, BigQuery, Cloud Storage, Dataflow, Dataproc, and others.
Data Science Fundamentals: solid foundation in machine learning concepts, statistical analysis, and data modeling.
Software Engineering Principles: experience with software development best practices, version control (e.g., Git), and testing methodologies.
MLOps: familiarity with MLOps principles and practices.
Data Engineering: experience in building and managing data pipelines.
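The mandatory skills include Airflow on GCP for orchestrating the model lifecycle. A minimal Airflow DAG sketch is shown below; the DAG id, schedule, and task bodies are illustrative assumptions rather than anything specified in the posting.

```python
# Minimal Airflow DAG sketch: two sequential tasks in a daily ML pipeline.
# DAG id, schedule, and task logic are placeholders (Airflow 2.4+ "schedule" argument).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_features(**context):
    # Placeholder: pull data (e.g., from BigQuery) and stage it for training.
    print("extracting features")


def train_model(**context):
    # Placeholder: submit a training job (e.g., to Dataproc or Vertex AI).
    print("training model")


with DAG(
    dag_id="ml_training_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_features", python_callable=extract_features)
    train = PythonOperator(task_id="train_model", python_callable=train_model)

    extract >> train   # run training only after feature extraction succeeds
```

In practice the placeholder callables would be replaced with provider operators (for example, BigQuery or Dataproc operators), but the dependency structure stays the same.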
Posted 1 day ago
10.0 - 18.0 years
40 - 45 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Job Summary:
We are looking for a visionary and hands-on Cloud Architect to join our Enterprise Architecture team. This role requires deep expertise in cloud-native architecture, AI-native infrastructure, and multi-cloud design. You will lead cloud transformation journeys, architect scalable and secure platforms, and integrate AI capabilities into enterprise ecosystems. Ideal for architects who thrive in fast-paced, customer-facing roles with a strong focus on innovation, governance, and modernization.

Key Responsibilities:
Design and implement cloud-native architectures using AWS, Azure, or GCP.
Architect and integrate AI-native services (e.g., Vertex AI, Azure OpenAI, AWS SageMaker) into enterprise workloads.
Define cloud adoption strategies, migration roadmaps, and cost-optimization frameworks.
Ensure cloud security, compliance, and governance across infrastructure and services.
Enable multi-cloud and hybrid-cloud deployment models, including data and service mesh architectures.
Support pre-sales and proposal activities with architecture blueprints and solution demos.
Mentor DevOps, SRE, and development teams on cloud best practices and AI service integration.
Establish cloud observability strategies (monitoring, logging, alerting, performance tuning).

Required Skills & Experience:
Deep expertise in one or more cloud platforms (AWS, Azure, GCP) with certifications.
Experience with Kubernetes, Docker, Terraform, and serverless architecture.
Hands-on integration of AI/ML services in cloud environments.
Knowledge of CI/CD, IaC, and DevSecOps practices.
Understanding of cloud cost modeling, budgeting, and FinOps.
Track record of leading large-scale cloud migration and transformation initiatives.
Familiarity with security, IAM, and compliance standards in cloud ecosystems.
10+ years in architecture roles with 4+ years in cloud solution architecture.

Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Preferred certifications: AWS Certified Solutions Architect Professional, Microsoft Azure Solutions Architect Expert, Google Professional Cloud Architect, and relevant AI/ML certifications.
Posted 1 day ago
3.0 years
35 - 55 Lacs
Visakhapatnam, Andhra Pradesh, India
Remote
Experience: 3.00+ years
Salary: INR 3500000-5500000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Javelin)
(*Note: This is a requirement for one of Uplers' clients - Javelin)

What do you need for this opportunity?
Must-have skills required: Next.js, React, LLM, RESTful APIs, Rust, AWS, Golang, Python, SQL, TypeScript, Vue

Javelin is looking for:
Welcome to Javelin, a cutting-edge AI production platform designed for LLM-forward enterprises. It enables enterprises to leverage AI technology securely and reliably. Large language models (LLMs) are powerful tools that offer a wide range of potential applications to add value to businesses. However, making these LLMs accessible to various teams and individuals in an organization presents security, cost management, and data handling challenges, including data leaks and intellectual property and PII/PHI risks. Javelin is a highly performant, ultra-low-latency gateway written in Golang that serves as a centralized gateway for LLM interactions across the enterprise to address these challenges. It enables organizations to manage access to LLMs effectively and make them available for experimentation and production use cases while ensuring robust security and compliance. Javelin offers several features, including centralized management of LLM credentials and a simple routing framework to integrate with various closed and open-source LLMs. With Javelin, organizations can secure LLMs from development to production.

Job Description
Function: Software Engineering → Full-Stack Development
Skills: Golang, JavaScript, Python, Flask, Node.js, React.js

Javelin is building an AI Security Platform for LLMs. We're seeking an experienced full-stack engineer to join our startup. As a critical member of our small, fast-paced team, you will design, implement, and maintain our Go-based and Python APIs and infrastructure. We are a remote-first organization. World-class investors like Aspenwood and Mozilla Ventures fund Javelin.

Responsibilities:
Architect, develop, and optimize Go- and Python-based APIs and services.
Collaborate with the team to design and evolve our system architecture.
Implement best practices for code quality, testing, and documentation.
Integrate with AWS and GCP services, Postgres databases, and Kubernetes deployments.
Contribute to the entire development lifecycle, from ideation to deployment and maintenance.
Mentor and guide junior team members.

Requirements:
3+ years of professional experience in Go or Python development, OR 7+ years of professional experience in other languages.
Strong understanding of Go or Python best practices, concurrency patterns, and performance optimization.
Expertise in designing and building RESTful APIs and microservices (see the sketch after this listing).
Experience owning the full development cycle of a project from inception to production.
Proficiency in SQL and working with relational databases.
Contributions to open-source projects, particularly LLM-related.
Experience building security tools or products.
Experience with Rust or TypeScript.
Experience building LLM technologies and their applications.
Experience building production web applications using a modern framework such as React, Next.js, Vue, or Svelte.

Interview Rounds:
R1: Tech screen round
R2: Project/take-home assignment
R3: Peer coding discussion
R4: Design discussion
R5: Discussion with CEO

How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
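For the Javelin listing above, which asks for expertise in building RESTful APIs in Python (Flask is named among the skills), here is a minimal Flask sketch of a JSON API; the endpoint names and payload fields are illustrative assumptions and are not Javelin's actual product API.

```python
# Minimal Flask REST API sketch: list and register hypothetical LLM "routes".
# Endpoint names and fields are placeholders, not Javelin's real interface.
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store for demonstration only; a real service would use a database.
ROUTES = {"default": {"provider": "openai", "model": "gpt-4o"}}


@app.route("/routes", methods=["GET"])
def list_routes():
    """Return all configured routes."""
    return jsonify(ROUTES)


@app.route("/routes", methods=["POST"])
def create_route():
    """Register a new route from a JSON body with name, provider, and model fields."""
    payload = request.get_json(force=True)
    name = payload.get("name")
    if not name:
        return jsonify({"error": "name is required"}), 400
    ROUTES[name] = {"provider": payload.get("provider"), "model": payload.get("model")}
    return jsonify(ROUTES[name]), 201


if __name__ == "__main__":
    app.run(port=8000, debug=True)
```

A quick manual test: `curl -X POST localhost:8000/routes -H "Content-Type: application/json" -d '{"name": "test", "provider": "local", "model": "demo"}'` followed by `curl localhost:8000/routes`.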
Posted 1 day ago
0 years
0 Lacs
Andhra Pradesh, India
On-site
Proven experience in managing and automating CI/CD pipelines using tools like Jenkins, Azure DevOps, or GitLab.
Expertise in cloud platforms (Azure, GCP) and experience with services like compute, storage, and API gateways.
Strong proficiency in containerization technologies (Docker) and orchestration tools (Kubernetes).
In-depth knowledge of monitoring, logging, and performance tuning tools.
Experience managing and deploying microservices-based applications and ensuring high availability, scalability, and resilience.
Familiarity with automation frameworks and scripting languages (Python, Bash, PowerShell).
Posted 1 day ago
0 years
0 Lacs
Andhra Pradesh, India
On-site
Solid understanding of and working experience with GCP fundamentals (e.g., Vertex AI, AI Platform Pipelines, Cloud Functions, GKE, ADK).
Python environment setup, dependency management (e.g., pip, conda), and API integrations (API keys, OAuth).
Exposure to NLP, machine learning, or data science projects.
Awareness of prompt engineering principles and how LLMs (like GPT, Claude, or LLaMA) are used in real-world applications.
Understanding of transformer architecture and how it powers modern NLP.
Exposure to AI code-assist tools.
Posted 1 day ago
8.0 - 12.0 years
25 - 40 Lacs
Noida, Gurugram
Hybrid
Position Overview:
We are currently seeking a highly proficient architect/engineer with advanced expertise in cloud technologies (AWS, GCP, and Azure). The ideal candidate will have a deep understanding of automated deployments, managing infrastructure provisioning, and architecting cloud environments for optimized performance and security.

Key Responsibilities:
Multi-Cloud Architecture: Design and manage robust multi-cloud environments across AWS, GCP, and Azure, ensuring configurations are scalable, secure, and efficient (AWS is a must; GCP/Azure is preferred).
Infrastructure as Code (IaC): Utilize tools such as Terraform to implement infrastructure, creating reusable templates that standardize cloud deployments.
CI/CD Pipelines: Develop and maintain robust CI/CD pipelines using GitHub Actions and Argo CD, streamlining deployment processes.
Container Orchestration: Oversee container orchestration via Kubernetes, managing containerized applications to ensure effective performance.
Monitoring and Observability: Implement and enhance comprehensive monitoring, logging, and observability frameworks to proactively manage the operational health of deployments.
Security and Compliance: Ensure all cloud configurations conform to the latest security protocols and compliance requirements.
Collaboration and Documentation: Work closely with cross-functional teams to integrate DevOps practices into broader development and operational frameworks. Document architectural designs and operational procedures to ensure consistency and clarity in cloud management practices.
Continuous Learning: Stay abreast of the latest advancements in cloud technologies, IaC, and DevOps tools to continuously refine and improve processes and infrastructure.

Required Qualifications:
Experience: 7+ years of relevant experience in cloud infrastructure, specifically focusing on architecture and DevOps methodologies.
Cloud Expertise: Extensive knowledge and hands-on experience in architecting and managing multi-cloud environments across AWS, GCP, and Azure.
IaC Proficiency: Skilled in using Terraform for writing declarative infrastructure as code, with significant experience in creating reusable templates.
Container Management: Expertise in Kubernetes for deploying clusters and managing scalable containerized applications.
CI/CD Tools: Strong experience in implementing CI/CD pipelines using GitHub Actions and Argo CD.
Monitoring Tools: Demonstrated ability in deploying monitoring, logging, and observability tools in complex environments.
Scripting Skills: Excellent scripting capabilities in Python, Bash, or other similar languages.
Problem-Solving: Proven track record of troubleshooting and optimizing cloud deployments.

Preferred Qualifications:
Security and Compliance: Background in securing cloud environments and ensuring adherence to industry standards.
Large-Scale Deployments: Experience in managing large-scale, high-availability cloud deployments.
Advanced Networking: Knowledge of advanced networking concepts and security practices.
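The role above asks for Python scripting around multi-cloud operations and security/compliance. As one concrete illustration, a minimal boto3 sketch that flags EC2 instances missing a required tag is shown below; the region, tag key, and credential setup are assumptions for the example, not requirements from the posting.

```python
# Minimal boto3 sketch: report running EC2 instances missing an "Owner" tag.
# Assumes AWS credentials are already configured; region and tag key are placeholders.
import boto3

REQUIRED_TAG = "Owner"

ec2 = boto3.client("ec2", region_name="ap-south-1")

paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if REQUIRED_TAG not in tags:
                print(f"{instance['InstanceId']} is missing the {REQUIRED_TAG} tag")
```

Small compliance checks like this are typically wired into a scheduled pipeline so untagged (and therefore unbillable-to-a-team) resources surface early.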
Posted 1 day ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
We’re Hiring: Data Scraping Engineer
📍 Location: [Remote/City, Country] | 🕒 Full-Time

Techno Province is looking for a skilled and detail-oriented Data Scraping Engineer to join our growing tech team! You’ll be responsible for designing and developing robust web scraping solutions that help fuel powerful travel insights and automation across global platforms.

🔧 What You’ll Do:
Build and maintain scalable web scraping tools and crawlers
Extract data from travel websites, APIs, and unstructured sources
Clean, structure, and store scraped data in usable formats (JSON, CSV, DB)
Monitor scraping pipelines and troubleshoot issues (captcha, IP blocks, etc.)
Work with developers and data teams to integrate scraped data into products

✅ What We’re Looking For:
2–4 years of experience in web scraping / data extraction
Strong Python skills (Scrapy, BeautifulSoup, Selenium, etc.)
Understanding of HTTP, headers, sessions, proxies, and anti-bot mechanisms
Familiarity with cloud deployment (AWS/GCP) and database storage
Bonus: experience with travel industry data or large-scale crawling systems

🚀 What You’ll Get:
Opportunity to work on high-impact travel tech projects
Flexible working hours (remote-friendly)
Collaborative team environment
Growth opportunities within a fast-moving company

👋 Think you're a fit? Drop us a message or send your resume to [connect@technoprovince.com] with the subject line: Data Scraping Engineer Application. Let’s build smart systems together.

#TechHiring #DataScraping #PythonJobs #TravelTech #JobOpening #HiringNow #TechnoProvince
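Given the posting's focus on Python scraping with custom headers and session handling, here is a minimal requests + BeautifulSoup sketch of polite page scraping; the URL, CSS selectors, and output fields are illustrative assumptions only.

```python
# Minimal scraping sketch: fetch a page with a reusable session and parse listings.
# URL, selectors, and field names are placeholders for illustration.
import json
import time

import requests
from bs4 import BeautifulSoup

HEADERS = {"User-Agent": "Mozilla/5.0 (compatible; DemoScraper/0.1)"}
session = requests.Session()
session.headers.update(HEADERS)


def scrape_listings(url: str) -> list[dict]:
    """Fetch one page and extract title/price pairs from hypothetical listing cards."""
    response = session.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    items = []
    for card in soup.select("div.listing-card"):        # hypothetical selector
        title = card.select_one("h2")
        price = card.select_one("span.price")
        if title and price:
            items.append({"title": title.get_text(strip=True),
                          "price": price.get_text(strip=True)})
    return items


if __name__ == "__main__":
    data = scrape_listings("https://example.com/hotels")
    time.sleep(1)   # be polite between requests; real crawlers add retries and proxies
    print(json.dumps(data, indent=2))
```

Production crawlers layer rotation of proxies, retry/backoff, and robots.txt checks on top of this basic fetch-and-parse loop.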
Posted 1 day ago
0.0 - 4.0 years
1 - 5 Lacs
Maharashtra
Work from Office
Responsibilities:
Manipulate and preprocess structured and unstructured data to prepare datasets for analysis and model training.
Utilize Python libraries like PyTorch, Pandas, and NumPy for data analysis, model development, and implementation.
Fine-tune large language models (LLMs) to meet specific use cases and enterprise requirements.
Collaborate with cross-functional teams to experiment with AI/ML models and iterate quickly on prototypes.
Optimize workflows to ensure fast experimentation and deployment of models to production environments.
Implement containerization and basic Docker workflows to streamline deployment processes.
Write clean, efficient, and production-ready Python code for scalable AI solutions.

Good to Have:
Exposure to cloud platforms like AWS, Azure, or GCP.
Knowledge of MLOps principles and tools.
Basic understanding of enterprise Knowledge Management Systems.
Ability to work against tight deadlines.
Ability to work on unstructured projects independently.
Strong initiative and self-motivation.
Strong communication and collaboration acumen.

Required Skills:
Proficiency in Python with strong skills in libraries like PyTorch, Pandas, and NumPy.
Experience in handling both structured and unstructured datasets.
Familiarity with fine-tuning LLMs and understanding of modern NLP techniques.
Basics of Docker and containerization principles.
Demonstrated ability to experiment, iterate, and deploy code rapidly in a production setting.
Strong problem-solving mindset with attention to detail.
Ability to learn and adapt quickly in a fast-paced, dynamic environment.

What we Offer:
Opportunity to work on cutting-edge AI technologies and impactful projects.
A collaborative and growth-oriented work environment.
Competitive compensation and benefits package.
A chance to be a part of a team shaping the future of enterprise intelligence.
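Since the role emphasizes preparing datasets with Pandas/NumPy and feeding them into PyTorch, below is a minimal preprocessing sketch; the file path, column names, and normalization choice are illustrative assumptions.

```python
# Minimal preprocessing sketch: clean a tabular dataset with pandas and
# convert it to PyTorch tensors. File path and column names are placeholders.
import numpy as np
import pandas as pd
import torch

df = pd.read_csv("data/customers.csv")                   # hypothetical input file

# Drop rows missing the label; fill numeric gaps with column medians.
df = df.dropna(subset=["churned"])
numeric_cols = ["age", "tenure_months", "monthly_spend"]
df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].median())

# Standardize numeric features (zero mean, unit variance).
features = df[numeric_cols].to_numpy(dtype=np.float32)
features = (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-8)

# Convert to tensors ready for a DataLoader / training loop.
X = torch.from_numpy(features)
y = torch.tensor(df["churned"].to_numpy(dtype=np.float32))

print(X.shape, y.shape)
```

The same tensors would normally be wrapped in a `torch.utils.data.TensorDataset` and `DataLoader` before training.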
Posted 1 day ago
0.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At eBay, we're more than a global ecommerce leader — we’re changing the way the world shops and sells. Our platform empowers millions of buyers and sellers in more than 190 markets around the world. We’re committed to pushing boundaries and leaving our mark as we reinvent the future of ecommerce for enthusiasts. Our customers are our compass, authenticity thrives, bold ideas are welcome, and everyone can bring their unique selves to work — every day. We're in this together, sustaining the future of our customers, our company, and our planet. Join a team of passionate thinkers, innovators, and dreamers — and help us connect people and build communities to create economic opportunity for all. Team Overview Join the Marketing Technologies Platform Team that powers billions of communications per day sent to customers across the world. This team plays a pivotal role in delivering personalized and timely customer engagement experiences across eBay's global user base. Role Overview We are seeking an enthusiastic and motivated Junior Entry Level ReactJS/NodeJS Front End Engineer to join our dynamic engineering team. This is an excellent opportunity for a recent graduate or an early-career professional to kickstart their career in web development. In this role, you will work under the guidance of senior developers, contributing to the development, testing, and maintenance of our ReactJS-based user interfaces and supporting Node.js backend services. You will gain hands-on experience with modern web technologies and best practices in a collaborative and supportive environment. Key Responsibilities Code Development: Write, test, and debug clean and efficient ReactJS components and Node.js code for new features and enhancements under supervision. Bug Fixing: Assist in identifying and resolving software defects and issues. Testing: Participate in unit testing and support integration testing to ensure code quality. Learning & Growth: Actively learn new technologies, tools, and best practices from senior team members and through self-study. Collaboration: Work closely with team members, including senior developers, UX/UI designers, and QA, to understand requirements and contribute to project goals. Documentation: Assist in creating and maintaining technical documentation. Code Review Participation: Learn from and contribute to code reviews. Support: Provide support for existing applications as needed. Qualifications Required Education: Bachelor's degree in Computer Science, Software Engineering, or a related technical field. Experience: 0-2 years of professional experience in web development, or strong academic project experience with ReactJS and Node.js. ReactJS Fundamentals: Foundational understanding of ReactJS concepts, including components, props, state, and basic hooks. Node.js Basics: Basic understanding of Node.js for server-side JavaScript, including basic API creation or consumption. Modern JavaScript/TypeScript: Solid understanding of modern JavaScript (ES6+) and familiarity with TypeScript. Web Technologies: Basic understanding of HTML5, CSS3, and responsive web design principles. API Interaction: Basic understanding of how to consume RESTful APIs. AI Code Generation: Familiarity with foundational AI concepts and practical experience applying AI-powered coding generation (e.g., OpenAI Codex, GitHub Copilot, Anthropic Claude, Cursor, Windsurf or understanding of transformer-based code generation) will be a significant asset. Testing: Familiarity with unit testing concepts. 
Version Control: Familiarity with Git. Problem Solving: Eagerness to learn and a basic aptitude for problem-solving. Communication: Good verbal and written communication skills and a willingness to ask questions. Preferred Exposure to state management libraries (e.g., Redux, Zustand) through coursework or personal projects. Familiarity with any build tool (e.g., Webpack, Vite). Any exposure to cloud platforms (e.g., AWS, Azure, GCP) or containerization (Docker). Understanding of Agile/Scrum methodologies. Basic understanding of performance optimization concepts for web applications. Please see the Talent Privacy Notice for information regarding how eBay handles your personal data collected when you use the eBay Careers website or apply for a job with eBay. eBay is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, veteran status, and disability, or other legally protected status. If you have a need that requires accommodation, please contact us at talent@ebay.com. We will make every effort to respond to your request for accommodation as soon as possible. View our accessibility statement to learn more about eBay's commitment to ensuring digital accessibility for people with disabilities.
Posted 1 day ago
3.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Overview Medallia is the pioneer and market leader in Experience Management. Our award-winning SaaS platform, Medallia Experience Cloud, leads the market in the management of experiences, insights, and actions for candidates, customers, employees, patients, and residents alike. We believe that every experience is a memory that can last a lifetime. Experiences shape the way people feel about a company. And they greatly influence how likely people are to advocate, contribute, and stay. At Medallia, we are committed to creating a world where organizations are loved by their customers and their employees. We empower exceptional people to create extraordinary experiences together. Bring your whole self. The Role and Team The Site Reliability Engineering organization at Medallia brings together the infrastructure and applications that power a highly reliable global SaaS platform. In particular, Application SREs own the reliability of different products and their infrastructure stack at Medallia, and ensure that they continue to scale with our rapidly-growing business. We are constantly growing our footprint to meet and exceed the demands in multiple geographical regions. Most of our applications work in K8s environments and we host them in Medallia Cloud, OCI, AWS, GCP and Azure. Our team is built of true professionals that leverage all benefits of SRE approaches. Engineers can build their careers and increase their professional weight with full support of Medallia. We are currently looking for a team player who has a passion for technological challenges and a high desire to learn, who embraces a dynamic environment, and who will help us scale out our existing infrastructure, tend to incidents, and deploy new cutting-edge tools. Please note, this role might require being on a rotating on-call shift which includes being available during evenings, weekends and holidays when scheduled. This role is based remotely in Pune. Candidates for this position are required to reside within the Pune metropolitan area. Relocation support is not available at this time. Responsibilities Educate application and infrastructure management about SRE approaches. Collaborate with product-engineering teams, build strong relationships and be ready to solve complex challenges together. Ensure applications and their infrastructure are updated and released at a defined pace. Build monitoring, automation and tooling around applications and related standard procedures, eliminate manual work. Troubleshoot complex problems that may span the full service stack. Ensure SLAs, proactively monitor and manage the availability of infrastructure and applications. Optimize performance of components across the full service. Be a part of the SRE team on-call rotation for escalations. Qualifications Minimum Qualifications 3+ years of experience with Site Reliability Engineering and/or related software development roles. Experience with: Building, configuring, and maintaining operational monitoring and reporting tools. Operations in on-premises and cloud environments. Incident management and change management. Complex information security concepts Demonstrated knowledge of: Linux OS and fundamental technologies like networking, DNS, Mail, IP filtering, etc. 
Scripting languages (Python, Bash, Groovy, Go, etc) Traditional web stack (frontend, API, application backend, caches, databases) Asynchronous and reliable application design (message queues, DB replicas, load balancing, auto-scaling, etc) Kubernetes deployments Release approaches (roll-out, canary, blue/green, etc) Preferred Qualifications Strong communication skills. Experience with: Infrastructure as Code tools (Ansible, Terraform, CloudFormation, etc) Relational DB’s such as: PostgreSQL NoSQL DB such as: Redis, MongoDB, Cassandra, BigQuery Messaging/Stream processing platform such as: Kafka CI/CD tools such as: Jenkins, ArgoCD AWS (EC2, S3, RDS, etc…) Jenkins pipelines Background working in heavily regulated industries such as banking, finance, or healthcare. At Medallia, we celebrate diversity and recognize the value it brings to our customers and employees. Medallia is proud to be an equal opportunity workplace and is an affirmative action employer. All qualified applicants will receive consideration for employment without regard to age, race, color, religion, sex, sexual orientation, gender identity, national origin, genetic information, disability, veteran status, or any other applicable status protected by state or local law. Individuals with a disability who need an accommodation to apply please contact us at ApplicantAccessibility@medallia.com. For information regarding how Medallia collects and uses personal information, please review our Privacy Policies
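The SRE responsibilities above include building monitoring and tooling around applications. A minimal Python sketch of an HTTP health probe exposed as Prometheus metrics is shown below; the target URL, port, and metric names are illustrative assumptions.

```python
# Minimal health-probe exporter: checks an endpoint and exposes Prometheus metrics.
# Target URL, port, and metric names are placeholders.
import time

import requests
from prometheus_client import Gauge, start_http_server

TARGET_URL = "https://example.internal/healthz"   # hypothetical endpoint

up = Gauge("probe_up", "1 if the target responded with HTTP 200, else 0")
latency = Gauge("probe_latency_seconds", "Time taken by the last probe")


def probe() -> None:
    start = time.monotonic()
    try:
        response = requests.get(TARGET_URL, timeout=5)
        up.set(1 if response.status_code == 200 else 0)
    except requests.RequestException:
        up.set(0)
    finally:
        latency.set(time.monotonic() - start)


if __name__ == "__main__":
    start_http_server(9100)        # metrics served at :9100/metrics
    while True:
        probe()
        time.sleep(30)             # probe interval
```

A Prometheus scrape job pointed at port 9100 plus an alert rule on `probe_up == 0` turns this into a basic availability/SLA signal.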
Posted 1 day ago
10.0 years
14 - 20 Lacs
Thiruvananthapuram, Kerala, India
On-site
Industry & Sector A leading enterprise software services provider delivering end-to-end digital transformation and SaaS solutions across finance, healthcare, and telecommunications. You’ll join a high-velocity engineering team building cloud-native, data-driven web applications that power mission-critical business workflows. Role & Responsibilities Design & Build Full-Stack Solutions: Architect and implement scalable web applications using Core Java, Spring Boot, and Angular 2+ within a microservices framework. API Development & Integration: Develop RESTful APIs, integrate third-party services, and optimize data access layers with Hibernate/JPA. Database Management: Model, tune, and maintain relational schemas using PostgreSQL, SQL Server or MSSQL; author efficient SQL and stored procedures. UI/UX Engineering: Create responsive, accessible user interfaces leveraging Angular components, HTML5, and CSS best practices. Collaboration & Delivery: Participate in Agile ceremonies, manage sprints via JIRA, conduct code reviews, and ensure CI/CD pipeline health. Stakeholder Communication: Lead a daily client sync (7:00 PM–9:30 PM IST), gather requirements, demo features, and incorporate feedback into iterative releases. Must-Have Skills & Qualifications 7–10 years’ professional experience in full-stack development with strong expertise in Core Java and Spring Boot. Proven ability with Angular 2+ (Angular 14+ preferred), TypeScript, and modern front-end tooling. Hands-on experience with relational databases (PostgreSQL, SQL Server/MSSQL) and ORM frameworks (Hibernate/JPA). Solid understanding of MVC architecture, REST principles, and security best practices. Familiarity with build tools (Maven/Gradle), version control (Git), and containerization (Docker). Excellent verbal and written communication, with ability to engage clients and translate technical concepts. Preferred Experience in high-availability, cloud-deployed environments (AWS, Azure, or GCP). Background in performance tuning, caching strategies, and event-driven architectures. Prior work in regulated industries (finance, healthcare) or large-scale enterprise deployments. Skills: gradle,css,jpa,sql server,angular,html5,maven,java,git,mssql,core java,typescript,spring boot,angular 2+,hibernate,postgresql,docker
Posted 1 day ago
3.0 years
12 - 18 Lacs
Pune, Maharashtra, India
On-site
Company Overview Copods is an experience focussed, digital product design and fullstack engineering services company. We are on a steep growth trajectory with global partnerships. Our goal is to shape practical and meaningful human-centric experiences that are desirable, feasible, and viable. Role and Responsibilities Frontend Develop robust and maintainable web applications using modern JavaScript frameworks like Angular, React, Vue, Svelte, Solid JS, etc. Ensure pixel-perfect UI/UX implementation and contribute to the design process. Optimize applications for performance and scalability. Backend Build scalable and secure APIs and backend services using technologies like Node JS, Django, Rust, Go Lang, etc. Develop well-designed databases and data models. Implement efficient algorithms and data structures. Collaboration and Communication Work closely with product managers, designers, and other engineers to gather requirements and scope out projects. Provide technical leadership and mentorship to junior developers. Ensure excellent communication with team members and stakeholders. Minimum Qualifications Minimum of 3 years of professional experience in full-stack development. Proficiency in at least one modern JavaScript framework (Angular, React, Vue, Svelte, Solid JS, etc.) Experience in at least one backend technology (Node JS, Django, Rust, Go Lang, etc.) A keen eye for pixel-perfect UI and a strong understanding of UI/UX principles. Familiarity with containerization technologies like Docker. Excellent communication skills, both verbal and written. Good to have Experience in DevOps and CI/CD pipelines. Experience with cloud platforms like AWS, Azure, or GCP. Experience in Rust Strong knowledge of web accessibility and internationalization. Benefits Competitive Salary Flexible Work Hours Career Development Opportunities Skills: angular,devops and ci/cd pipelines,internationalization,rust,apis and backend services,cloud platforms (aws, azure, gcp),nodejs,containerization technologies (docker),modern javascript frameworks (angular, react, vue, svelte, solid js),ui/ux principles,web accessibility,algorithms and data structures,database design,backend technologies (node js, django, rust, go lang)
Posted 1 day ago
5.0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
Company Description ACC is an AWS Advance Partner with AWS Mobility Competency. Awarded The Best BFSI industry Consulting Partner for the year 2019, ACC has had several successful cloud migration and application development projects to its credit. Our business offerings include Digitalization, Cloud Services, Product Engineering, Big Data and analytics and Cloud Security. ACC has developed several products to its credit. These include Ottohm – Enterprise Video and OTT Platform, Atlas API – API Management and Development Platform, Atlas CLM – Cloud Life Cycle Management, Atlas HCM – HR Digital Onboarding and Employee Management, Atlas ITSM – Vendor Onboarding and Service Management and Smart Contracts – Contract Automation and Management. Website - http://www.appliedcloudcomputing.com/ Job Description Experience: 5+ years overall in Cloud Operations, including: Minimum 5 years of hands-on experience with Google Cloud Platform (GCP) Minimum 3 years of experience in Kubernetes administration Certifications: ✅ GCP Certified Professional – Mandatory Work Hours: 24x7 support coverage Rotational shifts (including night and weekend shifts) Key Responsibilities Manage and monitor GCP infrastructure resources, ensuring optimal performance, availability, and security. Administer Kubernetes clusters: deployment, scaling, upgrades, patching, and troubleshooting. Implement and maintain automation for provisioning, scaling, and monitoring using tools like Terraform, Helm, or similar. Respond to incidents, perform root cause analysis, and drive issue resolution within SLAs. Configure logging, monitoring, and alerting solutions across GCP and Kubernetes environments. Support CI/CD pipelines and integrate Kubernetes deployments with DevOps processes. Maintain detailed documentation of processes, configurations, and runbooks. Work collaboratively with Development, Security, and Architecture teams to ensure compliance and best practices. Participate in an on-call rotation and respond promptly to critical alerts. Required Skills & Qualifications GCP Certified Professional (Cloud Architect, Cloud Engineer, or equivalent). Strong working knowledge of GCP services (Compute Engine, GKE, Cloud Storage, IAM, VPC, Cloud Monitoring, etc.). Solid experience in Kubernetes cluster administration (setup, scaling, upgrades, security hardening). Proficiency with Infrastructure as Code tools (Terraform, Deployment Manager). Knowledge of containerization concepts and tools (Docker). Experience in monitoring and observability (Prometheus, Grafana, Stackdriver). Familiarity with incident management and ITIL processes. Ability to work in 24x7 operations with rotating shifts. Strong troubleshooting and problem-solving skills. Preferred Skills (Nice To Have) Experience supporting multi-cloud environments. Scripting skills (Python, Bash, Go). Exposure to other cloud platforms (AWS, Azure). Familiarity with security controls and compliance frameworks.
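For the GCP/Kubernetes administration duties described above (monitoring cluster health, troubleshooting workloads), here is a minimal sketch using the official Kubernetes Python client to list pods that are not in a Running or Succeeded state; cluster access via a local kubeconfig is assumed.

```python
# Minimal Kubernetes health-check sketch: list pods that are not Running/Succeeded.
# Assumes a valid kubeconfig (e.g., from `gcloud container clusters get-credentials`).
from kubernetes import client, config

config.load_kube_config()          # inside a pod, use config.load_incluster_config()
v1 = client.CoreV1Api()

HEALTHY_PHASES = {"Running", "Succeeded"}

pods = v1.list_pod_for_all_namespaces(watch=False)
for pod in pods.items:
    phase = pod.status.phase
    if phase not in HEALTHY_PHASES:
        restarts = sum(cs.restart_count for cs in (pod.status.container_statuses or []))
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}, restarts={restarts}")
```

Run on a schedule (or as part of on-call tooling), a report like this is a quick first pass before digging into `kubectl describe` and logs.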
Posted 1 day ago
3.0 years
35 - 55 Lacs
Noida, Uttar Pradesh, India
Remote
Experience: 3.00+ years
Salary: INR 3500000-5500000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Javelin)
(*Note: This is a requirement for one of Uplers' clients - Javelin)

What do you need for this opportunity?
Must-have skills required: Next.js, React, LLM, RESTful APIs, Rust, AWS, Golang, Python, SQL, TypeScript, Vue

Javelin is looking for:
Welcome to Javelin, a cutting-edge AI production platform designed for LLM-forward enterprises. It enables enterprises to leverage AI technology securely and reliably. Large language models (LLMs) are powerful tools that offer a wide range of potential applications to add value to businesses. However, making these LLMs accessible to various teams and individuals in an organization presents security, cost management, and data handling challenges, including data leaks and intellectual property and PII/PHI risks. Javelin is a highly performant, ultra-low-latency gateway written in Golang that serves as a centralized gateway for LLM interactions across the enterprise to address these challenges. It enables organizations to manage access to LLMs effectively and make them available for experimentation and production use cases while ensuring robust security and compliance. Javelin offers several features, including centralized management of LLM credentials and a simple routing framework to integrate with various closed and open-source LLMs. With Javelin, organizations can secure LLMs from development to production.

Job Description
Function: Software Engineering → Full-Stack Development
Skills: Golang, JavaScript, Python, Flask, Node.js, React.js

Javelin is building an AI Security Platform for LLMs. We're seeking an experienced full-stack engineer to join our startup. As a critical member of our small, fast-paced team, you will design, implement, and maintain our Go-based and Python APIs and infrastructure. We are a remote-first organization. World-class investors like Aspenwood and Mozilla Ventures fund Javelin.

Responsibilities:
Architect, develop, and optimize Go- and Python-based APIs and services.
Collaborate with the team to design and evolve our system architecture.
Implement best practices for code quality, testing, and documentation.
Integrate with AWS and GCP services, Postgres databases, and Kubernetes deployments.
Contribute to the entire development lifecycle, from ideation to deployment and maintenance.
Mentor and guide junior team members.

Requirements:
3+ years of professional experience in Go or Python development, OR 7+ years of professional experience in other languages.
Strong understanding of Go or Python best practices, concurrency patterns, and performance optimization.
Expertise in designing and building RESTful APIs and microservices.
Experience owning the full development cycle of a project from inception to production.
Proficiency in SQL and working with relational databases.
Contributions to open-source projects, particularly LLM-related.
Experience building security tools or products.
Experience with Rust or TypeScript.
Experience building LLM technologies and their applications.
Experience building production web applications using a modern framework such as React, Next.js, Vue, or Svelte.

Interview Rounds:
R1: Tech screen round
R2: Project/take-home assignment
R3: Peer coding discussion
R4: Design discussion
R5: Discussion with CEO

How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 day ago
3.0 years
35 - 55 Lacs
Agra, Uttar Pradesh, India
Remote
Experience: 3.00+ years
Salary: INR 3500000-5500000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Javelin)
(*Note: This is a requirement for one of Uplers' clients - Javelin)

What do you need for this opportunity?
Must-have skills required: Next.js, React, LLM, RESTful APIs, Rust, AWS, Golang, Python, SQL, TypeScript, Vue

Javelin is looking for:
Welcome to Javelin, a cutting-edge AI production platform designed for LLM-forward enterprises. It enables enterprises to leverage AI technology securely and reliably. Large language models (LLMs) are powerful tools that offer a wide range of potential applications to add value to businesses. However, making these LLMs accessible to various teams and individuals in an organization presents security, cost management, and data handling challenges, including data leaks and intellectual property and PII/PHI risks. Javelin is a highly performant, ultra-low-latency gateway written in Golang that serves as a centralized gateway for LLM interactions across the enterprise to address these challenges. It enables organizations to manage access to LLMs effectively and make them available for experimentation and production use cases while ensuring robust security and compliance. Javelin offers several features, including centralized management of LLM credentials and a simple routing framework to integrate with various closed and open-source LLMs. With Javelin, organizations can secure LLMs from development to production.

Job Description
Function: Software Engineering → Full-Stack Development
Skills: Golang, JavaScript, Python, Flask, Node.js, React.js

Javelin is building an AI Security Platform for LLMs. We're seeking an experienced full-stack engineer to join our startup. As a critical member of our small, fast-paced team, you will design, implement, and maintain our Go-based and Python APIs and infrastructure. We are a remote-first organization. World-class investors like Aspenwood and Mozilla Ventures fund Javelin.

Responsibilities:
Architect, develop, and optimize Go- and Python-based APIs and services.
Collaborate with the team to design and evolve our system architecture.
Implement best practices for code quality, testing, and documentation.
Integrate with AWS and GCP services, Postgres databases, and Kubernetes deployments.
Contribute to the entire development lifecycle, from ideation to deployment and maintenance.
Mentor and guide junior team members.

Requirements:
3+ years of professional experience in Go or Python development, OR 7+ years of professional experience in other languages.
Strong understanding of Go or Python best practices, concurrency patterns, and performance optimization.
Expertise in designing and building RESTful APIs and microservices.
Experience owning the full development cycle of a project from inception to production.
Proficiency in SQL and working with relational databases.
Contributions to open-source projects, particularly LLM-related.
Experience building security tools or products.
Experience with Rust or TypeScript.
Experience building LLM technologies and their applications.
Experience building production web applications using a modern framework such as React, Next.js, Vue, or Svelte.

Interview Rounds:
R1: Tech screen round
R2: Project/take-home assignment
R3: Peer coding discussion
R4: Design discussion
R5: Discussion with CEO

How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 day ago
3.0 years
35 - 55 Lacs
Ghaziabad, Uttar Pradesh, India
Remote
Experience: 3.00+ years
Salary: INR 3500000-5500000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Javelin)
(*Note: This is a requirement for one of Uplers' clients - Javelin)

What do you need for this opportunity?
Must-have skills required: Next.js, React, LLM, RESTful APIs, Rust, AWS, Golang, Python, SQL, TypeScript, Vue

Javelin is looking for:
Welcome to Javelin, a cutting-edge AI production platform designed for LLM-forward enterprises. It enables enterprises to leverage AI technology securely and reliably. Large language models (LLMs) are powerful tools that offer a wide range of potential applications to add value to businesses. However, making these LLMs accessible to various teams and individuals in an organization presents security, cost management, and data handling challenges, including data leaks and intellectual property and PII/PHI risks. Javelin is a highly performant, ultra-low-latency gateway written in Golang that serves as a centralized gateway for LLM interactions across the enterprise to address these challenges. It enables organizations to manage access to LLMs effectively and make them available for experimentation and production use cases while ensuring robust security and compliance. Javelin offers several features, including centralized management of LLM credentials and a simple routing framework to integrate with various closed and open-source LLMs. With Javelin, organizations can secure LLMs from development to production.

Job Description
Function: Software Engineering → Full-Stack Development
Skills: Golang, JavaScript, Python, Flask, Node.js, React.js

Javelin is building an AI Security Platform for LLMs. We're seeking an experienced full-stack engineer to join our startup. As a critical member of our small, fast-paced team, you will design, implement, and maintain our Go-based and Python APIs and infrastructure. We are a remote-first organization. World-class investors like Aspenwood and Mozilla Ventures fund Javelin.

Responsibilities:
Architect, develop, and optimize Go- and Python-based APIs and services.
Collaborate with the team to design and evolve our system architecture.
Implement best practices for code quality, testing, and documentation.
Integrate with AWS and GCP services, Postgres databases, and Kubernetes deployments.
Contribute to the entire development lifecycle, from ideation to deployment and maintenance.
Mentor and guide junior team members.

Requirements:
3+ years of professional experience in Go or Python development, OR 7+ years of professional experience in other languages.
Strong understanding of Go or Python best practices, concurrency patterns, and performance optimization.
Expertise in designing and building RESTful APIs and microservices.
Experience owning the full development cycle of a project from inception to production.
Proficiency in SQL and working with relational databases.
Contributions to open-source projects, particularly LLM-related.
Experience building security tools or products.
Experience with Rust or TypeScript.
Experience building LLM technologies and their applications.
Experience building production web applications using a modern framework such as React, Next.js, Vue, or Svelte.

Interview Rounds:
R1: Tech screen round
R2: Project/take-home assignment
R3: Peer coding discussion
R4: Design discussion
R5: Discussion with CEO

How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 day ago
7.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Yubi, formerly known as CredAvenue, is re-defining global debt markets by freeing the flow of finance between borrowers, lenders, and investors. We are the world's possibility platform for the discovery, investment, fulfillment, and collection of any debt solution. At Yubi, opportunities are plenty and we equip you with tools to seize it. In March 2022, we became India's fastest fintech and most impactful startup to join the unicorn club with a Series B fundraising round of $137 million. In 2020, we began our journey with a vision of transforming and deepening the global institutional debt market through technology. Our two-sided debt marketplace helps institutional and HNI investors find the widest network of corporate borrowers and debt products on one side and helps corporates to discover investors and access debt capital efficiently on the other side. Switching between platforms is easy, which means investors can lend, invest and trade bonds - all in one place. All of our platforms shake up the traditional debt ecosystem and offer new ways of digital finance. Yubi Credit Marketplace - With the largest selection of lenders on one platform, our credit marketplace helps enterprises partner with lenders of their choice for any and all capital requirements. Yubi Invest - Fixed income securities platform for wealth managers & financial advisors to channel client investments in fixed income Financial Services Platform - Designed for financial institutions to manage co-lending partnerships & asset based securitization Spocto - Debt recovery & risk mitigation platform Corpository - Dedicated SaaS solutions platform powered by Decision-grade data, Analytics, Pattern Identifications, Early Warning Signals and Predictions to Lenders, Investors and Business Enterprises So far, we have on-boarded over 17000+ enterprises, 6200+ investors & lenders and have facilitated debt volumes of over INR 1,40,000 crore. Backed by marquee investors like Insight Partners, B Capital Group, Dragoneer, Sequoia Capital, LightSpeed and Lightrock, we are the only-of-its-kind debt platform globally, revolutionizing the segment. At Yubi, People are at the core of the business and our most valuable assets. Yubi is constantly growing, with 1000+ like-minded individuals today, who are changing the way people perceive debt. We are a fun bunch who are highly motivated and driven to create a purposeful impact. Come, join the club to be a part of our epic growth story. Role and Responsibilities Developing a revolutionary finance marketplace product that includes design, user experience, and business logic to ensure the product is easy to use, appealing, and effective. Lead multiple high-performance engineering teams, defining and ensuring adherence to processes. Work closely with the Product Manager and Designer to ideate the product build. Coordinate with Architects to ensure tech alignment Participate in code and design reviews, establishing best software design and development practices. Mentor junior engineers and foster innovation within the team. Design and develop the pod’s software components and systems. Evaluate and recommend tools, technologies, and processes, driving adoption to ensure high-quality products. Participate in technical hiring activities to attract top talent. Requirements Minimum 7+ years of experience in full stack development, delivering enterprise-class web and mobile applications and services. Expertise in Java technologies including Spring, Hibernate, and Kafka. 
Proven experience in designing scalable applications capable of handling millions of transactions.
Strong knowledge of NoSQL and RDBMS, with expertise in schema design and handling large volumes of data.
Experience with Kubernetes deployment and managing CI/CD pipelines.
Ability to function effectively in a fast-paced environment and manage continuously changing business needs.
A strong advocate of code craftsmanship, adhering to good coding standards and utilising tools to improve code quality.
Experience with microservices architecture and RESTful APIs.
Familiarity with monitoring and logging tools (Prometheus, Grafana, ELK stack).
Competent in software engineering tools (e.g., Java build tools) and best practices (e.g., unit testing, test automation, continuous integration).
Experience with AWS and GCP cloud technologies and with developing secure applications.
Proven experience in leading engineering teams and managing projects.
Strong understanding of the software development lifecycle and agile methodologies.
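As an illustration of the event-streaming and observability stack named in this posting, here is a minimal, hedged sketch (not part of the original listing) of a Python consumer built on the kafka-python and prometheus_client libraries. The broker address, topic name, consumer group, and metric name are assumptions chosen for the example.

```python
import json

from kafka import KafkaConsumer                            # pip install kafka-python
from prometheus_client import Counter, start_http_server   # pip install prometheus-client

# Hypothetical topic and broker address; adjust to the real environment.
TOPIC = "loan-events"
BROKERS = ["localhost:9092"]

EVENTS_PROCESSED = Counter(
    "loan_events_processed_total",
    "Number of loan events consumed from Kafka",
)

def main() -> None:
    # Expose /metrics on port 8000 for Prometheus to scrape.
    start_http_server(8000)

    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers=BROKERS,
        group_id="loan-events-consumer",
        auto_offset_reset="earliest",
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )
    for message in consumer:
        event = message.value
        # Business logic would go here; this sketch only counts and logs the event.
        EVENTS_PROCESSED.inc()
        print(f"partition={message.partition} offset={message.offset} event={event}")

if __name__ == "__main__":
    main()
```

Counting processed events in a Prometheus counter is a common pattern for wiring Kafka consumers into Grafana dashboards, which is the kind of monitoring the posting references.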
Posted 1 day ago
0.0 - 1.0 years
0 - 0 Lacs
Science City, Ahmedabad, Gujarat
On-site
Job Description:
The DevOps Engineer is responsible for supporting deployment pipelines, managing cloud and server infrastructure, and ensuring smooth and secure delivery of applications. The role involves close collaboration with development teams to automate processes, deploy applications, and maintain systems in both cloud and on-premises environments.
Key Responsibilities:
Assist in building and maintaining CI/CD pipelines for automated deployments.
Provision and manage servers on AWS, GCP, Linode, and cPanel environments.
Deploy and configure applications written in Node.js, Next.js, Laravel, Angular, React.js, and WordPress.
Manage DNS settings, configure SMTP, and set up SSL certificates (including Let's Encrypt).
Monitor and maintain server security, including managing Security Groups, firewall rules, and user access.
Support web servers such as Apache and Nginx on Linux (Ubuntu) systems.
Collaborate with developers to troubleshoot deployment issues and improve automation workflows.
Use version control systems like Git, with basic knowledge of GitHub Actions for automating tasks.
Maintain documentation for deployment processes, server configurations, and best practices.
Required Skills and Qualifications:
1–2 years of experience in DevOps or system administration.
Experience with cloud platforms like AWS, GCP, and Linode.
Hands-on experience deploying web applications across various languages and frameworks.
Basic understanding of Git and CI/CD tools (e.g., Bitbucket Pipelines, GitHub Actions).
Knowledge of DNS management, SMTP configuration, and SSL/TLS setup.
Understanding of Linux server administration, including security groups and firewall configuration.
Good problem-solving skills and ability to work collaboratively with development teams.
Optional / Nice to Have:
Experience with Docker or other containerization tools.
Basic scripting knowledge (e.g., Bash).
Exposure to monitoring tools.
Job Type: Full-time
Pay: ₹30,000.00 - ₹35,000.00 per month
Ability to commute/relocate: Science City, Ahmedabad, Gujarat: Reliably commute or planning to relocate before starting work (Preferred)
Experience: DevOps: 1 year (Required); AWS: 1 year (Required)
Work Location: In person
Speak with the employer: +91 8320416203
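Several of the duties above (SSL/TLS setup, monitoring, automation) lend themselves to small scripts. The following is a minimal sketch, not part of the posting, of a Python check that reports how many days remain on a site's TLS certificate; the domain list is hypothetical and outbound access on port 443 is assumed.

```python
import socket
import ssl
from datetime import datetime, timezone

# Hypothetical domains to check; replace with the sites you actually manage.
DOMAINS = ["example.com", "www.example.org"]

def cert_days_remaining(host: str, port: int = 443, timeout: float = 5.0) -> int:
    """Return the number of days until the TLS certificate for `host` expires."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # Convert the certificate's 'notAfter' field to a Unix timestamp.
    expires_ts = ssl.cert_time_to_seconds(cert["notAfter"])
    return int((expires_ts - datetime.now(timezone.utc).timestamp()) // 86400)

if __name__ == "__main__":
    for domain in DOMAINS:
        try:
            days = cert_days_remaining(domain)
            flag = "RENEW SOON" if days < 30 else "OK"
            print(f"{domain}: {days} days left [{flag}]")
        except (OSError, ssl.SSLError) as exc:
            print(f"{domain}: check failed ({exc})")
```

Run on a schedule (cron or a CI job), a check like this gives early warning before a Let's Encrypt renewal silently fails.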
Posted 1 day ago
10.0 - 14.0 years
25 - 30 Lacs
Pune
Work from Office
We are seeking a highly experienced Principal Solution Architect to lead the design, development, and implementation of sophisticated cloud-based data solutions for our key clients. The ideal candidate will possess deep technical expertise across multiple cloud platforms (AWS, Azure, GCP), data architecture paradigms, and modern data technologies. You will be instrumental in shaping data strategies, driving innovation in areas like GenAI and LLMs, and ensuring the successful delivery of complex data projects across various industries.
Key Responsibilities:
Solution Design & Architecture: Lead the architecture and design of robust, scalable, and secure enterprise-grade data solutions, including data lakes, data warehouses, data mesh, and real-time data pipelines on AWS, Azure, and GCP.
Client Engagement & Pre-Sales: Collaborate closely with clients to understand their business challenges, translate requirements into technical solutions, and present compelling data strategies. Support pre-sales activities, including proposal development and solution demonstrations.
Data Strategy & Modernization: Drive data and analytics modernization initiatives, leveraging cloud-native services, Big Data technologies, GenAI, and LLMs to deliver transformative business value.
Industry Expertise: Apply data architecture best practices across various industries (e.g., BFSI, Retail, Supply Chain, Manufacturing).
Requirements
Required Qualifications & Skills:
Experience: 10+ years of experience in IT, with a significant focus on data architecture, solution architecture, and data engineering. Proven experience in a principal-level or lead architect role.
Cloud Expertise: Deep, hands-on experience with major cloud platforms. Azure: Microsoft Fabric, Data Lake, Power BI, Data Factory, Azure Purview; good understanding of Azure Service Foundry, Agentic AI, and Copilot. GCP: BigQuery, Vertex AI, Gemini.
Data Science Leadership: Understanding of and experience in integrating AI/ML capabilities, including GenAI and LLMs, into data solutions.
Leadership & Communication: Exceptional communication, presentation, and interpersonal skills. Proven ability to lead technical teams and manage client relationships.
Problem-Solving: Strong analytical and problem-solving abilities with a strategic mindset.
Education: Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related field.
Preferred Qualifications:
Relevant certifications in AWS, Azure, GCP, Snowflake, or Databricks.
Experience with Agentic AI and hyper-intelligent automation.
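To make the GCP/BigQuery expectation concrete, here is a brief sketch (illustrative only, not from the posting) of querying a warehouse table with the google-cloud-bigquery client. The project, dataset, table, and column names are invented, and Application Default Credentials are assumed to be configured in the environment.

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

# Hypothetical project; credentials come from Application Default Credentials
# (e.g. `gcloud auth application-default login`).
client = bigquery.Client(project="my-analytics-project")

sql = """
    SELECT customer_segment,
           COUNT(*)         AS orders,
           SUM(order_value) AS revenue
    FROM `my-analytics-project.sales_lake.orders`
    WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
    GROUP BY customer_segment
    ORDER BY revenue DESC
"""

# Submit the query job and iterate over the result rows.
for row in client.query(sql).result():
    print(row["customer_segment"], row["orders"], row["revenue"])
```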
Posted 1 day ago
3.0 - 8.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
TCS HIRING !!!
Role: Data Scientist
Required Technical Skill Set: Data Science
Experience: 3-8 years
Locations: Kolkata, Hyderabad, Bangalore, Chennai, Pune
Job Description:
Must-Have:
Proficiency in Python or R for data analysis and modeling.
Strong understanding of machine learning algorithms (regression, classification, clustering, etc.).
Experience with SQL and working with relational databases.
Hands-on experience with data wrangling, feature engineering, and model evaluation techniques.
Experience with data visualization tools like Tableau, Power BI, or matplotlib/seaborn.
Strong understanding of statistics and probability.
Ability to translate business problems into analytical solutions.
Good-to-Have:
Experience with deep learning frameworks (TensorFlow, Keras, PyTorch).
Knowledge of big data platforms (Spark, Hadoop, Databricks).
Experience deploying models using MLflow, Docker, or cloud platforms (AWS, Azure, GCP).
Familiarity with NLP, computer vision, or time series forecasting.
Exposure to MLOps practices for model lifecycle management.
Understanding of data privacy and governance concepts.
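As a concrete illustration of the must-have skills (Python, classification, model evaluation), the sketch below shows a baseline scikit-learn workflow on a bundled dataset. It is illustrative only and not part of the TCS listing; the model and dataset choices are assumptions for the example.

```python
# Baseline classification workflow: scale features, fit a simple model, evaluate.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Bundled dataset used purely for demonstration.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# A pipeline keeps preprocessing and the estimator together, which makes
# cross-validation and later deployment (e.g. via MLflow) less error-prone.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

# Precision, recall, and F1 per class on the held-out test split.
print(classification_report(y_test, model.predict(X_test)))
```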
Posted 1 day ago
3.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
DevOps Engineer : Bangalore
Job Description
DevOps Engineer - Qilin Lab, Bangalore, India
Role:
We are seeking an experienced DevOps Engineer to deliver insights from massive-scale data in real time. Specifically, we're searching for someone who has fresh ideas and a unique viewpoint, and who enjoys collaborating with a cross-functional team to develop real-world solutions and positive user experiences for every user.
Responsibilities of this role:
Work with DevOps to run the production environment by monitoring availability and taking a holistic view of system health.
Build software and systems to manage our Data Platform infrastructure.
Improve reliability, quality, and time-to-market of our Global Data Platform.
Measure and optimize system performance and innovate for continual improvement.
Provide operational support and engineering for a distributed platform at scale.
Define, publish, and defend service-level objectives (SLOs).
Partner with data engineers to improve services through rigorous testing and release procedures.
Participate in system design, platform management, and capacity planning.
Create sustainable systems and services through automation and automated runbooks.
Take a proactive approach to identifying problems and seeking areas for improvement.
Mentor the team in infrastructure best practices.
Qualifications:
Bachelor's degree in Computer Science or an IT-related field, or equivalent practical experience with a proven track record.
The following hands-on working knowledge and experience is required:
Kubernetes, EC2, RDS, ELK Stack; cloud platforms (AWS, Azure, GCP), preferably AWS; building and operating clusters; related technologies such as Containers, Helm, Kustomize, ArgoCD.
Ability to program (structured and OOP) using at least one high-level language such as Python, Java, Go, etc.
Agile methodologies (Scrum, TDD, BDD, etc.).
Continuous Integration and Continuous Delivery tools (GitOps).
Terraform, Unix/Linux environments.
Experience with several of the following tools/technologies is desirable: Big Data platforms (e.g., Apache Hadoop and Apache Spark), streaming technologies (Kafka, Kinesis, etc.), ElasticSearch Service, mesh and orchestration technologies (e.g., Argo).
Knowledge of the following is a plus: security (OWASP, SIEM, etc.), infrastructure testing (Chaos, Load, Security), GitHub, microservices architectures.
Notice period: Immediate to 15 days
Experience: 3 to 5 years
Job Type: Full-time
Schedule: Day shift, Monday to Friday
Work Location: On Site
Job Type: Payroll
Must Have Skills:
Python - 3 Years - Intermediate
DevOps - 3 Years - Intermediate
AWS - 2 Years - Intermediate
Agile Methodology - 3 Years - Intermediate
Kubernetes - 3 Years - Intermediate
ElasticSearch - 3 Years - Intermediate
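The SLO responsibility above can be made concrete with a short, illustrative calculation (not from the posting): the target, window, and request counts below are made-up numbers used to show how an error budget and its consumption are typically derived.

```python
# Minimal sketch: error-budget accounting for a request-based availability SLO.
# All figures are hypothetical and chosen only for illustration.

SLO_TARGET = 0.999            # 99.9% of requests should succeed over the window
total_requests = 12_500_000   # requests observed in a 30-day window
failed_requests = 9_800       # requests that violated the SLO

error_budget = (1 - SLO_TARGET) * total_requests    # failures we can "afford"
budget_consumed = failed_requests / error_budget    # fraction of budget spent
availability = 1 - failed_requests / total_requests

print(f"Availability:    {availability:.5%}")
print(f"Error budget:    {error_budget:,.0f} requests")
print(f"Budget consumed: {budget_consumed:.1%}")
if budget_consumed > 1:
    print("SLO breached: pause risky releases and prioritise reliability work.")
```

Tracking budget consumption this way is what lets a team "defend" an SLO: release decisions are tied to how much of the budget remains rather than to gut feel.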
Posted 1 day ago