2.0 years
3 - 5 Lacs
India
On-site
AWS Cloud Engineer with 2 years of experience designing highly available, cost-efficient, fault-tolerant and scalable distributed systems on AWS, with exposure to AWS deployment and management services. Monitors deployments across environments, debugging and resolving deployment issues promptly to reduce downtime. Experienced with AWS Cloud and DevOps tools, including AWS infrastructure services such as IAM, VPC, EC2, EBS, S3, ALB, NACLs, Security Groups, Auto Scaling, RDS, SNS, EFS, CloudWatch and CloudFront.
Good hands-on experience with IaC tools such as Terraform and CloudFormation. Good experience with the source code management tools Git and GitHub and with source control concepts such as branching and merging. Good experience automating CI/CD pipelines using Jenkins. Good hands-on experience with configuration management tools such as Ansible. Experience creating custom Docker images from a Dockerfile and pushing them to Docker Hub. Set up Kubernetes clusters using EKS and kubeadm; wrote manifest files to create Deployments and Services for microservice applications; configured Persistent Volumes (PVs) and PVCs for persistent database environments; and managed Deployments, ReplicaSets, StatefulSets and autoscaling for Kubernetes clusters. Good experience with ELK for log aggregation and log monitoring.
Implemented, maintained and monitored alarms and notifications for AWS services using CloudWatch and SNS. Experienced in deploying and monitoring applications on various platforms and setting up lifecycle policies to back up data in AWS S3. Configured CloudWatch alarm rules for operational and performance metrics of AWS resources and applications. Provisioned AWS resources using the AWS Management Console and the Command Line Interface (CLI). Planned, built and configured network infrastructure within a VPC and its components. Responsible for implementing and supporting cloud-based infrastructure and solutions. Launched and configured EC2 instances using Linux AMIs. Created IAM users and policies for application access. Installed and configured the Apache web server on Windows and Linux. Set up CloudWatch alarms to monitor server performance, CPU utilization, disk usage, etc., and take recommended actions for better performance. Created and managed instance images, snapshots and volumes. Set up and managed VPCs and subnets, and connected resources across availability zones. Monitored access logs and error logs in AWS CloudWatch. Configured EFS on EC2 instances. Created and configured Elastic Load Balancers to distribute traffic.
Administered Jenkins servers, including Jenkins setup, parameterized builds and deployment automation. Experience creating Jenkins jobs, installing plug-ins, setting up distributed builds and other Jenkins administration activities. Experience managing microservice applications using Docker and Kubernetes. Increased EBS storage capacity using AWS EBS volume features. Created and managed S3 buckets and assigned access permissions. Performed software installations, troubleshooting and updates. Built and released Amazon Linux EC2 instances for development and production environments. Moved EC2 logs into S3. Experience with S3 versioning, server access logging and lifecycle policies on S3 buckets. Created and maintained user accounts, groups and permissions. Created SNS notifications for multiple services in AWS.
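As an illustration of the CloudWatch alarm and SNS work this profile describes, here is a minimal sketch using boto3; the region, instance ID and SNS topic ARN are placeholders, not details from the posting:

import boto3

# Minimal sketch of a CPU-utilization alarm of the kind described above.
# The region, instance ID and SNS topic ARN are placeholder values.
cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="ec2-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,               # evaluate in 5-minute windows
    EvaluationPeriods=2,      # two consecutive breaches before alarming
    Threshold=80.0,           # alarm above 80% average CPU
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],  # notify via SNS
)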
Created and attached Elastic IPs to EC2 instances. Assigned access permissions on files and directories to users and groups. Created and managed user accounts and groups, assigning roles and policies using IAM. Experience with AWS Cloud services such as IAM, S3, VPC, EC2, CloudWatch, CloudFront, CloudTrail, Route 53, EFS, AWS Auto Scaling, EBS, SNS, SES, SQS, KMS, RDS, Security Groups, Lambda, ECS, EKS, Tag Editor and more. Involved in designing and developing solutions with Amazon EC2, Amazon S3, Amazon RDS, Lambda and other services. Created containers in Docker and pulled images for deployment. Created networks, nodes and pods in Kubernetes. Ran deployments through a Jenkins CI/CD pipeline. Created infrastructure using Terraform. Responsible for designing and deploying SCM best-practice processes and procedures. Responsible for branching, merging and resolving conflicts in Git. Set up CI/CD pipelines in Jenkins and scheduled jobs. Established a complete Jenkins CI/CD pipeline and the full workflow of build and delivery pipelines. Involved in writing Dockerfiles to build customized Docker images, creating Docker containers and pushing images to Docker Hub. Created and managed multiple containers using Kubernetes, creating deployments with YAML code. Used Kubernetes to orchestrate the deployment, scaling and management of Docker containers. Experience with monitoring tools like Prometheus and Grafana. Responsible for establishing the complete pipeline workflow, from pulling source code from the Git repository to deploying the end product into a Kubernetes cluster. Managed client infrastructure on both Windows and Linux: creating files and directories; creating users and groups; assigning access permissions on files and directories; installing and managing web servers; installing packages using YUM (HTTP, HTTPS); and monitoring system performance, disk utilization and CPU utilization.
Technical Skills
Operating Systems: Linux, CentOS, Ubuntu and Windows.
AWS: EC2, VPC, S3, EBS, IAM, Load Balancing, Auto Scaling, CloudFormation, CloudWatch, CloudFront, SNS, EFS, Route 53.
DevOps Tools: Git, Ansible, Chef, Docker, Jenkins, Kubernetes, Terraform.
Scripting Languages: Shell, Python.
Monitoring Tools: CloudWatch, Grafana, Prometheus.
Job Types: Full-time, Permanent, Fresher
Pay: ₹345,405.87 - ₹500,000.00 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift, Morning shift, Rotational shift
Supplemental Pay: Performance bonus, Yearly bonus
Work Location: In person
Speak with the employer: +91 8668118196
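The S3 lifecycle policies mentioned above can be expressed in a few lines of boto3; a sketch under the assumption of a placeholder bucket and log prefix:

import boto3

# Minimal sketch: transition old EC2 logs to Glacier, then expire them.
# "my-log-bucket" and the "ec2-logs/" prefix are placeholder names.
s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-log-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire-logs",
                "Filter": {"Prefix": "ec2-logs/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)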
Posted 1 hour ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role Name: Principal Data Scientist
Department Name: AI & Data Science
Role GCF: 6
Hiring Manager Name: Swaroop Suresh
About Amgen
Amgen harnesses the best of biology and technology to fight the world's toughest diseases, and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.
About The Role
Role Description: We are seeking a Principal AI Platform Architect, Amgen's most senior individual-contributor authority on building and scaling end-to-end machine-learning and generative-AI platforms. Sitting at the intersection of engineering excellence and data-science enablement, you will design the core services, infrastructure and governance controls that allow hundreds of practitioners to prototype, deploy and monitor models (classical ML, deep learning and LLMs) securely and cost-effectively. Acting as a "player-coach," you will establish platform strategy, define technical standards, and partner with DevOps, Security, Compliance and Product teams to deliver a frictionless, enterprise-grade AI developer experience.
Roles & Responsibilities:
Define and evangelise a multi-year AI-platform vision and reference architecture that advances Amgen's digital-transformation, cloud-modernisation and product-delivery objectives.
Design and evolve foundational platform components (feature stores, model registry, experiment tracking, vector databases, real-time inference gateways and evaluation harnesses) using cloud-agnostic, micro-service principles.
Establish modelling and algorithm-selection standards that span classical ML, tree-based ensembles, clustering, time-series, deep-learning architectures (CNNs, RNNs, transformers) and modern LLM/RAG techniques; advise product squads on choosing and operationalising the right algorithm for each use case.
Orchestrate the full delivery pipeline for AI solutions (pilot → regulated validation → production rollout → post-launch monitoring), defining stage gates, documentation and sign-off criteria that meet GxP/CSV and global privacy requirements.
Scale AI workloads globally by engineering autoscaling GPU/CPU clusters, distributed training, low-latency inference and cost-aware load balancing, maintaining <100 ms P95 latency while optimising spend.
Implement robust MLOps and release-management practices (CI/CD for models, blue-green and canary deployments, automated rollback) to ensure zero-downtime releases and auditable traceability.
Embed responsible-AI and security-by-design controls (data privacy, lineage tracking, bias monitoring, audit logging) through policy-as-code and automated guardrails.
Package reusable solution blueprints and APIs that enable product teams to consume AI capabilities consistently, cutting time-to-production by ≥50%.
Provide deep technical mentorship and architecture reviews to product squads, troubleshooting performance bottlenecks and guiding optimisation of cloud resources.
Develop TCO models and FinOps practices, negotiate enterprise contracts for cloud/AI infrastructure and deliver continuous cost-efficiency improvements.
Establish observability frameworks (metrics, distributed tracing, drift detection, SLA dashboards) to keep models performant, reliable and compliant at scale.
Track emerging technologies and regulations (serverless GPUs, confidential compute, EU AI Act) and integrate innovations that maintain Amgen's leadership in enterprise AI.
Must-Have Skills:
5-7 years in AI/ML, data platforms or enterprise software.
Comprehensive command of machine-learning algorithms—regression, tree-based ensembles, clustering, dimensionality reduction, time-series models, deep-learning architectures (CNNs, RNNs, transformers) and modern LLM/RAG techniques—with the judgment to choose, tune and operationalise the right method for a given business problem.
Proven track record selecting and integrating AI SaaS/PaaS offerings and building custom ML services at scale.
Expert knowledge of GenAI tooling: vector databases, RAG pipelines, prompt-engineering DSLs and agent frameworks (e.g., LangChain, Semantic Kernel).
Proficiency in Python and Java; containerisation (Docker/K8s); cloud (AWS, Azure or GCP) and modern DevOps/MLOps (GitHub Actions, Bedrock/SageMaker Pipelines).
Strong business-case skills—able to model TCO vs. NPV and present trade-offs to executives.
Exceptional stakeholder management; can translate complex technical concepts into concise, outcome-oriented narratives.
Good-to-Have Skills:
Experience in the biotechnology or pharma industry is a big plus.
Published thought leadership or conference talks on enterprise GenAI adoption.
Master's degree in Computer Science and/or Data Science.
Familiarity with Agile methodologies and the Scaled Agile Framework (SAFe) for project delivery.
Education and Professional Certifications
Master's degree with 10-14+ years of experience in Computer Science, IT or a related field OR Bachelor's degree with 12-17+ years of experience in Computer Science, IT or a related field.
Certifications on GenAI/ML platforms (AWS AI, Azure AI Engineer, Google Cloud ML, etc.) are a plus.
Soft Skills:
Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Ability to learn quickly, be organized and detail-oriented.
Strong presentation and public speaking skills.
EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
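To make the RAG retrieval step this role keeps returning to concrete, a toy Python sketch; the random vectors stand in for embeddings that a real platform would pull from an embedding model and a vector database:

import numpy as np

# Toy sketch of RAG retrieval: rank stored document embeddings by cosine
# similarity to a query embedding, then hand the top hits to an LLM prompt.
def top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 3) -> np.ndarray:
    # Normalise so a dot product equals cosine similarity.
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    return np.argsort(scores)[::-1][:k]  # indices of the k most similar docs

docs = np.random.rand(100, 384)   # stand-in for stored document embeddings
query = np.random.rand(384)       # stand-in for the query embedding
print(top_k(query, docs))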
Posted 20 hours ago
8.0 years
0 Lacs
Goregaon, Maharashtra, India
On-site
Experience: 8+ years
Location: Mumbai (Onsite)
About the Role: We are looking for hands-on and automation-driven Cloud Engineers to join our DevOps team. You will be responsible for managing cloud infrastructure, CI/CD pipelines, containerized deployments, and ensuring platform stability and scalability across environments.
Key Responsibilities:
Design, build, and maintain secure and scalable infrastructure on AWS, Azure, or GCP.
Set up and manage CI/CD pipelines using tools like GitHub Actions, GitLab CI, or Jenkins.
Manage Dockerized environments, ECS, EKS, or Kubernetes clusters for microservice-based deployments.
Monitor and troubleshoot production and staging environments, ensuring uptime and performance.
Work closely with developers to streamline release cycles and automate testing, deployments, and rollback procedures.
Maintain infrastructure as code using Terraform or CloudFormation.
What We're Looking For:
Strong knowledge of Linux system administration, networking, and cloud infrastructure (preferably AWS).
Experience with Docker, Kubernetes, Nginx, and monitoring tools like Prometheus, Grafana, or CloudWatch.
Familiarity with Git, scripting (Shell/Python), and secrets management tools.
Ability to debug infrastructure issues, logs, and deployments across cloud-native stacks.
Bonus Points:
Certification in AWS/GCP/Azure DevOps or SysOps.
Exposure to security, cost optimization, and autoscaling setups.
Work Mode: Onsite – Mumbai
Why Join Us?
Direct ownership over production-grade infrastructure.
Build systems that support AI, web apps, APIs, and real products.
Get early visibility into architecture, security, and scalability decisions.
Clear growth track: ACE → CE → SCE → SPCE → Cloud Architect / DevOps Lead.
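As a small illustration of the monitoring responsibility above, a sketch of an uptime probe; the URLs are placeholders, and a real setup would feed the results into Prometheus or CloudWatch rather than print them:

import time
import requests

# Minimal uptime probe: hit each health endpoint, record status and latency.
ENDPOINTS = [
    "https://staging.example.com/health",  # placeholder URLs
    "https://prod.example.com/health",
]

for url in ENDPOINTS:
    start = time.monotonic()
    try:
        resp = requests.get(url, timeout=5)
        latency_ms = round((time.monotonic() - start) * 1000)
        status = "UP" if resp.status_code == 200 else f"DEGRADED ({resp.status_code})"
    except requests.RequestException as exc:
        latency_ms, status = None, f"DOWN ({type(exc).__name__})"
    print(f"{url}: {status} latency_ms={latency_ms}")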
Posted 2 days ago
12.0 years
0 Lacs
India
On-site
Role Overview: We're seeking a senior Azure Infrastructure Engineer with 8–12 years of deep hands-on experience in building, deploying, and operating cloud-native infrastructure. You'll be responsible for core components like AKS, Terraform, Docker, Helm, KEDA, HPA, Istio/service mesh, CI/CD pipelines, Azure networking, and disaster recovery.
Key Responsibilities:
Operate and troubleshoot production AKS clusters. – [Primary Skill - Expertise]
Build and deploy workloads using Docker and Helm. – [Secondary Skill - Need to know what to do]
Automate infrastructure provisioning with Terraform. – [Primary Skill - Expertise]
Configure autoscaling using KEDA and HPA. – [Primary Skill - Expertise]
Manage Istio or an equivalent service mesh (ingress, routing, mTLS). – [Low Priority Skill]
Maintain robust CI/CD pipelines (Azure DevOps/GitHub Actions). – [Secondary Skill - Need to know what to do]
Handle complex Azure networking (VNet, NSG, DNS, LB, peering). – [Primary Skill - Expertise]
Support and execute disaster recovery procedures. – [Primary Skill - Expertise]
Required Skills: 8–12 years in infrastructure/DevOps roles with deep expertise in: Azure, AKS, Docker, Terraform, Helm; KEDA, HPA, Istio/service mesh; CI/CD pipelines, Linux, Bash/PowerShell scripting; Azure networking and disaster recovery.
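For the KEDA/HPA work above, a sketch that inspects autoscaler state with the official Kubernetes Python client; it assumes a reachable cluster and local kubeconfig, and the "production" namespace is a placeholder:

from kubernetes import client, config

# Minimal sketch: list HorizontalPodAutoscalers in a namespace and print
# current vs. desired replicas. Assumes a kubeconfig is available locally.
config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

for hpa in autoscaling.list_namespaced_horizontal_pod_autoscaler("production").items:
    spec, status = hpa.spec, hpa.status
    print(
        f"{hpa.metadata.name}: {status.current_replicas}/{status.desired_replicas} replicas "
        f"(min={spec.min_replicas}, max={spec.max_replicas}, "
        f"cpu={status.current_cpu_utilization_percentage}%)"
    )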
Posted 2 days ago
1.0 years
25 Lacs
Kochi, Kerala, India
Remote
Experience: 1.00+ years
Salary: INR 2500000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: Yugen AI)
(*Note: This is a requirement for one of Uplers' clients - Yugen AI)
What do you need for this opportunity?
Must-have skills required: Rust, Rust programming, AWS, GCP, Kafka, NATS, Grafana/Prometheus, Blockchain
Yugen AI is looking for: We are looking for backend engineers with 1–3 years of production experience shipping and supporting backend code. You will be part of the team that owns the real-time ingestion and analytics layer powering customer-facing dashboards, trading tools and research.
Responsibilities
Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets.
Guarantee end-to-end reliability by owning performance, fault-tolerance, and cost efficiency from source to sink.
Instrument every job with tracing, structured logs, and Prometheus metrics so each job tells you how it is doing.
Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice.
Partner with DevOps to containerize workloads and automate deployments.
Collaborate with stakeholders to verify data completeness, run automated reconciliation checks, and replay late or corrected records to keep datasets pristine.
Skills
Proficient in Rust – comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation.
Stream-processing – have built or maintained high-throughput pipelines on NATS (ideal) or Kafka.
Deep systems engineering skills – you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation; can instrument code with tracing, metrics, and logs.
ClickHouse (or similar OLAP) – able to design MergeTree tables, reason about partition and order-by keys, and optimise bulk inserts.
Cloud – have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups.
Nice-to-have – exposure to blockchain or high-volume financial data streams.
How to apply for this opportunity?
Step 1: Click on Apply, then register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of being shortlisted and meet the client for the interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
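To illustrate the instrument-every-job responsibility, a minimal sketch using Python's prometheus_client; the production pipelines this posting describes would be written in Rust, and the metric names here are invented for the example:

import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Sketch of job instrumentation: count processed/failed records and time each
# batch, exposing everything on a /metrics endpoint for Prometheus to scrape.
RECORDS = Counter("pipeline_records_total", "Records processed", ["outcome"])
BATCH_SECONDS = Histogram("pipeline_batch_seconds", "Batch processing time")

def process_batch() -> None:
    with BATCH_SECONDS.time():          # observe batch latency
        for _ in range(100):
            outcome = "ok" if random.random() < 0.99 else "error"
            RECORDS.labels(outcome=outcome).inc()

if __name__ == "__main__":
    start_http_server(8000)             # serve metrics on :8000/metrics
    while True:
        process_batch()
        time.sleep(1)

Grafana dashboards and alert rules for latency, throughput and failure rate can then be built directly on series like these.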
Posted 2 days ago
1.0 years
25 Lacs
Greater Bhopal Area
Remote
Experience: 1.00+ years
Salary: INR 2500000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: Yugen AI)
(*Note: This is a requirement for one of Uplers' clients - Yugen AI)
What do you need for this opportunity?
Must-have skills required: Rust, Rust programming, AWS, GCP, Kafka, NATS, Grafana/Prometheus, Blockchain
Yugen AI is looking for: We are looking for backend engineers with 1–3 years of production experience shipping and supporting backend code. You will be part of the team that owns the real-time ingestion and analytics layer powering customer-facing dashboards, trading tools and research.
Responsibilities
Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets.
Guarantee end-to-end reliability by owning performance, fault-tolerance, and cost efficiency from source to sink.
Instrument every job with tracing, structured logs, and Prometheus metrics so each job tells you how it is doing.
Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice.
Partner with DevOps to containerize workloads and automate deployments.
Collaborate with stakeholders to verify data completeness, run automated reconciliation checks, and replay late or corrected records to keep datasets pristine.
Skills
Proficient in Rust – comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation.
Stream-processing – have built or maintained high-throughput pipelines on NATS (ideal) or Kafka.
Deep systems engineering skills – you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation; can instrument code with tracing, metrics, and logs.
ClickHouse (or similar OLAP) – able to design MergeTree tables, reason about partition and order-by keys, and optimise bulk inserts.
Cloud – have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups.
Nice-to-have – exposure to blockchain or high-volume financial data streams.
How to apply for this opportunity?
Step 1: Click on Apply, then register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of being shortlisted and meet the client for the interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 days ago
1.0 years
25 Lacs
Visakhapatnam, Andhra Pradesh, India
Remote
Experience: 1.00+ years
Salary: INR 2500000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: Yugen AI)
(*Note: This is a requirement for one of Uplers' clients - Yugen AI)
What do you need for this opportunity?
Must-have skills required: Rust, Rust programming, AWS, GCP, Kafka, NATS, Grafana/Prometheus, Blockchain
Yugen AI is looking for: We are looking for backend engineers with 1–3 years of production experience shipping and supporting backend code. You will be part of the team that owns the real-time ingestion and analytics layer powering customer-facing dashboards, trading tools and research.
Responsibilities
Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets.
Guarantee end-to-end reliability by owning performance, fault-tolerance, and cost efficiency from source to sink.
Instrument every job with tracing, structured logs, and Prometheus metrics so each job tells you how it is doing.
Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice.
Partner with DevOps to containerize workloads and automate deployments.
Collaborate with stakeholders to verify data completeness, run automated reconciliation checks, and replay late or corrected records to keep datasets pristine.
Skills
Proficient in Rust – comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation.
Stream-processing – have built or maintained high-throughput pipelines on NATS (ideal) or Kafka.
Deep systems engineering skills – you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation; can instrument code with tracing, metrics, and logs.
ClickHouse (or similar OLAP) – able to design MergeTree tables, reason about partition and order-by keys, and optimise bulk inserts.
Cloud – have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups.
Nice-to-have – exposure to blockchain or high-volume financial data streams.
How to apply for this opportunity?
Step 1: Click on Apply, then register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of being shortlisted and meet the client for the interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 days ago
1.0 years
25 Lacs
Indore, Madhya Pradesh, India
Remote
Experience: 1.00+ years
Salary: INR 2500000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: Yugen AI)
(*Note: This is a requirement for one of Uplers' clients - Yugen AI)
What do you need for this opportunity?
Must-have skills required: Rust, Rust programming, AWS, GCP, Kafka, NATS, Grafana/Prometheus, Blockchain
Yugen AI is looking for: We are looking for backend engineers with 1–3 years of production experience shipping and supporting backend code. You will be part of the team that owns the real-time ingestion and analytics layer powering customer-facing dashboards, trading tools and research.
Responsibilities
Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets.
Guarantee end-to-end reliability by owning performance, fault-tolerance, and cost efficiency from source to sink.
Instrument every job with tracing, structured logs, and Prometheus metrics so each job tells you how it is doing.
Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice.
Partner with DevOps to containerize workloads and automate deployments.
Collaborate with stakeholders to verify data completeness, run automated reconciliation checks, and replay late or corrected records to keep datasets pristine.
Skills
Proficient in Rust – comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation.
Stream-processing – have built or maintained high-throughput pipelines on NATS (ideal) or Kafka.
Deep systems engineering skills – you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation; can instrument code with tracing, metrics, and logs.
ClickHouse (or similar OLAP) – able to design MergeTree tables, reason about partition and order-by keys, and optimise bulk inserts.
Cloud – have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups.
Nice-to-have – exposure to blockchain or high-volume financial data streams.
How to apply for this opportunity?
Step 1: Click on Apply, then register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of being shortlisted and meet the client for the interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 days ago
1.0 years
25 Lacs
Thiruvananthapuram, Kerala, India
Remote
Experience: 1.00+ years
Salary: INR 2500000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: Yugen AI)
(*Note: This is a requirement for one of Uplers' clients - Yugen AI)
What do you need for this opportunity?
Must-have skills required: Rust, Rust programming, AWS, GCP, Kafka, NATS, Grafana/Prometheus, Blockchain
Yugen AI is looking for: We are looking for backend engineers with 1–3 years of production experience shipping and supporting backend code. You will be part of the team that owns the real-time ingestion and analytics layer powering customer-facing dashboards, trading tools and research.
Responsibilities
Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets.
Guarantee end-to-end reliability by owning performance, fault-tolerance, and cost efficiency from source to sink.
Instrument every job with tracing, structured logs, and Prometheus metrics so each job tells you how it is doing.
Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice.
Partner with DevOps to containerize workloads and automate deployments.
Collaborate with stakeholders to verify data completeness, run automated reconciliation checks, and replay late or corrected records to keep datasets pristine.
Skills
Proficient in Rust – comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation.
Stream-processing – have built or maintained high-throughput pipelines on NATS (ideal) or Kafka.
Deep systems engineering skills – you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation; can instrument code with tracing, metrics, and logs.
ClickHouse (or similar OLAP) – able to design MergeTree tables, reason about partition and order-by keys, and optimise bulk inserts.
Cloud – have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups.
Nice-to-have – exposure to blockchain or high-volume financial data streams.
How to apply for this opportunity?
Step 1: Click on Apply, then register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of being shortlisted and meet the client for the interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 days ago
1.0 years
25 Lacs
Chandigarh, India
Remote
Experience: 1.00+ years
Salary: INR 2500000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: Yugen AI)
(*Note: This is a requirement for one of Uplers' clients - Yugen AI)
What do you need for this opportunity?
Must-have skills required: Rust, Rust programming, AWS, GCP, Kafka, NATS, Grafana/Prometheus, Blockchain
Yugen AI is looking for: We are looking for backend engineers with 1–3 years of production experience shipping and supporting backend code. You will be part of the team that owns the real-time ingestion and analytics layer powering customer-facing dashboards, trading tools and research.
Responsibilities
Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets.
Guarantee end-to-end reliability by owning performance, fault-tolerance, and cost efficiency from source to sink.
Instrument every job with tracing, structured logs, and Prometheus metrics so each job tells you how it is doing.
Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice.
Partner with DevOps to containerize workloads and automate deployments.
Collaborate with stakeholders to verify data completeness, run automated reconciliation checks, and replay late or corrected records to keep datasets pristine.
Skills
Proficient in Rust – comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation.
Stream-processing – have built or maintained high-throughput pipelines on NATS (ideal) or Kafka.
Deep systems engineering skills – you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation; can instrument code with tracing, metrics, and logs.
ClickHouse (or similar OLAP) – able to design MergeTree tables, reason about partition and order-by keys, and optimise bulk inserts.
Cloud – have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups.
Nice-to-have – exposure to blockchain or high-volume financial data streams.
How to apply for this opportunity?
Step 1: Click on Apply, then register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of being shortlisted and meet the client for the interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 days ago
1.0 years
25 Lacs
Dehradun, Uttarakhand, India
Remote
Experience: 1.00+ years
Salary: INR 2500000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: Yugen AI)
(*Note: This is a requirement for one of Uplers' clients - Yugen AI)
What do you need for this opportunity?
Must-have skills required: Rust, Rust programming, AWS, GCP, Kafka, NATS, Grafana/Prometheus, Blockchain
Yugen AI is looking for: We are looking for backend engineers with 1–3 years of production experience shipping and supporting backend code. You will be part of the team that owns the real-time ingestion and analytics layer powering customer-facing dashboards, trading tools and research.
Responsibilities
Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets.
Guarantee end-to-end reliability by owning performance, fault-tolerance, and cost efficiency from source to sink.
Instrument every job with tracing, structured logs, and Prometheus metrics so each job tells you how it is doing.
Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice.
Partner with DevOps to containerize workloads and automate deployments.
Collaborate with stakeholders to verify data completeness, run automated reconciliation checks, and replay late or corrected records to keep datasets pristine.
Skills
Proficient in Rust – comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation.
Stream-processing – have built or maintained high-throughput pipelines on NATS (ideal) or Kafka.
Deep systems engineering skills – you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation; can instrument code with tracing, metrics, and logs.
ClickHouse (or similar OLAP) – able to design MergeTree tables, reason about partition and order-by keys, and optimise bulk inserts.
Cloud – have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups.
Nice-to-have – exposure to blockchain or high-volume financial data streams.
How to apply for this opportunity?
Step 1: Click on Apply, then register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of being shortlisted and meet the client for the interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 days ago
1.0 years
25 Lacs
Vijayawada, Andhra Pradesh, India
Remote
Experience: 1.00+ years
Salary: INR 2500000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: Yugen AI)
(*Note: This is a requirement for one of Uplers' clients - Yugen AI)
What do you need for this opportunity?
Must-have skills required: Rust, Rust programming, AWS, GCP, Kafka, NATS, Grafana/Prometheus, Blockchain
Yugen AI is looking for: We are looking for backend engineers with 1–3 years of production experience shipping and supporting backend code. You will be part of the team that owns the real-time ingestion and analytics layer powering customer-facing dashboards, trading tools and research.
Responsibilities
Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets.
Guarantee end-to-end reliability by owning performance, fault-tolerance, and cost efficiency from source to sink.
Instrument every job with tracing, structured logs, and Prometheus metrics so each job tells you how it is doing.
Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice.
Partner with DevOps to containerize workloads and automate deployments.
Collaborate with stakeholders to verify data completeness, run automated reconciliation checks, and replay late or corrected records to keep datasets pristine.
Skills
Proficient in Rust – comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation.
Stream-processing – have built or maintained high-throughput pipelines on NATS (ideal) or Kafka.
Deep systems engineering skills – you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation; can instrument code with tracing, metrics, and logs.
ClickHouse (or similar OLAP) – able to design MergeTree tables, reason about partition and order-by keys, and optimise bulk inserts.
Cloud – have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups.
Nice-to-have – exposure to blockchain or high-volume financial data streams.
How to apply for this opportunity?
Step 1: Click on Apply, then register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of being shortlisted and meet the client for the interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 days ago
1.0 years
25 Lacs
Mysore, Karnataka, India
Remote
Experience: 1.00+ years
Salary: INR 2500000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: Yugen AI)
(*Note: This is a requirement for one of Uplers' clients - Yugen AI)
What do you need for this opportunity?
Must-have skills required: Rust, Rust programming, AWS, GCP, Kafka, NATS, Grafana/Prometheus, Blockchain
Yugen AI is looking for: We are looking for backend engineers with 1–3 years of production experience shipping and supporting backend code. You will be part of the team that owns the real-time ingestion and analytics layer powering customer-facing dashboards, trading tools and research.
Responsibilities
Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets.
Guarantee end-to-end reliability by owning performance, fault-tolerance, and cost efficiency from source to sink.
Instrument every job with tracing, structured logs, and Prometheus metrics so each job tells you how it is doing.
Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice.
Partner with DevOps to containerize workloads and automate deployments.
Collaborate with stakeholders to verify data completeness, run automated reconciliation checks, and replay late or corrected records to keep datasets pristine.
Skills
Proficient in Rust – comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation.
Stream-processing – have built or maintained high-throughput pipelines on NATS (ideal) or Kafka.
Deep systems engineering skills – you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation; can instrument code with tracing, metrics, and logs.
ClickHouse (or similar OLAP) – able to design MergeTree tables, reason about partition and order-by keys, and optimise bulk inserts.
Cloud – have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups.
Nice-to-have – exposure to blockchain or high-volume financial data streams.
How to apply for this opportunity?
Step 1: Click on Apply, then register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of being shortlisted and meet the client for the interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 days ago
1.0 years
25 Lacs
Patna, Bihar, India
Remote
Experience: 1.00+ years
Salary: INR 2500000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: Yugen AI)
(*Note: This is a requirement for one of Uplers' clients - Yugen AI)
What do you need for this opportunity?
Must-have skills required: Rust, Rust programming, AWS, GCP, Kafka, NATS, Grafana/Prometheus, Blockchain
Yugen AI is looking for: We are looking for backend engineers with 1–3 years of production experience shipping and supporting backend code. You will be part of the team that owns the real-time ingestion and analytics layer powering customer-facing dashboards, trading tools and research.
Responsibilities
Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets.
Guarantee end-to-end reliability by owning performance, fault-tolerance, and cost efficiency from source to sink.
Instrument every job with tracing, structured logs, and Prometheus metrics so each job tells you how it is doing.
Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice.
Partner with DevOps to containerize workloads and automate deployments.
Collaborate with stakeholders to verify data completeness, run automated reconciliation checks, and replay late or corrected records to keep datasets pristine.
Skills
Proficient in Rust – comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation.
Stream-processing – have built or maintained high-throughput pipelines on NATS (ideal) or Kafka.
Deep systems engineering skills – you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation; can instrument code with tracing, metrics, and logs.
ClickHouse (or similar OLAP) – able to design MergeTree tables, reason about partition and order-by keys, and optimise bulk inserts.
Cloud – have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups.
Nice-to-have – exposure to blockchain or high-volume financial data streams.
How to apply for this opportunity?
Step 1: Click on Apply, then register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of being shortlisted and meet the client for the interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 days ago
1.0 years
25 Lacs
Ghaziabad, Uttar Pradesh, India
Remote
Experience: 1.00+ years
Salary: INR 2500000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: Yugen AI)
(*Note: This is a requirement for one of Uplers' clients - Yugen AI)
What do you need for this opportunity?
Must-have skills required: Rust, Rust programming, AWS, GCP, Kafka, NATS, Grafana/Prometheus, Blockchain
Yugen AI is looking for: We are looking for backend engineers with 1–3 years of production experience shipping and supporting backend code. You will be part of the team that owns the real-time ingestion and analytics layer powering customer-facing dashboards, trading tools and research.
Responsibilities
Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets.
Guarantee end-to-end reliability by owning performance, fault-tolerance, and cost efficiency from source to sink.
Instrument every job with tracing, structured logs, and Prometheus metrics so each job tells you how it is doing.
Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice.
Partner with DevOps to containerize workloads and automate deployments.
Collaborate with stakeholders to verify data completeness, run automated reconciliation checks, and replay late or corrected records to keep datasets pristine.
Skills
Proficient in Rust – comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation.
Stream-processing – have built or maintained high-throughput pipelines on NATS (ideal) or Kafka.
Deep systems engineering skills – you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation; can instrument code with tracing, metrics, and logs.
ClickHouse (or similar OLAP) – able to design MergeTree tables, reason about partition and order-by keys, and optimise bulk inserts.
Cloud – have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups.
Nice-to-have – exposure to blockchain or high-volume financial data streams.
How to apply for this opportunity?
Step 1: Click on Apply, then register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of being shortlisted and meet the client for the interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 days ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description
We are looking for an experienced LLM Engineer with 3 to 6 years of software development experience and at least 1-2 years of hands-on expertise in building LLM-based solutions. The ideal candidate will have strong skills in Python, LLM development tools (RAG, vector databases, agentic workflows, LoRA, etc.), cloud platforms (AWS, Azure, GCP), DevOps, and full-stack development.
Responsibilities:
Design and develop scalable, distributed software systems using modern architecture patterns.
Lead the development of LLM-based applications using RAG, vector DBs, agentic workflows, LoRA, QLoRA, and related technologies.
Translate business requirements into technical solutions and drive technical execution.
Build, deploy, and maintain LLM pipelines integrated with APIs and cloud platforms.
Implement DevOps practices using Docker and Kubernetes, and automate CI/CD pipelines.
Work with cloud services such as AWS, Azure, or GCP to deploy scalable applications.
Collaborate with cross-functional teams, perform code reviews, and follow best engineering practices.
Develop APIs and backend services using Python (FastAPI, Django) with secure authentication (JWT, Azure AD, IDM).
Contribute to front-end development using ReactJS, NextJS, Tailwind CSS (preferred).
Utilize LLM APIs (OpenAI, Anthropic, AWS Bedrock) and SDKs (LangChain, DSPy) for application development.
Ensure application security, performance, scalability, and compliance with privacy standards.
Follow Agile methodology for continuous development and delivery.
Skills & Qualifications:
Bachelor's degree in Computer Science, IT, or a related field (or equivalent experience).
5+ years of software development experience, including at least 1-2 years building LLM solutions.
Proficient in Python and JavaScript.
Strong experience with LLM patterns like RAG, vector databases, hybrid search, agent development, prompt engineering, agentic workflows, etc.
Knowledge of API development using FastAPI and Django, plus WebSockets and gRPC.
Familiarity with access management using JWT, Azure AD, IDM.
Hands-on experience with LLM APIs (OpenAI, Anthropic, AWS Bedrock) and SDKs (LangChain, DSPy).
Experience with cloud platforms such as AWS, Azure, and GCP, including IAM, monitoring, load balancing, autoscaling, networking, databases, storage, ECR, AKS, ACR.
Experience with DevOps tools: Docker, Kubernetes, CI/CD pipelines, automation scripts.
Exposure to front-end frameworks like ReactJS, NextJS, Tailwind CSS (preferred).
Experience deploying production-grade LLM applications for large user bases.
Strong knowledge of software engineering practices (Git, version control, Agile/DevOps).
Excellent communication skills with the ability to explain complex concepts clearly.
Strong understanding of scalable system design, security best practices, and compliance standards.
Familiarity with SDLC processes and Agile product development cycles.
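A minimal sketch of the kind of Python/FastAPI backend service this posting describes; the answer_with_llm helper is a hypothetical stand-in for a real retrieval-plus-LLM call:

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Question(BaseModel):
    text: str

def answer_with_llm(question: str) -> str:
    # Hypothetical stand-in: a real service would retrieve context from a
    # vector database and call an LLM API (OpenAI, Anthropic, Bedrock, ...).
    return f"echo: {question}"

@app.post("/ask")
def ask(question: Question) -> dict:
    # Input is validated by pydantic; the LLM layer is stubbed out above.
    return {"answer": answer_with_llm(question.text)}

# Run locally with: uvicorn main:app --reload  (assuming this file is main.py)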
Posted 2 days ago
1.0 years
25 Lacs
Agra, Uttar Pradesh, India
Remote
Experience: 1.00+ years
Salary: INR 2500000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: Yugen AI)
(*Note: This is a requirement for one of Uplers' clients - Yugen AI)
What do you need for this opportunity?
Must-have skills required: Rust, Rust programming, AWS, GCP, Kafka, NATS, Grafana/Prometheus, Blockchain
Yugen AI is looking for: We are looking for backend engineers with 1–3 years of production experience shipping and supporting backend code. You will be part of the team that owns the real-time ingestion and analytics layer powering customer-facing dashboards, trading tools and research.
Responsibilities
Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets.
Guarantee end-to-end reliability by owning performance, fault-tolerance, and cost efficiency from source to sink.
Instrument every job with tracing, structured logs, and Prometheus metrics so each job tells you how it is doing.
Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice.
Partner with DevOps to containerize workloads and automate deployments.
Collaborate with stakeholders to verify data completeness, run automated reconciliation checks, and replay late or corrected records to keep datasets pristine.
Skills
Proficient in Rust – comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation.
Stream-processing – have built or maintained high-throughput pipelines on NATS (ideal) or Kafka.
Deep systems engineering skills – you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation; can instrument code with tracing, metrics, and logs.
ClickHouse (or similar OLAP) – able to design MergeTree tables, reason about partition and order-by keys, and optimise bulk inserts.
Cloud – have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups.
Nice-to-have – exposure to blockchain or high-volume financial data streams.
How to apply for this opportunity?
Step 1: Click on Apply, then register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of being shortlisted and meet the client for the interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 days ago
The same Yugen AI backend engineer opening (identical description, requirements, salary, and terms) is also listed for remote candidates based in Noida (Uttar Pradesh), Surat (Gujarat), Kolkata (West Bengal), Ahmedabad (Gujarat), Cuttack (Odisha), Bhubaneswar (Odisha), Guwahati (Assam), and Ranchi (Jharkhand). Each listing was posted 2 days ago.