Backend Engineer (Rust) — Yugen AI (via Uplers)

Experience: 1–3 years | Salary: INR 25,00,000 (25 lacs) per year, based on experience | Expected notice period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST) | Opportunity type: Remote | Placement type: Full-time permanent position (payroll and compliance managed by Yugen AI)
Location: Cuttack, Odisha, India (Remote)

Note: This is a requirement for one of Uplers' clients, Yugen AI.

Must-have skills: Rust programming, AWS, GCP, Kafka, NATS, Grafana/Prometheus, blockchain.

What Yugen AI is looking for
Backend engineers with 1–3 years of production experience shipping and supporting backend code. You will join the team that owns the real-time ingestion and analytics layer powering customer-facing dashboards, trading tools, and research.

Responsibilities
- Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets.
- Guarantee end-to-end reliability by owning performance, fault tolerance, and cost efficiency from source to sink.
- Instrument every job with tracing, structured logs, and Prometheus metrics, so each job reports how it is doing.
- Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice.
- Partner with DevOps to containerize workloads and automate deployments.
- Collaborate with stakeholders to verify data completeness, automate reconciliation checks, and replay late or corrected records to keep datasets pristine.

Skills
- Proficient in Rust: comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation.
- Stream processing: have built or maintained high-throughput pipelines on NATS (ideal) or Kafka.
- Deep systems engineering: you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation, and can instrument code with tracing, metrics, and logs.
- ClickHouse (or a similar OLAP store): able to design MergeTree tables, reason about partition and ORDER BY keys, and optimise bulk inserts.
- Cloud: have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups.
- Nice to have: exposure to blockchain or high-volume financial data streams.

How to apply
Step 1: Click Apply, then register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Improve your chances of being shortlisted and meet the client for the interview.

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role is to help our talent find and apply for relevant contractual onsite opportunities and progress in their careers, and we will support you with any grievances or challenges you face during the engagement. (Note: there are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 day ago
1.0 years
25 Lacs
Bhubaneswar, Odisha, India
Remote
Experience : 1.00 + years Salary : INR 2500000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Yugen AI) (*Note: This is a requirement for one of Uplers' client - Yugen AI) What do you need for this opportunity? Must have skills required: Rust, Rust programming, AWS, GCP, Kafka, NATS, Grafana/Prometheus, Blockchain Yugen AI is Looking for: We are looking for backend Engineers with 1–3 years of production experience shipping and supporting backend code. You will be a part of our team owning the real-time ingestion and analytics layer that powers customer-facing dashboards, trading tools and research. Responsibilities Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets. Guarantee end-to-end reliability by owning performance, fault-tolerance, and cost efficiency from source to sink. Instrument every job with tracing, structured logs, and Prometheus metrics so every job tells you how it’s doing. Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice. Partner with DevOps to containerize workloads and automate deployments. Automate reconciliation checks, confirm completeness, and replay late or corrected records to maintain pristine datasets. Collaborate with stakeholders to verify data completeness, run automated reconciliation checks, and re-process late or corrected data. Skills Proficient in Rust – comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation. Stream-processing – have built or maintained high-throughput pipelines on NATS (ideal) or Kafka. Deep systems engineering skills – you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation; can instrument code with tracing, metrics, and logs. 
ClickHouse (or similar OLAP) – able to design MergeTree tables, reason about partitions / order-by keys, and optimise bulk inserts. Cloud– have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups. Nice-to-have – exposure to blockchain or high-volume financial data streams. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 day ago
1.0 years
25 Lacs
Guwahati, Assam, India
Remote
Experience : 1.00 + years Salary : INR 2500000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Yugen AI) (*Note: This is a requirement for one of Uplers' client - Yugen AI) What do you need for this opportunity? Must have skills required: Rust, Rust programming, AWS, GCP, Kafka, NATS, Grafana/Prometheus, Blockchain Yugen AI is Looking for: We are looking for backend Engineers with 1–3 years of production experience shipping and supporting backend code. You will be a part of our team owning the real-time ingestion and analytics layer that powers customer-facing dashboards, trading tools and research. Responsibilities Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets. Guarantee end-to-end reliability by owning performance, fault-tolerance, and cost efficiency from source to sink. Instrument every job with tracing, structured logs, and Prometheus metrics so every job tells you how it’s doing. Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice. Partner with DevOps to containerize workloads and automate deployments. Automate reconciliation checks, confirm completeness, and replay late or corrected records to maintain pristine datasets. Collaborate with stakeholders to verify data completeness, run automated reconciliation checks, and re-process late or corrected data. Skills Proficient in Rust – comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation. Stream-processing – have built or maintained high-throughput pipelines on NATS (ideal) or Kafka. Deep systems engineering skills – you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation; can instrument code with tracing, metrics, and logs. 
ClickHouse (or similar OLAP) – able to design MergeTree tables, reason about partitions / order-by keys, and optimise bulk inserts. Cloud– have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups. Nice-to-have – exposure to blockchain or high-volume financial data streams. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 day ago
1.0 years
25 Lacs
Ranchi, Jharkhand, India
Remote
Experience : 1.00 + years Salary : INR 2500000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Yugen AI) (*Note: This is a requirement for one of Uplers' client - Yugen AI) What do you need for this opportunity? Must have skills required: Rust, Rust programming, AWS, GCP, Kafka, NATS, Grafana/Prometheus, Blockchain Yugen AI is Looking for: We are looking for backend Engineers with 1–3 years of production experience shipping and supporting backend code. You will be a part of our team owning the real-time ingestion and analytics layer that powers customer-facing dashboards, trading tools and research. Responsibilities Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets. Guarantee end-to-end reliability by owning performance, fault-tolerance, and cost efficiency from source to sink. Instrument every job with tracing, structured logs, and Prometheus metrics so every job tells you how it’s doing. Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice. Partner with DevOps to containerize workloads and automate deployments. Automate reconciliation checks, confirm completeness, and replay late or corrected records to maintain pristine datasets. Collaborate with stakeholders to verify data completeness, run automated reconciliation checks, and re-process late or corrected data. Skills Proficient in Rust – comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation. Stream-processing – have built or maintained high-throughput pipelines on NATS (ideal) or Kafka. Deep systems engineering skills – you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation; can instrument code with tracing, metrics, and logs. 
ClickHouse (or similar OLAP) – able to design MergeTree tables, reason about partitions / order-by keys, and optimise bulk inserts. Cloud– have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups. Nice-to-have – exposure to blockchain or high-volume financial data streams. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 day ago
1.0 years
25 Lacs
Raipur, Chhattisgarh, India
Remote
Experience : 1.00 + years Salary : INR 2500000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Yugen AI) (*Note: This is a requirement for one of Uplers' client - Yugen AI) What do you need for this opportunity? Must have skills required: Rust, Rust programming, AWS, GCP, Kafka, NATS, Grafana/Prometheus, Blockchain Yugen AI is Looking for: We are looking for backend Engineers with 1–3 years of production experience shipping and supporting backend code. You will be a part of our team owning the real-time ingestion and analytics layer that powers customer-facing dashboards, trading tools and research. Responsibilities Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets. Guarantee end-to-end reliability by owning performance, fault-tolerance, and cost efficiency from source to sink. Instrument every job with tracing, structured logs, and Prometheus metrics so every job tells you how it’s doing. Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice. Partner with DevOps to containerize workloads and automate deployments. Automate reconciliation checks, confirm completeness, and replay late or corrected records to maintain pristine datasets. Collaborate with stakeholders to verify data completeness, run automated reconciliation checks, and re-process late or corrected data. Skills Proficient in Rust – comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation. Stream-processing – have built or maintained high-throughput pipelines on NATS (ideal) or Kafka. Deep systems engineering skills – you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation; can instrument code with tracing, metrics, and logs. 
ClickHouse (or similar OLAP) – able to design MergeTree tables, reason about partitions / order-by keys, and optimise bulk inserts. Cloud– have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups. Nice-to-have – exposure to blockchain or high-volume financial data streams. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 day ago
1.0 years
25 Lacs
Jamshedpur, Jharkhand, India
Remote
Experience : 1.00 + years Salary : INR 2500000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Yugen AI) (*Note: This is a requirement for one of Uplers' client - Yugen AI) What do you need for this opportunity? Must have skills required: Rust, Rust programming, AWS, GCP, Kafka, NATS, Grafana/Prometheus, Blockchain Yugen AI is Looking for: We are looking for backend Engineers with 1–3 years of production experience shipping and supporting backend code. You will be a part of our team owning the real-time ingestion and analytics layer that powers customer-facing dashboards, trading tools and research. Responsibilities Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets. Guarantee end-to-end reliability by owning performance, fault-tolerance, and cost efficiency from source to sink. Instrument every job with tracing, structured logs, and Prometheus metrics so every job tells you how it’s doing. Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice. Partner with DevOps to containerize workloads and automate deployments. Automate reconciliation checks, confirm completeness, and replay late or corrected records to maintain pristine datasets. Collaborate with stakeholders to verify data completeness, run automated reconciliation checks, and re-process late or corrected data. Skills Proficient in Rust – comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation. Stream-processing – have built or maintained high-throughput pipelines on NATS (ideal) or Kafka. Deep systems engineering skills – you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation; can instrument code with tracing, metrics, and logs. 
ClickHouse (or similar OLAP) – able to design MergeTree tables, reason about partitions / order-by keys, and optimise bulk inserts. Cloud– have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups. Nice-to-have – exposure to blockchain or high-volume financial data streams. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 day ago
1.0 years
25 Lacs
Amritsar, Punjab, India
Remote
Experience : 1.00 + years Salary : INR 2500000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Yugen AI) (*Note: This is a requirement for one of Uplers' client - Yugen AI) What do you need for this opportunity? Must have skills required: Rust, Rust programming, AWS, GCP, Kafka, NATS, Grafana/Prometheus, Blockchain Yugen AI is Looking for: We are looking for backend Engineers with 1–3 years of production experience shipping and supporting backend code. You will be a part of our team owning the real-time ingestion and analytics layer that powers customer-facing dashboards, trading tools and research. Responsibilities Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets. Guarantee end-to-end reliability by owning performance, fault-tolerance, and cost efficiency from source to sink. Instrument every job with tracing, structured logs, and Prometheus metrics so every job tells you how it’s doing. Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice. Partner with DevOps to containerize workloads and automate deployments. Automate reconciliation checks, confirm completeness, and replay late or corrected records to maintain pristine datasets. Collaborate with stakeholders to verify data completeness, run automated reconciliation checks, and re-process late or corrected data. Skills Proficient in Rust – comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation. Stream-processing – have built or maintained high-throughput pipelines on NATS (ideal) or Kafka. Deep systems engineering skills – you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation; can instrument code with tracing, metrics, and logs. 
ClickHouse (or similar OLAP) – able to design MergeTree tables, reason about partitions / order-by keys, and optimise bulk inserts. Cloud– have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups. Nice-to-have – exposure to blockchain or high-volume financial data streams. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 day ago
1.0 years
25 Lacs
Jaipur, Rajasthan, India
Remote
Experience : 1.00 + years Salary : INR 2500000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Yugen AI) (*Note: This is a requirement for one of Uplers' client - Yugen AI) What do you need for this opportunity? Must have skills required: Rust, Rust programming, AWS, GCP, Kafka, NATS, Grafana/Prometheus, Blockchain Yugen AI is Looking for: We are looking for backend Engineers with 1–3 years of production experience shipping and supporting backend code. You will be a part of our team owning the real-time ingestion and analytics layer that powers customer-facing dashboards, trading tools and research. Responsibilities Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets. Guarantee end-to-end reliability by owning performance, fault-tolerance, and cost efficiency from source to sink. Instrument every job with tracing, structured logs, and Prometheus metrics so every job tells you how it’s doing. Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice. Partner with DevOps to containerize workloads and automate deployments. Automate reconciliation checks, confirm completeness, and replay late or corrected records to maintain pristine datasets. Collaborate with stakeholders to verify data completeness, run automated reconciliation checks, and re-process late or corrected data. Skills Proficient in Rust – comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation. Stream-processing – have built or maintained high-throughput pipelines on NATS (ideal) or Kafka. Deep systems engineering skills – you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation; can instrument code with tracing, metrics, and logs. 
ClickHouse (or similar OLAP) – able to design MergeTree tables, reason about partitions / order-by keys, and optimise bulk inserts. Cloud– have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups. Nice-to-have – exposure to blockchain or high-volume financial data streams. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 day ago
1.0 years
25 Lacs
Greater Lucknow Area
Remote
Experience : 1.00 + years Salary : INR 2500000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Yugen AI) (*Note: This is a requirement for one of Uplers' client - Yugen AI) What do you need for this opportunity? Must have skills required: Rust, Rust programming, AWS, GCP, Kafka, NATS, Grafana/Prometheus, Blockchain Yugen AI is Looking for: We are looking for backend Engineers with 1–3 years of production experience shipping and supporting backend code. You will be a part of our team owning the real-time ingestion and analytics layer that powers customer-facing dashboards, trading tools and research. Responsibilities Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets. Guarantee end-to-end reliability by owning performance, fault-tolerance, and cost efficiency from source to sink. Instrument every job with tracing, structured logs, and Prometheus metrics so every job tells you how it’s doing. Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice. Partner with DevOps to containerize workloads and automate deployments. Automate reconciliation checks, confirm completeness, and replay late or corrected records to maintain pristine datasets. Collaborate with stakeholders to verify data completeness, run automated reconciliation checks, and re-process late or corrected data. Skills Proficient in Rust – comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation. Stream-processing – have built or maintained high-throughput pipelines on NATS (ideal) or Kafka. Deep systems engineering skills – you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation; can instrument code with tracing, metrics, and logs. 
ClickHouse (or similar OLAP) – able to design MergeTree tables, reason about partitions / order-by keys, and optimise bulk inserts. Cloud– have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups. Nice-to-have – exposure to blockchain or high-volume financial data streams. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 day ago
1.0 years
25 Lacs
Thane, Maharashtra, India
Remote
Experience : 1.00 + years Salary : INR 2500000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Yugen AI) (*Note: This is a requirement for one of Uplers' client - Yugen AI) What do you need for this opportunity? Must have skills required: Rust, Rust programming, AWS, GCP, Kafka, NATS, Grafana/Prometheus, Blockchain Yugen AI is Looking for: We are looking for backend Engineers with 1–3 years of production experience shipping and supporting backend code. You will be a part of our team owning the real-time ingestion and analytics layer that powers customer-facing dashboards, trading tools and research. Responsibilities Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets. Guarantee end-to-end reliability by owning performance, fault-tolerance, and cost efficiency from source to sink. Instrument every job with tracing, structured logs, and Prometheus metrics so every job tells you how it’s doing. Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice. Partner with DevOps to containerize workloads and automate deployments. Automate reconciliation checks, confirm completeness, and replay late or corrected records to maintain pristine datasets. Collaborate with stakeholders to verify data completeness, run automated reconciliation checks, and re-process late or corrected data. Skills Proficient in Rust – comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation. Stream-processing – have built or maintained high-throughput pipelines on NATS (ideal) or Kafka. Deep systems engineering skills – you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation; can instrument code with tracing, metrics, and logs. 
ClickHouse (or similar OLAP) – able to design MergeTree tables, reason about partitions / order-by keys, and optimise bulk inserts. Cloud– have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups. Nice-to-have – exposure to blockchain or high-volume financial data streams. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
(The same Yugen AI opening is also listed, with identical text, for India (Remote), Nashik, Kanpur, and Nagpur.)
Posted 1 day ago
0.0 years
0 Lacs
Chandigarh, Chandigarh
On-site
Location: Onsite in Chandigarh

About the Role
We are looking for a backend-focused Full Stack Engineer to join our engineering team. This role emphasizes backend architecture, API design, and systems integration, while retaining the ability to contribute to frontend work as needed. You'll play a key role in developing secure, scalable infrastructure with integrated AI/NLP components, audit logging, and compliance features.

Key Responsibilities
Design and build scalable backend systems using Rust or Python (FastAPI) and Java
Develop and document RESTful APIs aligned with OpenAPI specifications
Integrate advanced AI/NLP components such as LLaMA 3, LangChain, spaCy, and NLTK
Collaborate on data processing pipelines and GPU-accelerated workloads using NVIDIA H100s
Implement robust audit logging and compliance tracking tools
Contribute to frontend development as needed using React.js and TypeScript

Required Skills & Qualifications
5+ years of experience in backend development with Rust, Python (FastAPI), and Java
Strong understanding of RESTful APIs, OpenAPI standards, and secure API design
Experience integrating AI/NLP tools into backend systems
Solid foundation in system architecture, performance optimization, and secure coding practices
Exposure to frontend technologies: React.js, TypeScript, and UI frameworks like Material-UI / Ant Design
Experience with data visualization libraries such as D3.js, Chart.js, or Leaflet.js

Preferred Qualifications
Experience with GPU-based AI workflows (e.g., NVIDIA H100s)
Familiarity with LangChain, LLM-based frameworks, or RAG pipelines
Knowledge of compliance systems, audit trail design, or security-first development

Why Join Us
Build with Purpose: Work on impactful, high-scale products that solve real problems using cutting-edge technologies.
Tech-First Culture: Join a team where engineering is at the core; we prioritize clean code, scalability, automation, and continuous learning.
Freedom to Innovate: You'll have ownership from day one, with room to experiment, influence architecture, and bring your ideas to life.
Collaborate with the Best: Work alongside passionate engineers, product thinkers, and designers who value clarity, speed, and technical excellence.

Paladin Tech is an equal opportunity employer. We are committed to creating an inclusive and diverse workplace and welcome candidates of all backgrounds and identities.

Job Types: Full-time, Permanent
Schedule: Day shift
Work Location: In person
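The role above centers on RESTful API design. As a toy illustration only (not the employer's stack, which uses frameworks like FastAPI or the Rust web ecosystem), here is a self-contained, std-only Rust sketch that answers one HTTP request with a hard-coded JSON body over a raw TCP socket; the `/status` route and `fetch_status` helper are invented for the demo.

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// Serve exactly one request with a canned JSON body, then exit.
fn serve_once(listener: TcpListener) {
    let (mut stream, _) = listener.accept().unwrap();
    let mut buf = [0u8; 1024];
    let _ = stream.read(&mut buf); // read (and ignore) the request line + headers
    let body = r#"{"status":"ok"}"#;
    let resp = format!(
        "HTTP/1.1 200 OK\r\nContent-Type: application/json\r\nContent-Length: {}\r\n\r\n{}",
        body.len(),
        body
    );
    stream.write_all(resp.as_bytes()).unwrap();
    // stream dropped here, closing the connection
}

// Spin up the one-shot server on an ephemeral port and fetch its response.
fn fetch_status() -> String {
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();
    let server = thread::spawn(move || serve_once(listener));

    let mut stream = TcpStream::connect(addr).unwrap();
    stream
        .write_all(b"GET /status HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")
        .unwrap();
    let mut response = String::new();
    stream.read_to_string(&mut response).unwrap(); // returns at EOF, i.e. server close
    server.join().unwrap();
    response
}

fn main() {
    let response = fetch_status();
    assert!(response.starts_with("HTTP/1.1 200 OK"));
    assert!(response.ends_with(r#"{"status":"ok"}"#));
    println!("{response}");
}
```

A framework handles routing, parsing, and keep-alive for you; the point of the sketch is only to show what a minimal request/response cycle looks like on the wire.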
Posted 1 day ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Viva Learning is on a mission to empower employees with personalized, integrated learning experiences. As part of our continued investment in secure and scalable learning solutions, we are seeking a Software Engineer II to join our team. This role will focus on strengthening our security posture across data pipelines, telemetry systems, and compliance workflows, especially in response to evolving SFI (Secure Future Initiative) requirements and internal security reviews.

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees, we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Responsibilities
Implement and drive security improvements across Viva Learning's data export and telemetry systems, ensuring compliance with Microsoft's internal security standards and external regulatory requirements
Collaborate with engineering and PM teams to address security consult feedback, including remediation of identified gaps and implementation of best practices
Own the security review lifecycle for new features and infrastructure changes, including threat modeling, secure design reviews, and privacy assessments
Develop and maintain secure data handling processes
Partner with stakeholders across engineering, compliance, and privacy to ensure timely delivery of SFI wave asks and audit readiness
Contribute to the development of automation and tooling to streamline security validation and reporting

Qualifications
Required Qualifications:
5+ years of experience in identifying security vulnerabilities, software development lifecycle, large-scale computing, modeling, cyber security, and anomaly detection
5+ years of experience with coding or scripting in languages such as C#, Python, C++, Go, PowerShell, .NET, Rust, or other comparable programming languages
Strong understanding of identity and access management concepts, including OAuth, Entra applications, authentication and authorization flows, and service principal configurations
Good understanding of secure software development practices, including threat modeling, secure coding, and vulnerability remediation
Knowledge of data governance, privacy regulations (e.g., GDPR), and secure data export practices
Experience with cloud platforms (preferably Azure), data pipelines, and telemetry systems
Familiarity with Microsoft's internal security and compliance frameworks (e.g., SDL, SFI) is a plus
Excellent collaboration and communication skills, with a track record of working across cross-functional teams

Preferred Qualifications:
Experience working on enterprise SaaS products or learning platforms
Proficiency in scripting or automation for security validation (e.g., PowerShell, Python)

#DPG #EXP #Viva

Microsoft is an equal opportunity employer.
Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
Posted 1 day ago
(The same Security Software Engineer II listing appears again with additional posting details: date posted Jun 26, 2025; job number 1835462; work site Microsoft on-site only; travel 0-25%; role type Individual Contributor; profession Software Engineering; employment type Full-Time. It also lists Microsoft's standard benefits: industry-leading healthcare, educational resources, discounts on products and services, savings and investments, maternity and paternity leave, generous time away, giving programs, and opportunities to network and connect.)
5.0 - 7.0 years
4 - 8 Lacs
Gurgaon
On-site
MongoDB's mission is to empower innovators to create, transform, and disrupt industries by unleashing the power of software and data. We enable organizations of all sizes to easily build, scale, and run modern applications by helping them modernize legacy workloads, embrace innovation, and unleash AI. Our industry-leading developer data platform, MongoDB Atlas, is the only globally distributed, multi-cloud database and is available in more than 115 regions across AWS, Google Cloud, and Microsoft Azure. Atlas allows customers to build and run applications anywhere: on premises, or across cloud providers. With offices worldwide and over 175,000 new developers signing up to use MongoDB every month, it's no wonder that leading organizations, like Samsung and Toyota, trust MongoDB to build next-generation, AI-powered applications.

MongoDB seeks an experienced Lead Engineer to level up TechOps and launch a new Support and Development team. The Support and Development team will work alongside the Internal Engineering department, building enterprise-grade software to enable coworkers to become more effective. As a Support and Development Lead Engineer, you'll be both a contributing member and manager of a team of Support and Development engineers. You'll be resolving complex technical issues escalated from Level 1 Support and actively contributing to the codebases of the platforms you support to fix those support issues. In addition, you'll be planning out the work for, and giving guidance to, the engineers on your team. We are looking to speak to candidates who are based in Gurugram for our hybrid working model.

Our ideal candidate:
Has 5-7 years of experience building software professionally
Has 1-3 years managing software engineers
Has experience with a modern programming language (Python, Go, Rust, etc.)
Has excellent organizational and project management skills
Cares deeply about coaching and developing teammates
Is collaborative, detail-oriented, and passionate about developing usable software

Bonus Round:
Experience working as a Support Agent
Experience building full-stack applications, from front-end UIs to backend API handlers to DB migrations
Knowledge of any of the following technologies: Next.js, FastAPI, React

Position Expectations:
Manage a team of 2-3 Support and Development Engineers
Facilitate planning, execution, and delivery of work for your team
Balance engineering work with your managerial work
Collaborate with other leads to effectively prioritize your team for maximum impact
Write tooling to diagnose and solve technical issues more effectively
Fix bugs and build features for supported platforms
Be on call during a 12-hour shift (8am - 8pm IST) for the most critical support issues

Success Measures:
In three months, you will have established positive relationships with your direct reports, giving them guidance on how to effectively get their jobs done and level up their performance.
In three months, you'll enable your team to close 75% of all escalated support issues. In six months, you'll grow that number to 90%.
In six months, you'll be committing code to solve 75% of all actionable bugs.

To drive the personal growth and business impact of our employees, we're committed to developing a supportive and enriching culture for everyone. From employee affinity groups, to fertility assistance and a generous parental leave policy, we value our employees' wellbeing and want to support them along every step of their professional and personal journeys. Learn more about what it's like to work at MongoDB, and help us make an impact on the world!
MongoDB is committed to providing any necessary accommodations for individuals with disabilities within our application and interview process. To request an accommodation due to a disability, please inform your recruiter. MongoDB is an equal opportunities employer. Requisition ID 2263178761
Posted 1 day ago
3.0 years
0 Lacs
India
Remote
🚀 We’re Hiring: Senior Software Engineer (Open Source + LLM Evaluation)
📍 Remote | Immediate Joiners Preferred

Are you passionate about open-source software and the emerging world of LLMs (Large Language Models)? Do you thrive working with real-world, high-impact codebases? We're looking for someone like you! Join us as a Senior Software Engineer to lead hands-on evaluations of LLM capabilities across popular open-source projects (think: 5K+ stars on GitHub!). In this role, you'll triage issues, assess code quality, run and test production-grade projects, and help shape the future of AI-assisted software development.

🔧 What You'll Do:
Dive deep into top-tier GitHub repositories and triage real-world issues
Set up environments using Docker and modern tooling
Evaluate test coverage, code quality, and LLM performance
Collaborate with researchers to identify challenging codebases
Optionally mentor junior engineers and lead technical efforts

✅ What We're Looking For:
3+ years of software engineering experience
Proven GitHub contributions to repos with 5,000+ stars
Strong hands-on experience in any of: Python, Go, JavaScript, Rust, C/C++, Java, C#, Ruby
Skilled in Git, Docker, and dev pipeline setup
Comfortable navigating large, complex open-source codebases

🌟 Bonus Points:
Experience with LLM evaluation or ML/AI research
Worked on developer tools or automation agents
Open to mentoring or leading a small team

This is more than just a job; it's a chance to push the boundaries of AI + open source from anywhere in the world.
📩 Apply now or tag someone who'd be a great fit!
#SoftwareEngineering #OpenSourceJobs #LLM #RemoteWork #AIResearch #HiringNow #Python #GoLang #GitHub #DeveloperTools #MachineLearning #TechJobs
Posted 1 day ago
3.0 - 5.0 years
10 - 12 Lacs
Pune
Work from Office
Roles and Responsibilities
Design, develop, test, and maintain scalable and secure Rust applications using Actix, Axum, and Tokio.
Collaborate with cross-functional teams to identify requirements and implement solutions that meet business needs.
Troubleshoot issues related to PostgreSQL databases and Redis caches.
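Troubleshooting Redis caches, as this role requires, often comes down to reasoning about key expiry. As a hedged, std-only Rust sketch (a toy in-process stand-in, emphatically not Redis and not this employer's code), here is a minimal TTL cache with lazy eviction on read:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Toy cache: each entry carries an absolute expiry instant; stale entries
// are evicted lazily the next time they are read, like Redis's passive expiry.
struct TtlCache {
    entries: HashMap<String, (String, Instant)>,
    ttl: Duration,
}

impl TtlCache {
    fn new(ttl: Duration) -> Self {
        Self { entries: HashMap::new(), ttl }
    }

    fn set(&mut self, key: &str, value: &str) {
        let expires = Instant::now() + self.ttl;
        self.entries.insert(key.to_string(), (value.to_string(), expires));
    }

    fn get(&mut self, key: &str) -> Option<String> {
        let expired = match self.entries.get(key) {
            Some((_, expires)) => Instant::now() >= *expires,
            None => return None,
        };
        if expired {
            self.entries.remove(key); // lazily evict the stale entry
            None
        } else {
            self.entries.get(key).map(|(value, _)| value.clone())
        }
    }
}

fn main() {
    let mut cache = TtlCache::new(Duration::from_millis(50));
    cache.set("user:42", "alice");
    assert_eq!(cache.get("user:42").as_deref(), Some("alice"));
    std::thread::sleep(Duration::from_millis(80));
    assert_eq!(cache.get("user:42"), None); // expired after the TTL
    println!("ttl cache behaves as expected");
}
```

Redis also runs an active expiry cycle in the background; the lazy-on-read model above is the half of the behaviour that most often explains "why is this stale key still counted in memory" questions.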
Posted 1 day ago
3.0 years
0 Lacs
India
On-site
About SquareX
SquareX is a fast-growing browser security startup that helps enterprises detect, mitigate, and hunt web-based threats against their users in real time. Our mission is to secure the internet for everyone, making our services invaluable to clients worldwide. We are looking for a dedicated and motivated Frontend Developer to join our engineering team and contribute to developing innovative product features.

Responsibilities:
Build SquareX's browser extensions and web applications for various platforms with an easy-to-use interface and light compute overhead
Build user and admin dashboards for various product interfaces
Apply technical knowledge and problem-solving skills to build innovative solutions for complex workflows
Strive for constant improvement in code quality, maintainability, and performance
Participate in, or lead, design reviews with peers and stakeholders to decide among available technologies
Review code developed by other developers and provide feedback to ensure best practices (e.g., style guidelines, checking code in, accuracy, testability, and efficiency)
Ensure engineering best practices are followed, including writing comprehensive test cases
Contribute to existing documentation or educational content and adapt content based on product/program updates and user feedback
Triage product or system issues and debug/track/resolve them by analyzing the sources of issues and their impact on hardware, network, or service operations and quality
Support engineering operations, including being on call for production support when necessary
Collaborate effectively with the team, while being a good communicator (both verbal and written)
Document and share important aspects of all engineering decisions being made

Qualifications:
Must have strong engineering skills and foundations, including problem-solving, coding, and debugging
Must have expertise in core JavaScript, with at least 3 years of experience developing with it
Must be proficient in Rust, TypeScript, HTML5, and CSS3 for building large-scale applications
Must have experience in browser extension/plugin development (Google Chrome, Mozilla Firefox)
Must have familiarity with the browser extension security model and architecture
Must have experience creating draggable, customizable flowcharts for workflows using React Flow
Must have expertise in Tailwind UI and integrating it with React Flow
Must have experience writing CI/CD pipelines for deploying web pages over AWS CloudFront and S3
Must have worked on building customizable UI user journeys where configuration and what to show are driven by APIs
Must have worked on user access management with feature-level policy in enterprise dashboards
Must have worked on iframe feature policies
Must have expertise in CSP (Content Security Policy)
Must have worked with placeholder-replacement-based templating engines and generated reports with them
Must know web application security risks and vulnerabilities
Should be passionate about building rich and innovative user experiences

Cost to Company: 32 Lakhs to 1 Cr
We thank all applicants for their interest, but only those selected for an interview will be contacted.
Posted 1 day ago
1.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Us
DATOMS is an IoT software platform that streamlines asset management and operations for equipment manufacturers, leasing and rental companies, and enterprises, utilising machine learning, artificial intelligence, and the Internet of Things. Our scalable solution can be customised to meet the unique needs of each client and is trusted by top companies around the globe. We are looking for passionate problem solvers who are interested in creating new technology from scratch. "Hardware is hard", but we believe dedication and a craving for learning new things will help solve some of the biggest problems. The Embedded Firmware Developer will work on Embedded Linux, Android, and various embedded operating systems to write drivers that simplify the data-acquisition problem across various machine categories. The role also demands a fair understanding of working with various microprocessors and microcontrollers. This is a full-time, on-site role located in Bengaluru.

Responsibilities
Develop, design, and implement embedded applications and drivers for various machine types and protocols.
Design and build test cases and processes for firmware.
Prepare appropriate documentation as required by internal product development processes.
Conduct and participate in design, code, and test reviews and inspections, assessing feasibility, efficacy, and compliance with functional and regulatory standards.
Collaborate with distributed, cross-functional teams to ensure products meet quality, performance, scalability, reliability, and schedule goals.
Conduct and participate in reviews and inspections for all elements of the firmware lifecycle to ensure that our code quality and customer satisfaction goals are achieved.
Qualifications
B.Tech/M.Tech (Computer Science, Electronics and Electrical stream)
1-3 years of experience in firmware development or a related field

Skills
Adequate knowledge of reading schematics and data sheets for components; ability to understand electrical schematics and work closely with the Embedded team.
Basic knowledge of the software life cycle, algorithms, and data structures.
Coding experience in C and C++ is a must, whereas experience in Python and Rust is good to have.
Excellent knowledge of RTOS, Embedded Linux or Android OS, and the network stack.
Hands-on experience working with GSM/GPRS/4G, Wi-Fi, and Ethernet modes of connection.
Conceptual clarity on TCP, MQTT, and HTTP protocols.
Hands-on experience and knowledge of interfaces like UART, SPI, I2C, CAN, MODBUS, TCP/IP, USB, and Bluetooth.
Experience with modules like Wi-Fi, BLE, LoRaWAN, ZigBee, RF, etc.
Extensive experience with microcontrollers/microprocessors (e.g., ESP32, ARM Cortex-M, STM chips, ATmega chipsets).
Familiarity with software configuration management, debugging, and peer review tools (Git, SVN).
History of driving project execution and timely delivery while ensuring a quality focus.
Know-how in writing and interfacing with device drivers.
Knowledge of Agile development processes and philosophies.
Strong documentation and communication skills to collaborate effectively with other members of the team.
Know-how in using generative AI tools in day-to-day activities to streamline tasks.
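As an illustration of the MODBUS experience listed above: Modbus RTU appends a CRC-16/MODBUS checksum to every frame. Below is the standard textbook routine, written in Rust (one of the listed languages) as a self-contained sketch; it is generic reference code, not DATOMS firmware.

```rust
// CRC-16/MODBUS: init 0xFFFF, reflected polynomial 0xA001.
// This checksum is appended (low byte first) to Modbus RTU frames.
fn crc16_modbus(data: &[u8]) -> u16 {
    let mut crc: u16 = 0xFFFF;
    for &byte in data {
        crc ^= byte as u16;
        for _ in 0..8 {
            if crc & 1 != 0 {
                crc = (crc >> 1) ^ 0xA001; // shift out LSB, apply polynomial
            } else {
                crc >>= 1;
            }
        }
    }
    crc
}

fn main() {
    // "123456789" is the standard check input used in CRC catalogs;
    // for CRC-16/MODBUS the expected result is 0x4b37.
    println!("{:#06x}", crc16_modbus(b"123456789"));
}
```

The bit-at-a-time loop trades speed for clarity; production firmware typically uses a 256-entry lookup table instead.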
Posted 1 day ago
10.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Category Engineering Experience Manager Primary Address Bangalore, Karnataka Overview Voyager (94001), India, Bangalore, Karnataka Lead Software Engineer, Data Management - Capital One Software Ever since our first credit card customer in 1994, Capital One has recognized that technology and data can enable even large companies to be innovative and personalized. As one of the first large enterprises to go all-in on the public cloud, Capital One needed to build cloud and data management tools that didn’t exist in the marketplace to enable us to operate at scale in the cloud. And in 2022, we publicly announced Capital One Software and brought our first B2B software solution, Slingshot, to market. Building on Capital One’s pioneering adoption of modern cloud and data capabilities, Capital One Software is helping accelerate the data management journey at scale for businesses operating in the cloud. If you think of the kind of challenges that companies face – things like data publishing, data consumption, data governance, and infrastructure management – we’ve built tools to address these various needs along the way. Capital One Software will continue to explore where we can bring our solutions to market to help other businesses address these same needs going forward. We are seeking top tier talent to join our pioneering team and propel us towards our destination. You will be joining a team of innovative product, tech, and design leaders that tirelessly seek to question the status quo. As a Lead Software Engineer, you’ll have the opportunity to be on the forefront of building this business and bring these tools to market. 
As a Lead Software Engineer - Data Management, you will:
Help build innovative products and solutions for problems in the Data Management domain
Maintain knowledge of industry innovations, trends, and practices to curate a continual stream of incubated projects and create rapid product prototypes
Participate in technology events to support brand awareness of the organization and to attract top talent.

Basic Qualifications
Bachelor's Degree in Computer Science or a related field
At least 8 years of professional software development experience (internship experience does not apply)
At least 3 years of experience building software solutions to problems in one of the Data Management areas listed below:
Data Catalog / Metadata Store
Access Control / Policy Enforcement
Data Governance
Data Lineage
Data Monitoring and Alerting
Data Scanning and Protection
At least 3 years of experience building software using at least 1 of the following: Golang, Java, Python, Rust, C++
At least 3 years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud)

Preferred Qualifications
Master's Degree in Computer Science or a related field
At least 10 years of professional software development experience (internship experience does not apply)
Experience building a commercial Data Management product from the ground up
Experience supporting a commercial Data Management product in the cloud with Enterprise clients

At this time, Capital One will not sponsor a new applicant for employment authorization for this position. No agencies please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace.
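As a rough illustration of the Access Control / Policy Enforcement area named above, here is a hedged sketch in Rust (one of the listed languages) of a role-based policy check. The `Policy` type and the role/action names are hypothetical examples, not part of any Capital One product.

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical role-based access policy: each role maps to the set of
// actions it may perform. Names are illustrative only.
struct Policy {
    grants: HashMap<String, HashSet<String>>,
}

impl Policy {
    fn new() -> Self {
        Policy { grants: HashMap::new() }
    }

    // Record that `role` may perform `action`.
    fn grant(&mut self, role: &str, action: &str) {
        self.grants
            .entry(role.to_string())
            .or_default()
            .insert(action.to_string());
    }

    // Deny-by-default check: unknown roles and ungranted actions fail.
    fn is_allowed(&self, role: &str, action: &str) -> bool {
        self.grants.get(role).map_or(false, |a| a.contains(action))
    }
}

fn main() {
    let mut policy = Policy::new();
    policy.grant("analyst", "read");
    policy.grant("steward", "read");
    policy.grant("steward", "update-lineage");

    assert!(policy.is_allowed("analyst", "read"));
    assert!(!policy.is_allowed("analyst", "update-lineage"));
    println!("policy checks passed");
}
```

Real enforcement layers add resource scoping, attribute conditions, and audit logging on top of this kind of deny-by-default core.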
Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections 4901-4920; New York City's Fair Chance Act; Philadelphia's Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries. If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1-800-304-9102 or via email at RecruitingAccommodation@capitalone.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations. For technical support or questions about Capital One's recruiting process, please send an email to Careers@capitalone.com. Capital One does not provide, endorse nor guarantee and is not liable for third-party products, services, educational tools or other information available through this site. Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).

How We Hire
We take finding great coworkers pretty seriously.
Step 1: Apply. It only takes a few minutes to complete our application and assessment.
Step 2: Screen and Schedule. If your application is a good match, you'll hear from one of our recruiters to set up a screening interview.
Step 3: Interview(s). Now's your chance to learn about the job, show us who you are, share why you would be a great addition to the team, and determine if Capital One is the place for you.
Step 4: Decision. The team will discuss; if it's a good fit for us and you, we'll make it official!

How to Pick the Perfect Career Opportunity
Overwhelmed by a tough career choice? Read these tips from Devon Rollins, Senior Director of Cyber Intelligence, to help you accept the right offer with confidence.

Your wellbeing is our priority
Our benefits and total compensation package are designed for the whole person, caring for both you and your family.
Healthy Body, Healthy Mind: You have options, and we have the tools to help you decide which health plans best fit your needs.
Save Money, Make Money: Secure your present, plan for your future, and reduce expenses along the way.
Time, Family and Advice: Options for your time, opportunities for your family, and advice along the way. It's time to BeWell.

Career Journey
Here's how the team fits together. We're big on growth and knowing who and how coworkers can best support you.
Posted 1 day ago
5.0 years
0 Lacs
Hyderabad, Telangana
On-site
Security Software Engineer II
Hyderabad, Telangana, India
Date posted: Jun 26, 2025
Job number: 1835462
Work site: Microsoft on-site only
Travel: 0-25%
Role type: Individual Contributor
Profession: Software Engineering
Discipline: Software Engineering
Employment type: Full-Time

Overview
Viva Learning is on a mission to empower employees with personalized, integrated learning experiences. As part of our continued investment in secure and scalable learning solutions, we are seeking a Software Engineer II to join our team. This role will focus on strengthening our security posture across data pipelines, telemetry systems, and compliance workflows, especially in response to evolving SFI (Secure Future Initiative) requirements and internal security reviews. Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees, we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Qualifications
Required Qualifications:
5+ years of experience in identifying security vulnerabilities, the software development lifecycle, large-scale computing, modeling, cyber security, and anomaly detection
5+ years of experience coding or scripting in languages such as C#, Python, C++, Go, PowerShell, .NET, Rust, or other comparable programming languages
Strong understanding of identity and access management concepts, including OAuth, Entra applications, authentication and authorization flows, and service principal configurations
Good understanding of secure software development practices, including threat modeling, secure coding, and vulnerability remediation.
Knowledge of data governance, privacy regulations (e.g., GDPR), and secure data export practices
Experience with cloud platforms (preferably Azure), data pipelines, and telemetry systems
Familiarity with Microsoft's internal security and compliance frameworks (e.g., SDL, SFI) is a plus
Excellent collaboration and communication skills, with a track record of working across cross-functional teams

Preferred Qualifications:
Experience working on enterprise SaaS products or learning platforms
Proficiency in scripting or automation for security validation (e.g., PowerShell, Python)
#DPG #EXP #Viva

Responsibilities
Implement and drive security improvements across Viva Learning's data export and telemetry systems, ensuring compliance with Microsoft's internal security standards and external regulatory requirements
Collaborate with engineering and PM teams to address security consult feedback, including remediation of identified gaps and implementation of best practices
Own the security review lifecycle for new features and infrastructure changes, including threat modeling, secure design reviews, and privacy assessments
Develop and maintain secure data handling processes
Partner with stakeholders across engineering, compliance, and privacy to ensure timely delivery of SFI wave asks and audit readiness
Contribute to the development of automation and tooling to streamline security validation and reporting

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work.
Industry leading healthcare
Educational resources
Discounts on products and services
Savings and investments
Maternity and paternity leave
Generous time away
Giving programs
Opportunities to network and connect

Microsoft is an equal opportunity employer.
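As an illustration of the secure data handling and privacy (e.g., GDPR) points above, here is a minimal sketch in Rust (one of the listed languages) of redacting an email address before it enters telemetry. The `redact_email` helper is a hypothetical example, not Microsoft's implementation.

```rust
// Hypothetical redaction helper: keep only the first character of the
// local part of an email so telemetry never stores the full identifier.
fn redact_email(email: &str) -> String {
    match email.split_once('@') {
        Some((local, domain)) if !local.is_empty() => {
            // `unwrap` is safe: `local` was just checked to be non-empty.
            let first = local.chars().next().unwrap();
            format!("{}***@{}", first, domain)
        }
        // Malformed input: redact entirely rather than risk a leak.
        _ => "***".to_string(),
    }
}

fn main() {
    // Prints: a***@example.com
    println!("{}", redact_email("alice@example.com"));
}
```

Redact-by-default on malformed input is the conservative choice here: a parsing failure should never cause the raw value to pass through.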
All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
Posted 1 day ago