
4817 Latency Jobs - Page 8

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Overview: TekWissen is a global workforce management provider operating throughout India and many other countries. The client below is a global company with shared ideals and a deep sense of family. From its earliest days as a pioneer of modern transportation, it has sought to make the world a better place – one that benefits lives, communities and the planet.
Job Title: Software Engineer Senior
Location: Chennai
Work Type: Hybrid
Position Description: As part of the client's DP&E Platform Observability team, you'll help build a top-tier monitoring platform focused on latency, traffic, errors, and saturation. You'll design, develop, and maintain a scalable, reliable platform, improving MTTR/MTTX, creating dashboards, and optimizing costs. Experience with large systems, monitoring tools (Prometheus, Grafana, etc.), and cloud platforms (AWS, Azure, GCP) is ideal. The focus is a centralized observability source for data-driven decisions and faster incident response.
Skills Required: Spring Boot, Angular, Cloud Computing
Skills Preferred: Google Cloud Platform - BigQuery, Dataflow, Dataproc, Data Fusion, Terraform, Tekton, Cloud SQL, Airflow, Postgres, PySpark, Python, API
Experience Required: 5+ years of overall experience with proficiency in Java, Angular or any JavaScript technology, with experience in designing and deploying cloud-based data pipelines and microservices using GCP tools like BigQuery, Dataflow, and Dataproc. Ability to leverage best-in-class data platform technologies (Apache Beam, Kafka, ...) to deliver platform features, and to design and orchestrate platform services that deliver data platform capabilities. Service-Oriented Architecture and Microservices: strong understanding of SOA, microservices, and their application within a cloud data platform context; develop robust, scalable services using Java Spring Boot, Python, Angular, and GCP technologies. Full-Stack Development: knowledge of front-end and back-end technologies, enabling collaboration on data access and visualization layers (e.g., React, Node.js); design and develop RESTful APIs for seamless integration across platform services; implement robust unit and functional tests to maintain high standards of test coverage and quality. Database Management: experience with relational (e.g., PostgreSQL, MySQL) and NoSQL databases, as well as columnar databases like BigQuery. Data Governance and Security: understanding of data governance frameworks and implementing RBAC, encryption, and data masking in cloud environments. CI/CD and Automation: familiarity with CI/CD pipelines, Infrastructure as Code (IaC) tools like Terraform, and automation frameworks; manage code changes with GitHub and troubleshoot and resolve application defects efficiently; ensure adherence to SDLC best practices, independently managing feature design, coding, testing, and production releases. Problem-Solving: strong analytical skills with the ability to troubleshoot complex data platform and microservices issues.
Experience Preferred: GCP Data Engineer, GCP Professional Cloud
Education Required: Bachelor's Degree
TekWissen® Group is an equal opportunity employer supporting workforce diversity.
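The stack above leans on Apache Beam and Dataflow for latency-focused pipelines feeding BigQuery. Purely as an illustrative sketch (not part of the listing), the Beam Python snippet below computes a per-service p95 latency from a handful of in-memory records; the field names and sample values are hypothetical, and a real Dataflow job would read from Pub/Sub or BigQuery instead of Create.

```python
import apache_beam as beam

# Hypothetical latency records; a production job would read these from Pub/Sub or BigQuery.
RECORDS = [
    {"service": "checkout", "latency_ms": 120},
    {"service": "checkout", "latency_ms": 480},
    {"service": "search", "latency_ms": 35},
    {"service": "search", "latency_ms": 60},
]

def p95(latencies):
    """Nearest-rank 95th percentile of a grouped iterable of latencies."""
    xs = sorted(latencies)
    return xs[int(0.95 * (len(xs) - 1))]

with beam.Pipeline() as pipeline:  # DirectRunner locally; DataflowRunner on GCP
    (
        pipeline
        | "Create" >> beam.Create(RECORDS)
        | "KeyByService" >> beam.Map(lambda r: (r["service"], r["latency_ms"]))
        | "Group" >> beam.GroupByKey()
        | "P95" >> beam.Map(lambda kv: (kv[0], p95(kv[1])))
        | "Print" >> beam.Map(print)
    )
```

Locally this runs on the DirectRunner; pointing the same pipeline at Dataflow is a matter of pipeline options rather than code changes.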

Posted 3 days ago

Apply

3.0 years

0 Lacs

India

On-site

Rust Developer (Scalable Systems)
Experience: 3+ years
Location: Ahmedabad, Gujarat
Employment Type: Full-time
Key Responsibilities: Design, develop, and optimize high-performance backend services using Rust, targeting 1000+ orders per second throughput. Implement scalable architectures with load balancing for high availability and minimal latency. Integrate and optimize Redis for caching, pub/sub, and data persistence. Work with messaging services like Kafka and RabbitMQ to ensure reliable, fault-tolerant communication between microservices. Develop and manage real-time systems with WebSockets for bidirectional communication. Write clean, efficient, and well-documented code with unit and integration tests. Collaborate with DevOps for horizontal scaling and efficient resource utilization. Diagnose performance bottlenecks and apply optimizations at the code, database, and network level. Ensure system reliability, fault tolerance, and high availability under heavy loads.
Required Skills & Experience: 3+ years of professional experience with Rust in production-grade systems. Strong expertise in Redis (clustering, pipelines, Lua scripting, performance tuning). Proven experience with Kafka, RabbitMQ, or similar messaging queues. Deep understanding of load balancing, horizontal scaling, and distributed architectures. Experience with real-time data streaming and WebSocket implementations. Knowledge of system-level optimizations, memory management, and concurrency in Rust. Familiarity with high-throughput, low-latency systems and profiling tools. Understanding of cloud-native architectures (AWS, GCP, or Azure) and containerization (Docker/Kubernetes).
Preferred Qualifications: Experience with microservices architecture and service discovery. Knowledge of monitoring & logging tools (Prometheus, Grafana, ELK). Exposure to CI/CD pipelines for Rust-based projects. Experience in security and fault-tolerant design for financial or trading platforms (nice to have).
Job Types: Full-time, Permanent
Experience: Rust Developer: 1 year (Required)
Work Location: In person
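The responsibilities above revolve around Redis caching and pub/sub in front of a low-latency order flow. As a loose illustration only (the role itself is Rust; this sketch uses Python and redis-py for brevity), here is the cache-aside pattern with a pub/sub notification; the key layout and the fetch_order_from_db helper are hypothetical placeholders.

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_order_from_db(order_id: str) -> dict:
    # Placeholder for the real database lookup (source of truth).
    return {"order_id": order_id, "status": "FILLED"}

def get_order(order_id: str, ttl_seconds: int = 30) -> dict:
    """Cache-aside read: serve from Redis when possible, else load and cache briefly."""
    key = f"order:{order_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)            # cache hit
    order = fetch_order_from_db(order_id)    # cache miss: load and populate
    r.setex(key, ttl_seconds, json.dumps(order))
    return order

# Pub/sub: notify downstream consumers that an order changed state.
r.publish("orders.updates", json.dumps({"order_id": "42", "event": "filled"}))
```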

Posted 3 days ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Hiring for a .NET Developer for a product-based company.
Job Requirements: Bachelor's Degree in Computer Science or equivalent experience through higher education. 5+ years of experience managing Windows Server based infrastructure. Experience in large-scale server system operations. Diagnostic and troubleshooting skills (.NET/.NET Core, SQL Server, Angular). Demonstrable skills in utilizing scripting languages like PowerShell. 5+ years of .NET Windows Services, Web Service, and website development and design expertise for high-volume, low-latency processing software utilizing C# and .NET Core. Experience with the software development lifecycle, continuous integration/continuous delivery, and a full understanding of the software repository and deployment tools Azure DevOps and Azure Pipelines. Hands-on experience working with SQL Server 2016+. Experience using Jira.

Posted 3 days ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Zeta
Zeta is a Next-Gen Banking Tech company that empowers banks and fintechs to launch banking products for the future. It was founded by Bhavin Turakhia and Ramki Gaddipati in 2015. Our flagship processing platform - Zeta Tachyon - is the industry's first modern, cloud-native, and fully API-enabled stack that brings together issuance, processing, lending, core banking, fraud & risk, and many more capabilities as a single-vendor stack. 15M+ cards have been issued on our platform globally. Zeta is actively working with the largest banks and fintechs in multiple global markets, transforming customer experience for multi-million card portfolios. Zeta has over 1,700 employees - with over 70% of roles in R&D - across locations in the US, EMEA, and Asia. We raised $280 million at a $1.5 billion valuation from Softbank, Mastercard, and other investors in 2021. Learn more @ www.zeta.tech, careers.zeta.tech, LinkedIn, Twitter.
About The Role
We are currently seeking an accomplished Senior Backend Engineer to join our dynamic team. In this role, you will lead the development of cutting-edge and user-centric applications. Your expertise will play a pivotal role in shaping backend architecture, guiding junior developers, and delivering high-quality, performant code. The Lambda team builds large-scale transaction processing systems that can work with many current and future payment networks. We build applications that help banks realize the value of this new approach early, and we help banks rapidly deliver the value of these applications to their customers.
Responsibilities
Building a highly scalable and secure payments platform. Acting as primary owner of one or more components of the platform and driving innovation in your area of ownership. Working with various product teams, gathering requirements and adding capabilities. Using cutting-edge cryptography to secure payments beyond industry standards. Deriving actionable insights by mining TBs of data. Building low-level infrastructure that aims to push the boundaries of network performance. Reviewing and influencing new and evolving design, architecture, standards, and methods with stability, maintainability, and scale in mind. Identifying patterns and providing solutions to classes of problems. Researching, evaluating, and socializing new tools, technologies, and techniques to improve the value of the system.
Experience And Qualifications
Bachelor's/Master's degree in engineering (computer science, information systems) with 4-7 years of experience building enterprise systems. Worked on one or more large-scale Java applications. Good understanding of the nuances of distributed systems, scalability, and availability. Good knowledge of one or more relational and NoSQL databases and transactions. Shrewd focus on latency and throughput of services. In-depth understanding of concurrency, synchronization, NIO, memory allocation, and GC. Experience with IaaS clouds like AWS/Google Cloud, Azure, OpenStack, etc. Experience working with message brokers and application containers. Great ability to mentor and train other team members.
Skills
Strong experience in Java (large-scale applications). Deep understanding of distributed systems, scalability, and availability. Experience with low-level infrastructure and network performance optimization. Focus on latency, throughput, concurrency, synchronization, NIO, memory allocation, and Garbage Collection (GC). Proficiency with both relational and NoSQL databases. Hands-on experience with IaaS cloud providers such as AWS, Google Cloud, Azure, or OpenStack. Experience with message brokers (e.g., Kafka, RabbitMQ).
Life At Zeta
At Zeta, we want you to grow to be the best version of yourself by unlocking the great potential that lies within you. This is why our core philosophy is 'People Must Grow.' We recognize your aspirations, act as enablers by bringing you the right opportunities, and let you grow as you chase disruptive goals. Life at Zeta is adventurous and exhilarating at the same time. You get to work with some of the best minds in the industry and experience a culture that values diversity of thought. If you want to push boundaries, learn continuously and grow to be the best version of yourself, Zeta is the place to be! Explore life at Zeta.
Zeta is an equal opportunity employer. At Zeta, we are committed to equal employment opportunities regardless of job history, disability, gender identity, religion, race, marital/parental status, or any other special status. We are proud to be an equitable workplace that welcomes individuals from all walks of life if they fit the roles and responsibilities.

Posted 3 days ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Microsoft is a company where passionate innovators come to collaborate, envision what can be, and take their careers further. This is a world of more possibilities, more innovation, more openness, and sky-is-the-limit thinking in a cloud-enabled world. Microsoft's Azure Data engineering team is leading the transformation of analytics in the world of data with products like databases, data integration, big data analytics, messaging & real-time analytics, and business intelligence. The products in our portfolio include Microsoft Fabric, Azure SQL DB, Azure Cosmos DB, Azure PostgreSQL, Azure Data Factory, Azure Synapse Analytics, Azure Service Bus, Azure Event Grid, and Power BI. Our mission is to build the data platform for the age of AI, powering a new class of data-first applications and driving a data culture. Within Azure Data, the databases team builds and maintains Microsoft's operational database systems. We store and manage data in a structured way to enable a multitude of applications across various industries. We are on a journey to enable developer-friendly, mission-critical, AI-enabled operational databases across relational, non-relational, and OSS offerings. Azure Cosmos DB is one of the fastest growing Azure services; it provides a globally distributed, low-latency, massively scalable, multi-model cloud database service and is designed to enable developers to build planet-scale applications. We are hiring a Senior Software Engineer to join the Azure Cosmos DB team, where you will be working on a large-scale distributed operational database. In this role, you will work on distributed systems problems and technologies to help determine the future of our planet-scale database. We do not just value differences or different perspectives. We seek them out and invite them in so we can tap into the collective power of everyone in the company. As a result, our customers are better served.
Responsibilities
Design, implement, and ship distributed database management system offerings that effectively provide customer value in terms of security, performance, reliability, usability, and manageability while ensuring business goals are met. Collaborate effectively with the team, make appropriate systems tradeoffs in design and implementation, and ensure customer success in their use of the product. Embody our culture and values.
Qualifications
Required/Minimum Qualifications: Bachelor's degree in computer science/engineering/related fields or equivalent industry experience. 7+ years of software development experience in building and shipping production software or services with code in languages such as C++, C#, or similar. Good communication skills, both verbal and written. Experience working with large-scale systems / distributed systems and in implementing database management systems preferred.
Other Requirements: Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings: Microsoft Cloud Background Check: this position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter. #azdat #azuredata #cosmosdb
Microsoft is an equal opportunity employer.
Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.

Posted 3 days ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Description and Requirements
Position Summary: We are seeking a forward-thinking and enthusiastic Engineering and Operations Specialist to manage and optimize our MongoDB and Splunk platforms. The ideal candidate will have in-depth experience in at least one of these technologies, with a preference for experience in both.
Job Responsibilities
Work on engineering and operational tasks for the MongoDB and Splunk platforms, ensuring high availability and stability. Continuously improve the stability of the environments, leveraging automation, self-healing mechanisms, and AIOps. Develop and implement automation using technologies such as Ansible, Python, and Shell. Manage CI/CD deployments and maintain code repositories. Utilize Infrastructure/Configuration as Code practices to streamline processes. Work closely with development teams to integrate database and observability/logging tools effectively. Manage design, distribution, performance, replication, security, availability, and access requirements for large and complex MongoDB databases (versions 6.0, 7.0, 8.0 and above) on Linux, both on-premises and cloud-based. Design and develop the physical layers of databases to support various application needs; implement backup, recovery, archiving, conversion strategies, and performance tuning; manage job scheduling, application releases, and database changes, and implement database and infrastructure security best practices to meet compliance requirements. Monitor and tune MongoDB and Splunk clusters for optimal performance, identifying bottlenecks and troubleshooting issues. Analyze database queries, indexing, and storage to ensure minimal latency and maximum throughput. The Senior Splunk System Administrator will build, maintain, and standardize the Splunk platform, including forwarder deployment, configuration, dashboards, and maintenance across Linux OS. Able to debug production issues by analyzing logs directly and using tools like Splunk. Work in an Agile model with an understanding of Agile concepts and Azure DevOps. Learn new technologies based on demand and help team members by coaching and assisting.
Education, Technical Skills & Other Critical Requirements
Education: Bachelor's degree in Computer Science, Information Systems, or another related field with 7+ years of IT and infrastructure engineering work experience. MongoDB Certified DBA or Splunk Certified Administrator is a plus. Experience with cloud platforms like AWS, Azure, or Google Cloud.
Experience (in years): 7+ years total IT experience, with 4+ years of relevant experience in MongoDB and working experience as a Splunk administrator.
Technical Skills: In-depth experience with either MongoDB or Splunk, with a preference for exposure to both. Strong enthusiasm for learning and adopting new technologies. Experience with automation tools like Ansible, Python, and Shell. Proficiency in CI/CD deployments, DevOps practices, and managing code repositories. Knowledge of Infrastructure/Configuration as Code principles. Developer experience is highly desired. Data engineering skills are a plus. Experience with other DB technologies and observability tools is a plus. Extensive work experience managing and optimizing MongoDB databases, designing robust schemas, and implementing security best practices, ensuring high availability, data integrity, and performance for mission-critical applications. Working experience in database performance tuning with MongoDB tools and techniques. Management of database elements, including creation, alteration, deletion and copying of schemas, databases, tables, views, indexes, stored procedures, triggers, and declarative integrity constraints. Extensive experience in database backup and recovery strategy design, configuration, and implementation using backup tools (mongodump, mongorestore) and Rubrik. Extensive experience configuring and enforcing SSL/TLS encryption for secure communication between MongoDB nodes. Working experience configuring and maintaining Splunk environments, developing dashboards, and implementing log management solutions to enhance system monitoring and security across Linux OS. Experience with Splunk migration and upgrades on standalone Linux OS and cloud platforms is a plus. Perform application administration for a single security information management system using Splunk. Working knowledge of Splunk Search Processing Language (SPL), architecture, and the various components (indexer, forwarder, search head, deployment server). Extensive experience in both MongoDB and Splunk replication between primary and secondary servers to ensure high availability and fault tolerance. Managed infrastructure security policy per industry best standards by designing, configuring, and implementing privileges and policies using RBAC on both the database and Splunk. Scripting skills and automation experience using DevOps, repos, and Infrastructure as Code. Working experience with containers (AKS and OpenShift) is a plus. Working experience with cloud platforms (Azure, Cosmos DB) is a plus. Strong knowledge of ITSM processes and tools (ServiceNow). Ability to work 24x7 rotational shifts to support the database and Splunk platforms.
Other Critical Requirements: Strong problem-solving abilities and a proactive approach to identifying and resolving issues. Excellent communication and collaboration skills. Ability to work in a fast-paced environment and manage multiple priorities effectively.
About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
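Returning to the query-tuning responsibilities above (analyzing queries and indexing for minimal latency), here is a small, hedged PyMongo illustration, not taken from the role, of that routine: create a compound index, then use explain() to confirm the winning plan is an index scan rather than a collection scan. The connection string, database, and field names are placeholders.

```python
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")   # placeholder connection string
events = client["ops"]["events"]                    # hypothetical database/collection

# Compound index supporting the query below (equality on host, range/sort on ts).
events.create_index([("host", ASCENDING), ("ts", ASCENDING)], name="host_ts_idx")

# explain() shows whether the winning plan uses an IXSCAN instead of a COLLSCAN.
plan = events.find({"host": "app-01"}).sort("ts", ASCENDING).explain()
print(plan["queryPlanner"]["winningPlan"])
```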

Posted 3 days ago

Apply

5.0 years

0 Lacs

Roorkee, Uttarakhand, India

Remote

Company Description
Miratech helps visionaries change the world. We are a global IT services and consulting company that brings together enterprise and start-up innovation. Today, we support digital transformation for some of the world's largest enterprises. By partnering with both large and small players, we stay at the leading edge of technology, remain nimble even as a global leader, and create technology that helps our clients further enhance their business. We are a values-driven organization and our culture of Relentless Performance has enabled over 99% of Miratech's engagements to succeed by meeting or exceeding our scope, schedule, and/or budget objectives since our inception in 1989. Miratech has coverage across 5 continents and operates in over 25 countries around the world. Miratech retains nearly 1000 full-time professionals, and our annual growth rate exceeds 25%.
Job Description
Join us in revolutionizing customer experiences with our client, a global leader in cloud contact center software. We bring the power of cloud innovation to enterprises worldwide, empowering businesses to deliver seamless, personalized, and joyful customer interactions.
About the Team: The Voice Client team plays a critical role in delivering high-quality voice call capabilities for advanced call and contact center solutions. We are dedicated to providing seamless connectivity, crystal-clear audio, and superior reliability to enhance customer engagement. Our work directly influences the experience of millions of users globally, making voice communication smarter, faster, and more resilient.
About the Role: We are seeking a Senior Telecom Engineer who will play a vital role in delivering high-quality voice call capabilities for advanced call and contact center solutions. In this position, you will serve as the technical expert responsible for designing, deploying, and maintaining our clients' global voice infrastructure. The ideal candidate will have deep expertise in VoIP/SIP technologies, telecom carrier connectivity, and Ribbon (Sonus) SBCs, along with strong hands-on experience in troubleshooting complex network issues. Your work will directly impact the experience of millions of users worldwide, making voice communication smarter, faster, and more resilient.
Responsibilities: Collaborate with customer telecom and IT teams to deploy customized solutions and troubleshoot issues. Build and maintain SIP trunk connectivity with customers and carriers, including interop sessions and activations. Provide operational support for the telecom network, analyze incidents, and implement preventive measures. Serve as an escalation point for critical alerts from SBCs and deliver root-cause analyses for outages. Manage telecom service providers and vendors, and oversee hardware/software deployments and new service rollouts. Develop testing plans, create technical documentation, and maintain SOPs for recurring tasks. Mentor junior engineers in troubleshooting and managing complex service issues. Understand product capabilities and limitations, and contribute to continuous improvements.
Qualifications: 5+ years of telecom engineering experience with VoIP/SIP voice applications. Strong knowledge of voice/data communications (SIP, TCP/IP, MPLS), VoIP protocols (H.248, G.711, G.729, WebRTC), and security (TLS, IPSEC, ACLs). High-level knowledge of VoIP principles, protocols and CODECs such as H.248, SIP, G.711, G.729, WebRTC, MPLS, VPN, UDP, RTP, MTP, etc.
Experience with international routing (ITFS, iDID) and telephony design for high availability (99.99% SLA). Hands-on experience with Softswitches, SBCs (Sonus/Ribbon, AudioCodes), SIP proxies, and media servers (AudioCodes IPM-6310, FreeSWITCH). Skilled in Wireshark, Empirix, and RTP stream analysis (MOS, Jitter, Latency, Packet Loss). Ability to design, troubleshoot, and maintain complex global voice networks. Strong organizational skills and experience implementing telecom architecture changes in lab and production. We offer: Culture of Relentless Performance: join an unstoppable technology development team with a 99% project success rate and more than 30% year-over-year revenue growth. Competitive Pay and Benefits: enjoy a comprehensive compensation and benefits package, including health insurance, language courses, and a relocation program. Work From Anywhere Culture: make the most of the flexibility that comes with remote work. Growth Mindset: reap the benefits of a range of professional development opportunities, including certification programs, mentorship and talent investment programs, internal mobility and internship opportunities. Global Impact: collaborate on impactful projects for top global clients and shape the future of industries. Welcoming Multicultural Environment: be a part of a dynamic, global team and thrive in an inclusive and supportive work environment with open communication and regular team-building company social events. Social Sustainability Values: join our sustainable business practices focused on five pillars, including IT education, community empowerment, fair operating practices, environmental sustainability, and gender equality. Miratech is an equal opportunity employer and does not discriminate against any employee or applicant for employment on the basis of race, color, religion, sex, national origin, age, disability, veteran status, sexual orientation, gender identity, or any other protected status under applicable law.
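The qualifications mention RTP stream analysis (MOS, jitter, latency, packet loss). As a rough, hedged sketch of one of those metrics, the Python snippet below computes RFC 3550 interarrival jitter from packet arrival times and RTP timestamps; the sample values assume an 8 kHz G.711-style clock with 20 ms packetization and are purely illustrative.

```python
def update_jitter(jitter: float, prev_transit: float, transit: float) -> float:
    """One RFC 3550 interarrival-jitter update: J += (|D| - J) / 16."""
    d = abs(transit - prev_transit)
    return jitter + (d - jitter) / 16.0

def interarrival_jitter(arrivals, rtp_timestamps, clock_rate=8000):
    """Estimate jitter (in RTP timestamp units) from arrival times in seconds."""
    jitter = 0.0
    prev_transit = None
    for arrival, ts in zip(arrivals, rtp_timestamps):
        transit = arrival * clock_rate - ts   # arrival converted to RTP clock units
        if prev_transit is not None:
            jitter = update_jitter(jitter, prev_transit, transit)
        prev_transit = transit
    return jitter

# Example: packets sent every 20 ms (160 samples at 8 kHz) with slightly uneven arrivals.
arrivals = [0.000, 0.021, 0.039, 0.062]
timestamps = [0, 160, 320, 480]
print(interarrival_jitter(arrivals, timestamps))
```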

Posted 3 days ago

Apply

4.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

About Zeta
Zeta is a Next-Gen Banking Tech company that empowers banks and fintechs to launch banking products for the future. It was founded by Bhavin Turakhia and Ramki Gaddipati in 2015. Our flagship processing platform - Zeta Tachyon - is the industry's first modern, cloud-native, and fully API-enabled stack that brings together issuance, processing, lending, core banking, fraud & risk, and many more capabilities as a single-vendor stack. 15M+ cards have been issued on our platform globally. Zeta is actively working with the largest banks and fintechs in multiple global markets, transforming customer experience for multi-million card portfolios. Zeta has over 1,700 employees - with over 70% of roles in R&D - across locations in the US, EMEA, and Asia. We raised $280 million at a $1.5 billion valuation from Softbank, Mastercard, and other investors in 2021. Learn more @ www.zeta.tech, careers.zeta.tech, LinkedIn, Twitter.
About The Role
We are currently seeking an accomplished Senior Backend Engineer to join our dynamic team. In this role, you will lead the development of cutting-edge and user-centric applications. Your expertise will play a pivotal role in shaping backend architecture, guiding junior developers, and delivering high-quality, performant code. The Lambda team builds large-scale transaction processing systems that can work with many current and future payment networks. We build applications that help banks realize the value of this new approach early, and we help banks rapidly deliver the value of these applications to their customers.
Responsibilities
Building a highly scalable and secure payments platform. Acting as primary owner of one or more components of the platform and driving innovation in your area of ownership. Working with various product teams, gathering requirements and adding capabilities. Using cutting-edge cryptography to secure payments beyond industry standards. Deriving actionable insights by mining TBs of data. Building low-level infrastructure that aims to push the boundaries of network performance. Reviewing and influencing new and evolving design, architecture, standards, and methods with stability, maintainability, and scale in mind. Identifying patterns and providing solutions to classes of problems. Researching, evaluating, and socializing new tools, technologies, and techniques to improve the value of the system.
Experience And Qualifications
Bachelor's/Master's degree in engineering (computer science, information systems) with 4-7 years of experience building enterprise systems. Worked on one or more large-scale Java applications. Good understanding of the nuances of distributed systems, scalability, and availability. Good knowledge of one or more relational and NoSQL databases and transactions. Shrewd focus on latency and throughput of services. In-depth understanding of concurrency, synchronization, NIO, memory allocation, and GC. Experience with IaaS clouds like AWS/Google Cloud, Azure, OpenStack, etc. Experience working with message brokers and application containers. Great ability to mentor and train other team members.
Skills
Strong experience in Java (large-scale applications). Deep understanding of distributed systems, scalability, and availability. Experience with low-level infrastructure and network performance optimization. Focus on latency, throughput, concurrency, synchronization, NIO, memory allocation, and Garbage Collection (GC). Proficiency with both relational and NoSQL databases. Hands-on experience with IaaS cloud providers such as AWS, Google Cloud, Azure, or OpenStack. Experience with message brokers (e.g., Kafka, RabbitMQ).
Life At Zeta
At Zeta, we want you to grow to be the best version of yourself by unlocking the great potential that lies within you. This is why our core philosophy is 'People Must Grow.' We recognize your aspirations, act as enablers by bringing you the right opportunities, and let you grow as you chase disruptive goals. Life at Zeta is adventurous and exhilarating at the same time. You get to work with some of the best minds in the industry and experience a culture that values diversity of thought. If you want to push boundaries, learn continuously and grow to be the best version of yourself, Zeta is the place to be! Explore life at Zeta.
Zeta is an equal opportunity employer. At Zeta, we are committed to equal employment opportunities regardless of job history, disability, gender identity, religion, race, marital/parental status, or any other special status. We are proud to be an equitable workplace that welcomes individuals from all walks of life if they fit the roles and responsibilities.

Posted 3 days ago

Apply

4.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Position Overview We are seeking an experienced Python Backend Engineer to join our team in building high-performance, scalable backend systems for algorithmic trading. The ideal candidate will have strong expertise in developing exchange integrations, optimizing order management systems, and ensuring low-latency execution. Experience with FastAPI and Golang is a strong plus. Responsibilities Design and develop scalable backend systems for real-time trading applications. Build and optimize order management systems with smart order routing capabilities. Integrate multiple exchange APIs (REST, WebSockets, FIX protocol) for seamless connectivity. Develop high-performance execution engines with low-latency trade execution using Python and optionally Golang. Develop APIs using FastAPI for efficient and fast backend service delivery. Implement real-time monitoring, logging, and alerting systems to ensure reliability. Design fault-tolerant and distributed architectures for handling large-scale transactions. Work on message queues (RabbitMQ, Kafka) for efficient data processing. Ensure system security and compliance with financial industry standards. Collaborate with quant researchers and business teams to implement trading logic. Required Technical Skills Strong proficiency in Python (4+ years) with a focus on backend development. Experience building APIs using FastAPI. Working knowledge of Golang is a plus, especially in high-performance and concurrent applications. Expertise in API development and integration using REST, WebSockets, and FIX protocol. Experience with asynchronous programming (asyncio, aiohttp) for high-concurrency applications. Strong knowledge of database systems (MySQL, PostgreSQL, MongoDB, Redis, time-series databases). Proficiency in containerization and orchestration (Docker, Kubernetes, AWS). Experience with message queues (RabbitMQ, Kafka) for real-time data processing. Knowledge of monitoring tools (Prometheus, Grafana, ELK Stack) for system observability. Experience with scalable system design, microservices, and distributed architectures. Location - Mumbai, India Timings - Monday–Friday (11:00 am to 7:00 pm) Skills: rest,mysql,redis,fix protocol,grafana,docker,rabbitmq,mongodb,fastapi,aws,golang,kubernetes,prometheus,websockets,elk stack,asynchronous programming,kafka,python,go (golang),realtime programming,postgresql
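Given the emphasis on FastAPI and asynchronous, low-latency order handling, here is a minimal, hedged sketch of what such an endpoint can look like; the service name, request fields, and route_to_exchange helper are hypothetical placeholders rather than anything from the actual platform.

```python
import asyncio
from typing import Optional

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="order-gateway")   # service name and fields below are illustrative

class OrderRequest(BaseModel):
    symbol: str
    side: str                          # "BUY" or "SELL"
    qty: float
    limit_price: Optional[float] = None

async def route_to_exchange(order: OrderRequest) -> dict:
    """Placeholder for smart order routing / exchange adapters (REST, WebSocket, FIX)."""
    await asyncio.sleep(0)             # yield to the event loop instead of blocking it
    return {"status": "ACCEPTED", "symbol": order.symbol, "qty": order.qty}

@app.post("/orders")
async def place_order(order: OrderRequest) -> dict:
    # Keep the handler fully async so slow I/O never blocks other in-flight requests.
    return await route_to_exchange(order)

@app.get("/health")
async def health() -> dict:
    return {"ok": True}
```

A sketch like this would be served with an ASGI server such as uvicorn (e.g., `uvicorn order_gateway:app`, module name assumed); keeping every handler and downstream call async is what prevents one slow exchange round-trip from stalling other in-flight orders.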

Posted 3 days ago

Apply

1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description
Corporate Treasury lies at the heart of Goldman Sachs, ensuring all the businesses have the appropriate level of funding to conduct their activities, while also optimizing the firm's liquidity and managing its risk and compliance with regulations. Our Corporate Treasury Engineering team is a world leader in developing quantitative techniques and technological solutions that solve complex and commercial business problems. We partner with our firm's treasurer and other members of Corporate Treasury senior leadership to manage the firm's liquidity risk, secured and unsecured funding programs, and the level and composition of consolidated and subsidiary equity capital, and to invest any excess liquidity. An exciting confluence of computer science, finance, and mathematics is being used to solve for what our shareholders would like from us – a high return for the right risk taken.
Your Impact
In this role, you will be provided unique insight into the firm's business activities and asset strategy. You will be responsible for defining and developing software to analyze data, building metric calculators and automated tools that help the business get insights into data, predict scenarios, and make better decisions to reduce interest expenses for the firm. This front-to-back model gives software developers a window into all aspects of CT planning and execution while working on cutting-edge industrial technologies. In this role, you will contribute to designing, developing, and supporting digitally advanced financial products by collaborating with globally located cross-functional teams. You will drive initiatives to analyze existing software implementations to identify areas of improvement, participate in prioritization of these improvements, and provide estimates for implementing new features. You will also contribute to building team processes and best practices.
Basic Qualifications
B.E. or B.Tech or higher in Computer Science (or equivalent work experience). 1+ years of relevant professional experience. Experience in software development, including a clear understanding of data structures, algorithms, software design, and core programming concepts. Strong analytical and problem-solving skills – demonstrated ability to learn technologies and apply them. Comfortable multi-tasking, managing multiple stakeholders, and working as part of a team. Excellent communication skills, including experience speaking to technical and business audiences and working globally. Can apply an entrepreneurial approach and passion to problem solving and product development.
Preferred Qualifications
Strong programming experience in at least one language such as Java or Python. Experience in designing highly scalable, efficient systems. Web technology design experience is a plus. Experience with microservice architecture, Spring, and messaging queues like Kafka. Experience with Kubernetes. Experience with AWS. Familiarity with financial markets, financial assets, and liquidity management is a plus.
About Goldman Sachs
Goldman Sachs Engineering Culture: At Goldman Sachs, our Engineers don't just make things – we make things possible. Change the world by connecting people and capital with ideas. Solve the most challenging and pressing engineering problems for our clients. Join our engineering teams that build massively scalable software and systems, architect low-latency infrastructure solutions, proactively guard against cyber threats, and leverage machine learning alongside financial engineering to continuously turn data into action.
Create new businesses, transform finance, and explore a world of opportunity at the speed of markets Engineering is at the critical center of our business, and our dynamic environment requires innovative strategic thinking and immediate, real solutions. Want to push the limit of digital possibilities? Start here!

Posted 3 days ago

Apply

0.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Job Description – AI Developer (Agentic AI Frameworks, Computer Vision & LLMs)
Location: Hybrid - Bangalore
About the Role
We're seeking an AI Developer who specializes in agentic AI frameworks – LangChain, LangGraph, CrewAI, or equivalents – and who can take both vision and language models from prototype to production. You will lead the design of multi-agent systems that coordinate perception (image classification & extraction), reasoning, and action, while owning the end-to-end deep-learning life-cycle (training, scaling, deployment, and monitoring).
Key Responsibilities
Agentic AI Frameworks (Primary Focus): Architect and implement multi-agent workflows using LangChain, LangGraph, CrewAI, or similar. Design role hierarchies, state graphs, and tool integrations that enable autonomous data processing, decision-making, and orchestration. Benchmark and optimize agent performance (cost, latency, reliability).
Image Classification & Extraction: Build and fine-tune CNN/ViT models for classification, detection, OCR, and structured data extraction. Create scalable data-ingestion, labeling, and augmentation pipelines.
LLM Fine-Tuning & Retrieval-Augmented Generation (RAG): Fine-tune open-weight LLMs with LoRA/QLoRA and PEFT; perform SFT, DPO, or RLHF as needed. Implement RAG pipelines using vector databases (FAISS, Weaviate, pgvector) and domain-specific adapters.
Deep Learning at Scale: Develop reproducible training workflows in PyTorch/TensorFlow with experiment tracking (MLflow, W&B). Serve models via TorchServe/Triton/KServe on Kubernetes, SageMaker, or GCP Vertex AI.
MLOps & Production Excellence: Build robust APIs/micro-services (FastAPI, gRPC). Establish CI/CD, monitoring (Prometheus, Grafana), and automated retraining triggers. Optimize inference on CPU/GPU/Edge with ONNX/TensorRT, quantization, and pruning.
Collaboration & Mentorship: Translate product requirements into scalable AI services. Mentor junior engineers, conduct code and experiment reviews, and evangelize best practices.
Minimum Qualifications
B.S./M.S. in Computer Science, Electrical Engineering, Applied Math, or related discipline. 5+ years building production ML/DL systems with strong Python & Git. Demonstrable expertise in at least one agentic AI framework (LangChain, LangGraph, CrewAI, or comparable). Proven delivery of computer-vision models for image classification/extraction. Hands-on experience fine-tuning LLMs and deploying RAG solutions. Solid understanding of containerization (Docker) and cloud AI stacks (AWS/Azure). Knowledge of distributed training, GPU acceleration, and performance optimization.
Job Type: Full-time
Pay: Up to ₹1,200,000.00 per year
Experience: AI, LLM, RAG: 4 years (Preferred). Vector database, image classification: 4 years (Preferred). Containerization (Docker): 3 years (Preferred). ML/DL systems with strong Python & Git: 3 years (Preferred). LangChain, LangGraph, CrewAI: 3 years (Preferred).
Location: Bangalore, Karnataka (Preferred)
Work Location: In person
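Returning to the RAG responsibility above, which names FAISS as a candidate vector store: as a hedged, self-contained illustration (random vectors stand in for real embeddings, and the documents are made up), the sketch below builds an exact FAISS index and retrieves the nearest chunks that a RAG pipeline would pass to the LLM as grounding context.

```python
import faiss
import numpy as np

dim, k = 384, 3                      # 384 matches common sentence-embedding sizes
docs = ["invoice 2024-03", "shipping label", "warranty claim", "purchase order"]

# Stand-in embeddings; a real pipeline would call an embedding model here.
rng = np.random.default_rng(0)
doc_vecs = rng.standard_normal((len(docs), dim)).astype("float32")

index = faiss.IndexFlatL2(dim)       # exact L2 search; swap for IVF/HNSW at scale
index.add(doc_vecs)

query_vec = rng.standard_normal((1, dim)).astype("float32")
distances, ids = index.search(query_vec, k)

# The retrieved chunks would be stuffed into the LLM prompt as grounding context.
for rank, (i, d) in enumerate(zip(ids[0], distances[0]), start=1):
    print(rank, docs[i], round(float(d), 2))
```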

Posted 3 days ago

Apply

15.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Title: Solutions Architect – Agentic AI Systems & Scalable Platforms Experience : 15+ years Location : Delhi NCR, Bangalore, Pune (Hybrid) Job Summary: We are looking for a highly experienced Solutions Architect (15+ years) to lead the design and implementation of scalable, event-driven AI/ML platforms. This role will focus on building distributed systems, integrating multi-model AI orchestration, ensuring observability, and securing data operations across hybrid cloud environments. The ideal candidate combines deep technical acumen with excellent communication skills, capable of engaging with executive leadership and leading cross-functional engineering teams. Must Have Skills: 15+ years of experience in architecture and software engineering, with deep expertise in distributed systems Core Competencies Distributed System Design: Proven leadership in architecting resilient, scalable platforms for high-concurrency agent orchestration and state management AI/ML System Integration: Strong experience designing AI/ML integration layers with support for multi-model orchestration, fallback strategies, and cost optimization Event-Driven Orchestration: Expertise in implementing event-driven orchestration workflows, including human-in-the-loop decision points and rollback mechanisms Observability Architecture: Hands-on with observability architecture, including monitoring, tracing, debugging, and telemetry for AI systems Security-First Design: In-depth knowledge of zero-trust security architectures, with RBAC/ABAC and fine-grained access control for sensitive operations Technical Proficiencies Programming: Python (async frameworks), TypeScript/JavaScript (modern frameworks), Go Container Orchestration: Kubernetes, service mesh architectures, serverless patterns Real-time Systems: WebSocket protocols, event streaming, low-latency architectures Infrastructure Automation: GitOps, infrastructure as code, automated scaling policies Performance Engineering: Distributed caching, query optimization, resource pooling Platform Integration Skills API Gateway Design: Rate limiting, authentication, multi-provider abstraction Workflow Orchestration: State machines, saga patterns, compensating transactions Frontend Architecture: Micro-frontends, real-time collaboration features, responsive data visualization Persistence Strategies: Polyglot persistence, CQRS patterns, event sourcing Track record of effective collaboration with AI/ML engineers, Data Engineers, Backend Developers, and UI/UX teams on complex platform delivery, a lead by doing attitude towards resolving issues and technical roadblocks. 
Demonstrated ability to produce architecture diagrams and maintain technical documentation standards Excellent communication and stakeholder management, especially with senior and executive leadership Nice to Have Skills: Experience with real-time systems (WebSockets, event streaming, low-latency protocols) Exposure to polyglot persistence, event sourcing, CQRS patterns Experience with multi-tenant SaaS platforms and usage-based billing models Knowledge of hybrid cloud deployments and cost attribution for AI compute workloads Familiarity with compliance frameworks, audit trail design, and encryption strategies Exposure to frontend architectures like micro-frontends and real-time dashboards Experience with infrastructure as code (IaC) and performance tuning for distributed caching Role & Responsibilities: Architect and lead development of scalable, distributed agent orchestration systems Design abstraction layers for multi-model AI integration with efficiency and fallback logic Develop event-driven workflows with human oversight, compensating transactions, and rollback paths Define observability architecture, including logging, tracing, metrics, and debugging for AI workflows Implement zero-trust and fine-grained security controls for sensitive data operations Create and maintain technical artifacts, including architecture diagrams, standards, and design patterns Act as technical liaison between cross-functional teams and executive stakeholders Guide engineering teams through complex solutioning, issue resolution, and performance optimization Drive documentation standards and ensure architectural alignment across the delivery lifecycle Key Skills: Distributed systems, AI orchestration, Event-driven workflows, Kubernetes, GitOps, Python, Go, TypeScript, Observability, Zero-trust security, Architecture diagrams, Real-time systems, Hybrid cloud, CQRS, Documentation standards

Posted 3 days ago

Apply

3.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

About the Company: Ticker Data Limited is one of the leading global content providers in the financial information services industry, integrating and disseminating ultra-low latency data feeds, news and information. Real-time market data and information is distributed in a user-friendly and flexible format on Ticker's own state-of-the-art platform as well as on third-party websites, including mobile phones, at competitive prices. Ticker's adoption of open technology standards allows it to integrate content with rich features and analytical tools, enhancing customer experience through customized delivery and display of data and tools. A resilient data management system and dedicated teams of information and technology specialists ensure the highest standards of data security, completeness, quality and authentication.
Location: Mumbai, Andheri (E)
Vision: To be an Organization of Choice for the Financial Markets, providing Financial Information, News and Analytics while delivering Superior Value to all Stakeholders.
Mission: To empower Financial Markets by providing a high degree of reliable, real-time, customized information and to offer a UNIQUE customer experience by providing seamless service across markets.
Website: https://www.tickermarket.com/#/home
Experience: 3 Years (Minimum)
Key Responsibilities: 1. Managing SQL Server databases 2. Configuring and maintaining databases and processes 3. Maintaining system health and performance 4. Ensuring a high level of performance, availability, sustainability and security 5. Analyzing, solving and correcting issues in real time 6. Providing suggestions for solutions 7. Refining and automating regular processes, and tracking issues 8. Query performance and tuning 9. Providing 24x7 support for critical live production systems

Posted 3 days ago

Apply

0.0 - 1.0 years

0 Lacs

Satellite, Ahmedabad, Gujarat

On-site

Rust Developer (Scalable Systems)
Experience: 3+ years
Location: Ahmedabad, Gujarat
Employment Type: Full-time
Key Responsibilities: Design, develop, and optimize high-performance backend services using Rust, targeting 1000+ orders per second throughput. Implement scalable architectures with load balancing for high availability and minimal latency. Integrate and optimize Redis for caching, pub/sub, and data persistence. Work with messaging services like Kafka and RabbitMQ to ensure reliable, fault-tolerant communication between microservices. Develop and manage real-time systems with WebSockets for bidirectional communication. Write clean, efficient, and well-documented code with unit and integration tests. Collaborate with DevOps for horizontal scaling and efficient resource utilization. Diagnose performance bottlenecks and apply optimizations at the code, database, and network level. Ensure system reliability, fault tolerance, and high availability under heavy loads.
Required Skills & Experience: 3+ years of professional experience with Rust in production-grade systems. Strong expertise in Redis (clustering, pipelines, Lua scripting, performance tuning). Proven experience with Kafka, RabbitMQ, or similar messaging queues. Deep understanding of load balancing, horizontal scaling, and distributed architectures. Experience with real-time data streaming and WebSocket implementations. Knowledge of system-level optimizations, memory management, and concurrency in Rust. Familiarity with high-throughput, low-latency systems and profiling tools. Understanding of cloud-native architectures (AWS, GCP, or Azure) and containerization (Docker/Kubernetes).
Preferred Qualifications: Experience with microservices architecture and service discovery. Knowledge of monitoring & logging tools (Prometheus, Grafana, ELK). Exposure to CI/CD pipelines for Rust-based projects. Experience in security and fault-tolerant design for financial or trading platforms (nice to have).
Job Types: Full-time, Permanent
Experience: Rust Developer: 1 year (Required)
Work Location: In person

Posted 3 days ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

Remote

What you bring to the table (Core Requirements): 4+ years of Java development within an enterprise-level domain Java 8 (11 preferred) features like lambda expressions, Stream API, CompletableFuture, etc. Skilled with low-latency, high volume application development Team will need expertise in CI/CD, and shift left testing Nice to have Golang and/or Rust Experienced with asynchronous programming, multithreading, implementing APIs, and Microservices, including Spring Boot Proficiency with SQL Experience with data sourcing, data modeling and data enrichment Experience with Systems Design & CI/CD pipelines Cloud computing, preferably AWS Solid verbal and written communication and consultant/client-facing skills are a must. As a true consultant, you are a self-starter who takes initiative. Solid experience with at least two (preferably more) of the following: Kafka (Core Concepts, Replication & Reliability, Kafka Internals, Infrastructure & Control, Data Retention and Durability) MongoDB Sonar Jenkins Oracle DB, Sybase IQ, DB2 Drools or any rules engine experience CMS tools like Adobe AEM Search tools like Algolia, ElasticSearch or Solr Spark What makes you stand out from the pack: Payments or Asset/Wealth Management experience Mature server development and knowledge of frameworks, preferably Spring Enterprise experience working and building enterprise products, long term tenure at enterprise-level organizations, experience working with a remote team, and being an avid practitioner in their craft You have pushed code into production and have deployed multiple products to market, but are missing the visibility of a small team within a large enterprise technology environment. You enjoy coaching junior engineers, but want to remain hands-on with code. Open to work hybrid - 3 days per week from office

Posted 3 days ago

Apply

3.0 years

0 Lacs

Faridabad, Haryana, India

On-site

While applying mention JOB ID :: 0791 in email subject Experience : 3+ years Employment Type: Full-Time Job Overview : We are seeking a highly skilled C++ Software Developer to join our team in developing colocation server software optimized for high-frequency trading (HFT) and low-latency execution. The ideal candidate will have extensive experience in developing high-performance applications in C++, particularly in the financial or trading domain. This role requires close collaboration with quantitative traders and algorithmic teams to design and implement efficient trading execution systems. Key Responsibilities: Develop and maintain low-latency, high-throughput colocation server software for trading execution. Optimize C++ code for performance, with a focus on minimizing execution time and maximizing throughput. Collaborate with trading and infrastructure teams to ensure seamless integration of trading algorithms with the execution system. Implement market data feed handlers and order routing protocols (FIX, ITCH, OUCH, etc.) to interact with exchanges and brokers. Develop tools for real-time monitoring, risk management, and performance analysis. Ensure systems are robust, fault-tolerant, and able to recover quickly from failures. Maintain and improve colocation infrastructure to ensure minimal downtime and fast execution speeds. Conduct rigorous testing, including unit tests and performance benchmarking. Stay updated on the latest developments in trading technology and exchange protocols to continuously enhance system performance. Key Requirements: Strong expertise in C++ programming, including experience with multi-threading, memory management, and real-time systems. Proven experience in developing low-latency software for high-frequency trading or other performance-critical applications. Knowledge of networking protocols (TCP/IP, UDP) and experience with socket programming. Experience with market data feeds and financial exchange connectivity protocols such as FIX, ITCH, OUCH. Deep understanding of operating system internals (Linux) and optimization for trading systems. Experience with profiling and performance tuning C++ code. Familiarity with colocation and data center environments for financial trading. Strong analytical and problem-solving skills. Experience working in financial services, especially in a trading infrastructure environment, is a plus. Bachelor’s degree in Computer Science, Engineering, or related field. Preferred Skills: Experience in algorithmic trading systems. Knowledge of exchange APIs and order management systems (OMS). Familiarity with GPU programming and hardware acceleration (FPGA experience is a plus). Exposure to Python or other scripting languages for quick automation tasks. Understanding of financial markets, asset classes (equities, derivatives, forex), and trading strategies. What We Offer: Competitive salary with performance-based bonuses. An opportunity to work with cutting-edge technology in a fast-paced, high-stakes environment. Collaborative and innovative culture. Opportunities for professional growth and development. hr@byllscatchsecurities.com guru@bullscatchsecurities.com While applying mention JOB ID :: 0791 in email subject

Posted 3 days ago

Apply

2.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Summary: We are seeking a highly motivated and skilled LLM Engineer with 2 to 7 years of professional experience to join our growing AI team. The ideal candidate will have a strong background in natural language processing, machine learning, and hands-on experience in developing, deploying, and optimizing solutions built upon Large Language Models. You will play a crucial role in designing, implementing, and maintaining robust and scalable LLM-powered applications, contributing to the full lifecycle of our AI products. Key Responsibilities: LLM Application Development: Design, develop, and deploy innovative applications leveraging state-of-the-art Large Language Models (e.g., GPT, Llama, Gemini, Claude, etc.). This includes working with various LLM APIs and open-source models. Prompt Engineering & Optimization: Develop and refine advanced prompt engineering techniques to maximize LLM performance, accuracy, and desired output for specific use cases. Fine-tuning & Adaptation: Experiment with and implement strategies for fine-tuning pre-trained LLMs on custom datasets to improve performance for domain-specific tasks. Data Preparation & Curation: Work with diverse datasets for training, fine-tuning, and evaluating LLMs, ensuring data quality, relevance, and ethical considerations. Model Evaluation & Benchmarking: Design and execute robust evaluation methodologies to assess LLM performance, identify biases, and ensure alignment with business objectives. Implement A/B testing and other experimentation frameworks. Integration & Deployment: Integrate LLM-powered solutions into existing systems and deploy them to production environments, ensuring scalability, reliability, and low latency. Experience with MLOps practices is highly desirable. Performance Optimization: Identify and implement strategies for optimizing LLM inference, resource utilization, and cost efficiency. Research & Innovation: Stay abreast of the latest advancements in LLMs, NLP, and machine learning research. Proactively explore and propose new technologies and approaches to enhance our AI capabilities. Collaboration: Work closely with cross-functional teams including data scientists, software engineers, product managers, and researchers to deliver impactful AI solutions. Documentation: Create clear and concise documentation for models, code, and deployment procedures. Required Qualifications: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, Computational Linguistics, or a related quantitative field. 2-7 years of professional experience in a role focused on Machine Learning, Natural Language Processing, or AI development. Strong proficiency in Python and relevant ML/DL frameworks (e.g., TensorFlow, PyTorch, Hugging Face Transformers). Hands-on experience with Large Language Models (LLMs) , including familiarity with their architectures (e.g., Transformers) and practical application. Experience with prompt engineering techniques and strategies. Solid understanding of NLP concepts, including text pre-processing, embeddings, semantic search, and information retrieval. Familiarity with cloud platforms (AWS, GCP, Azure) and their AI/ML services. Experience with version control systems (e.g., Git). Excellent problem-solving skills and the ability to work independently and as part of a team. Strong communication and interpersonal skills to articulate complex technical concepts to both technical and non-technical audiences. 
Preferred Qualifications (Bonus Points for):
Experience with fine-tuning LLMs on custom datasets.
Familiarity with MLOps tools and practices (e.g., MLflow, Kubeflow, Docker, Kubernetes).
Experience with vector databases (e.g., Pinecone, Weaviate, Milvus) for RAG applications.
Knowledge of various retrieval techniques for Retrieval Augmented Generation (RAG) systems.
Understanding of ethical AI principles, bias detection, and fairness in LLMs.
Contributions to open-source projects or relevant publications.
Experience with distributed computing frameworks.
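To make the retrieval-augmented-generation and prompt-engineering duties above concrete, here is a minimal illustrative sketch (not the employer's code). It assumes the sentence-transformers library, the OpenAI Python client, and placeholder documents and model names; a production system would use a proper vector database and evaluation harness.

```python
# Minimal RAG sketch: embed documents, retrieve the closest ones for a query,
# and pass them as context to an LLM. Documents and model names are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer
from openai import OpenAI

documents = [
    "Order latency is measured from gateway receipt to exchange acknowledgement.",
    "The observability stack exports Prometheus metrics scraped every 15 seconds.",
    "Incident postmortems are filed within 48 hours of resolution.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small, widely used embedding model
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def answer(query: str) -> str:
    """Stuff retrieved context into the prompt and ask the model to stay grounded in it."""
    context = "\n".join(retrieve(query))
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer("How is order latency measured?"))
```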

Posted 3 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description What We Do At Goldman Sachs, our Engineers don’t just make things – we make things possible. Change the world by connecting people and capital with ideas. Solve the most challenging and pressing engineering problems for our clients. Join our engineering teams that build massively scalable software and systems, architect low latency infrastructure solutions, proactively guard against cyber threats, and leverage machine learning alongside financial engineering to continuously turn data into action. Create new businesses, transform finance, and explore a world of opportunity at the speed of markets. Engineering, which is comprised of core and business-aligned teams, is at the critical center of our business, and our dynamic environment requires innovative strategic thinking and immediate, real solutions. Want to push the limit of digital possibilities? Start here. Asset Management Goldman Sachs Asset Management delivers innovative investment solutions through a global, multi-product platform that offers clients the advantages that come with working with a large firm, while maintaining the benefits of a boutique. We are a top 10 global asset manager with a leadership position across asset classes and key market segments. Our success is driven by a global team of talented professionals who collaborate to deliver innovative client solutions. Quantitative Strategists Quantitative strategists work in close collaboration with bankers, traders, and portfolio managers on complex financial and technical challenges. We work on alpha generating strategies; discuss portfolio allocation problems; and build models for prediction, pricing, trading automation, data analysis and more. The strats platform is designed for people to express themselves by providing creative solutions to business problems. Strats own analytics, models for pricing, return and risk, as well as portfolio management platform. 
Responsibilities
As a quantitative strategist, your responsibilities will include:
Working with revenue-generating businesses to solve a broad range of problems, including quantitative strategy development, quantitative modelling, portfolio construction, portfolio optimization, infrastructure development and implementation, and financial product and markets analytics
Developing quantitative analytics and signals using advanced statistical, quantitative, or econometric techniques to improve the portfolio construction process, and implementing fund management models to track longer-term portfolio performance
Developing sustainable production systems that can evolve and adapt to changes in our fast-paced, global business environment
Providing quantitative analytics to optimize investment structure, pricing, returns, and capital sourcing
Partnering globally across multiple divisions and engineering teams to create quantitative modeling-based solutions
Prioritizing across competing problems and communicating with key stakeholders

Basic Qualifications
Bachelor's/Master's degree in a quantitative discipline with quantitative analytics/research or financial modeling experience
Strong understanding of mathematical concepts including probability and statistics, time series analysis, regression analysis, forecasting, optimization, machine learning, and other numerical techniques
Strong fundamentals in design and analysis of algorithms and data structures
Ability to implement coding solutions to quantitative problems, experience in developing finance- and statistics-based applications, and proficiency in at least one programming language such as Slang, Python, C, or C++
Strong written and oral communication skills and ability to work in a team environment
Ability to multi-task and prioritize work effectively
Passion and self-motivation to deliver technology solutions in a dynamic business environment

goldmansachs.com/careers

Goldman Sachs Engineering Culture
At Goldman Sachs, we commit our people, capital and ideas to help our clients, shareholders and the communities we serve to grow. Founded in 1869, we are a leading global investment banking, securities and investment management firm. Headquartered in New York, we maintain offices around the world. We believe who you are makes you better at what you do. We're committed to fostering and advancing diversity and inclusion in our own workplace and beyond by ensuring every individual within our firm has a number of opportunities to grow professionally and personally, from our training and development opportunities and firmwide networks to benefits, wellness and personal finance offerings and mindfulness programs. Learn more about our culture, benefits, and people at GS.com/careers. We're committed to finding reasonable accommodations for candidates with special needs or disabilities during our recruiting process. Learn more: https://www.goldmansachs.com/careers/footer/disability-statement.html
© The Goldman Sachs Group, Inc., 2025. All rights reserved. Goldman Sachs is an equal employment/affirmative action employer Female/Minority/Disability/Veteran/Sexual Orientation/Gender Identity
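As a flavour of the portfolio-construction work mentioned above, here is a generic, hedged sketch (it is not firm methodology). It uses synthetic returns and the textbook mean-variance rule w ∝ Σ⁻¹μ; asset count, seed, and scaling are arbitrary illustration choices.

```python
# Toy mean-variance portfolio construction: w proportional to inverse(Sigma) @ mu,
# scaled to sum to 1. Returns are synthetic; this is a generic illustration only.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=(252, 4))   # one year of daily returns, 4 assets

mu = returns.mean(axis=0)                 # estimated expected daily returns
sigma = np.cov(returns, rowvar=False)     # estimated covariance matrix

raw = np.linalg.solve(sigma, mu)          # unnormalized mean-variance weights
weights = raw / raw.sum()                 # fully invested; long/short allowed

port_ret = 252 * weights @ mu
port_vol = np.sqrt(252 * weights @ sigma @ weights)
print(f"weights={np.round(weights, 3)}  ann.return={port_ret:.2%}  ann.vol={port_vol:.2%}")
```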

Posted 3 days ago

Apply

4.0 years

0 Lacs

India

Remote

Job Title: Senior Backend & DevOps Engineer (AI-Integrated Products)
Location: Remote
Employment Type: Full Time / Freelance / Part Time
Work Hours: Flexible work timing

About Us: We're building AI-powered products that seamlessly integrate technology into everyday routines. We're now looking for a Senior Backend & DevOps Engineer who can help us scale our mobile apps globally and architect their backend, while owning infrastructure, performance, and AI integrations.

Responsibilities:
Backend Engineering:
Own and scale the backend architecture (Node.js/Express or similar) for mobile apps
Build robust, well-documented, and performant APIs
Implement user management, session handling, usage tracking, and analytics
Integrate 3rd-party services including OpenAI, Whisper, and other LLMs
Optimize app-server communication and caching for global scale

DevOps & Infrastructure:
Maintain and scale AWS/GCP infrastructure (EC2, RDS, S3, CloudFront/CDN, etc.)
Set up CI/CD pipelines (GitHub Actions preferred) for smooth deployment
Monitor performance, set up alerts, and handle autoscaling across regions
Manage cost-effective global infra scaling and ensure low-latency access
Handle security (IAM, secrets management, HTTPS, CORS policies, etc.)

AI & Model Integration:
Integrate LLMs like GPT-4, Mistral, Mixtral, and open-source models
Support fine-tuning, inference pipelines, and embeddings
Build offline inference support and manage transcription workflows (Whisper, etc.)
Set up and optimize vector DBs like Qdrant, Weaviate, Pinecone

Requirements:
4+ years of backend experience with Node.js, Python, or Go
2+ years of DevOps experience with AWS/GCP/Azure, Docker, and CI/CD
Experience deploying and managing AI/ML pipelines, especially LLMs and Whisper
Familiarity with vector databases, embeddings, and offline inference
Strong understanding of performance optimization, scalability, and observability
Clear communication skills and a proactive mindset

Bonus Skills:
Experience working on mobile-first apps (React Native backend knowledge is a plus)
Familiarity with Firebase, Vercel, Railway, or similar platforms
Knowledge of data privacy, GDPR, and offline sync strategies
Past work on productivity, journaling, or health/fitness apps
Experience self-hosting LLMs or optimizing AI pipelines on edge/cloud

Please share (optional):
A brief intro about you and your experience
Links to your GitHub/portfolio or relevant projects
Resume or LinkedIn profile
Any AI/infra-heavy work you're particularly proud of

Contact: subham@binaryvlue.com
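As one way to picture "integrate LLMs" plus "optimize caching for global scale", here is a minimal sketch, assuming FastAPI, redis-py, and the OpenAI client (the listing's primary stack is Node.js/Express; framework, endpoint name, model, and TTL are all illustrative assumptions).

```python
# Sketch of a cached LLM endpoint: identical inputs within the TTL are served
# from Redis instead of re-calling the model. Names and TTLs are illustrative.
import hashlib

import redis
from fastapi import FastAPI
from openai import OpenAI

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
llm = OpenAI()  # reads OPENAI_API_KEY from the environment
CACHE_TTL_SECONDS = 300

@app.get("/summarize")
def summarize(text: str) -> dict:
    # Hash the input so the cache key stays short and opaque.
    key = "summary:" + hashlib.sha256(text.encode()).hexdigest()
    cached = cache.get(key)
    if cached is not None:
        return {"summary": cached, "cache": "hit"}

    response = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Summarize in two sentences:\n{text}"}],
    )
    summary = response.choices[0].message.content
    cache.setex(key, CACHE_TTL_SECONDS, summary)   # expire stale summaries automatically
    return {"summary": summary, "cache": "miss"}
```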

Posted 3 days ago

Apply

6.0 years

0 Lacs

India

On-site

Building and Deploying ML Models
Design, build, optimize, deploy, and monitor machine learning models for production use cases.
Ensure scalability, reliability, and efficiency of ML pipelines across cloud and on-prem environments.
Work with data engineers to design data pipelines that feed into ML models.
Optimize model performance, ensuring low latency and high accuracy.

Leading and Architecting ML Solutions
Lead a team of ML Engineers, providing technical mentorship and guidance.
Architect ML solutions that integrate seamlessly with business applications.
Ensure models are explainable, auditable, and aligned with business goals.
Drive best practices in MLOps, CI/CD, and model monitoring.

Collaborating and Communicating
Work closely with business stakeholders to understand problem statements and define ML-driven solutions.
Collaborate with software engineers, data engineers, platform engineers, and product managers to integrate ML models into production systems.
Present technical concepts to non-technical stakeholders in an easy-to-understand manner.

What We're Looking For:
Machine Learning Expertise
Deep understanding of supervised and unsupervised learning, deep learning, NLP techniques, and large language models (LLMs).
Experience in training, fine-tuning, and deploying ML and LLM models at scale.
Proficiency in ML frameworks such as TensorFlow, PyTorch, and Scikit-learn.

Production and Cloud Deployment
Hands-on experience deploying models to AWS, GCP, or Azure.
Understanding of MLOps, including CI/CD for ML models, model monitoring, and retraining pipelines.
Experience working with Docker, Kubernetes, or serverless architectures is a plus.

Data Handling
Strong programming skills in Python.
Proficiency in SQL and working with large-scale datasets.
Familiarity with distributed computing frameworks like Spark or Dask is a plus.

Leadership and Communication
Ability to lead and mentor a team of ML Engineers and collaborate effectively across functions.
Strong communication skills to explain technical concepts to business teams.
Passion for staying updated with the latest advancements in ML and AI.

Experience Needed:
6+ years of experience in machine learning engineering or related roles.
Experience in deploying and managing ML and LLM models in production.
Proven track record of working in cross-functional teams and leading ML projects.
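To illustrate "deploy and monitor models for production with low latency", here is a minimal serving sketch, assuming FastAPI, pydantic, and scikit-learn; the throwaway training step, feature count, and endpoint name are placeholders (a real deployment would load a versioned model from a registry such as MLflow).

```python
# Minimal model-serving sketch: a scikit-learn model behind FastAPI with a
# per-request latency log. Feature shapes and names are placeholders.
import logging
import time

import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.linear_model import LogisticRegression

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ml-service")

# Train a throwaway model at startup; production would load from a registry instead.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

app = FastAPI()

class Features(BaseModel):
    values: list[float]   # expects 3 features, matching the training data above

@app.post("/predict")
def predict(features: Features) -> dict:
    start = time.perf_counter()
    proba = float(model.predict_proba([features.values])[0, 1])
    latency_ms = (time.perf_counter() - start) * 1000
    log.info("prediction served in %.2f ms", latency_ms)   # feeds latency dashboards/alerts
    return {"probability": proba, "latency_ms": round(latency_ms, 2)}
```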

Posted 3 days ago

Apply

5.0 years

18 - 25 Lacs

Hyderabad, Telangana, India

On-site

Role: Senior .NET Engineer
Experience: 5-12 Years
Location: Hyderabad (Work from Office)
Mandatory Skills: .NET Core, C#, Kafka, CI/CD pipelines, observability tools, orchestration tools, cloud microservices

Interview Process:
First round - Online test
Second round - Virtual technical discussion
Manager/HR round - Virtual discussion

Company Overview
The company is a globally recognized leader in the fintech industry, delivering cutting-edge trading solutions for professional traders worldwide. With over 15 years of excellence, a robust international presence, and a team of 300+ skilled professionals, we continually push the boundaries of technology to remain at the forefront of financial innovation. Committed to fostering a collaborative and dynamic environment, we prioritize technical excellence, innovation, and continuous growth for our team. Join our agile-based team to contribute to the development of advanced trading platforms in a rapidly evolving industry.

Position Overview
We are seeking a highly skilled Senior .NET Engineer to play a pivotal role in the design, development, and optimization of highly scalable and performant domain-driven microservices for our real-time trading applications. This role demands advanced expertise in multi-threaded environments, asynchronous programming, and modern software design patterns such as Clean Architecture and Vertical Slice Architecture. As part of an Agile squad, you will collaborate with cross-functional teams to deliver robust, secure, and efficient systems, adhering to the highest standards of quality, performance, and reliability. This position is ideal for engineers who excel in building low-latency, high-concurrency systems and have a passion for advancing fintech solutions.

Key Responsibilities
System Design and Development
Architect and develop real-time, domain-driven microservices using .NET Core to ensure scalability, modularity, and performance.
Leverage multi-threaded programming techniques and asynchronous programming paradigms to build systems optimized for high-concurrency workloads.
Implement event-driven architectures to enable seamless communication between distributed services, leveraging tools such as Kafka or AWS SQS.

System Performance and Optimization
Optimize applications for low latency and high throughput in trading environments, addressing challenges related to thread safety, resource contention, and parallelism.
Design fault-tolerant systems capable of handling large-scale data streams and real-time events.
Proactively monitor and resolve performance bottlenecks using advanced observability tools and techniques.

Architectural Contributions
Contribute to the design and implementation of scalable, maintainable architectures, including Clean Architecture, Vertical Slice Architecture, and CQRS.
Collaborate with architects and stakeholders to align technical solutions with business requirements, particularly for trading and financial systems.
Employ advanced design patterns to ensure robustness, fault isolation, and adaptability.

Agile Collaboration
Participate actively in Agile practices, including Scrum ceremonies such as sprint planning, daily stand-ups, and retrospectives.
Collaborate with Product Owners and Scrum Masters to refine technical requirements and deliver high-quality, production-ready software.

Code Quality and Testing
Write maintainable, testable, and efficient code adhering to test-driven development (TDD) methodologies.
Conduct detailed code reviews, ensuring adherence to best practices in software engineering, coding standards, and system architecture.
Develop and maintain robust unit, integration, and performance tests to uphold system reliability and resilience.

Monitoring and Observability
Integrate OpenTelemetry to enhance system observability, enabling distributed tracing, metrics collection, and log aggregation.
Collaborate with DevOps teams to implement real-time monitoring dashboards using tools such as Prometheus, Grafana, and Elastic (Kibana).
Ensure systems are fully observable, with actionable insights into performance and reliability metrics.

Required Technical Expertise and Skills
5+ years of experience in software development, with a strong focus on .NET Core and C#.
Deep expertise in multi-threaded programming, asynchronous programming, and handling concurrency in distributed systems.
Extensive experience in designing and implementing domain-driven microservices with advanced architectural patterns like Clean Architecture or Vertical Slice Architecture.
Strong understanding of event-driven systems, with knowledge of messaging frameworks such as Kafka, AWS SQS, or RabbitMQ.
Proficiency in observability tools, including OpenTelemetry, Prometheus, Grafana, and Elastic (Kibana).
Hands-on experience with CI/CD pipelines, containerization using Docker, and orchestration tools like Kubernetes.
Expertise in Agile methodologies under Scrum practices.
Solid knowledge of Git and version control best practices.

Beneficial Skills
Familiarity with Saga patterns for managing distributed transactions.
Experience in trading or financial systems, particularly with low-latency, high-concurrency environments.
Advanced database optimization skills for relational databases such as SQL Server.

Certifications and Education
Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
Relevant certifications in software development, system architecture, or AWS technologies are advantageous.

Why Join?
Exceptional team building and corporate celebrations
Be part of a high-growth, fast-paced fintech environment.
Flexible working arrangements and supportive culture.
Opportunities to lead innovation in the online trading space.

Skills: Grafana, OpenTelemetry, Elastic (Kibana), event-driven architectures, multi-threaded programming, Vertical Slice Architecture, asynchronous programming, C#, CQRS, Clean Architecture, Kubernetes, orchestration tools, cloud microservices, test-driven development (TDD), Kafka, .NET Core, Git, Scrum practices, observability tools, CI/CD pipelines, containerization using Docker, Prometheus, Agile methodologies, AWS SQS
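The event-driven pattern this role centres on (services exchanging events over Kafka rather than calling each other directly) can be sketched as follows. The role itself targets .NET Core/C#; purely for brevity this illustration uses Python with confluent-kafka, and the broker address, topic, and consumer group are placeholder assumptions.

```python
# Event-driven messaging sketch: one service publishes "order executed" events,
# another consumes them asynchronously. Broker, topic, and group are placeholders.
import json

from confluent_kafka import Consumer, Producer

BROKER = "localhost:9092"
TOPIC = "orders.executed"

def publish_fill(order_id: str, price: float) -> None:
    """Emit an order-executed event; flush() blocks until delivery is confirmed."""
    producer = Producer({"bootstrap.servers": BROKER})
    producer.produce(TOPIC, key=order_id,
                     value=json.dumps({"order_id": order_id, "price": price}))
    producer.flush()

def consume_fills() -> None:
    """Consume events and hand them to downstream processing (risk, reporting, ...)."""
    consumer = Consumer({
        "bootstrap.servers": BROKER,
        "group.id": "risk-service",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe([TOPIC])
    try:
        while True:
            msg = consumer.poll(1.0)
            if msg is None or msg.error():
                continue
            event = json.loads(msg.value())
            print(f"processing fill {event['order_id']} at {event['price']}")
    finally:
        consumer.close()
```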

Posted 3 days ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

To develop a secure, low-latency Google Home integration system that connects voice commands to Firebase Realtime Database operations, enabling users to control smart devices like the Neon SmartPlug through natural speech.

🔧 Scope of Work:

1. Google Assistant Integration
Create an Action on Google project (using Dialogflow or the latest Actions SDK / Google Smart Home platform).
Enable account linking via OAuth2 / Firebase Authentication OTP verification.
Implement voice intents for:
Turning devices ON/OFF.
Setting schedules or timers.
Fetching the status of a device.
Custom interactions (e.g., “Is my plug on?”).

2. Firebase Realtime Database Integration
Sync device states in the Firebase Realtime Database.
Set up a secure and cost-efficient data structure.
Implement optimized Cloud Functions for:
Intent fulfillment.
Updating device state.
Fetching real-time status.
Logging user actions (optional analytics).

3. Cloud Functions (Node.js / TypeScript)
Write backend code to:
Parse and respond to Assistant requests.
Validate user sessions (via uid and linked identity).
Prevent race conditions with concurrent writes.
Handle fallback or unknown commands.

4. Firebase Security & User Validation
Define Firebase Rules to restrict read/write based on:
User uid
Device ownership
Action scope
Ensure cross-user access is completely blocked.
Implement access token validation.

5. Multi-user & Multi-device Support
Support simultaneous sessions.
Structure DB nodes for each user with isolation: /users/{uid}/devices/{device_id}/status

6. Testing & Validation
Unit test Cloud Functions.
Test integration with:
Multiple Google Accounts
Google Home and Android devices
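The per-user node layout above (/users/{uid}/devices/{device_id}/status) and the ownership check can be sketched as follows. The scope calls for Node.js/TypeScript Cloud Functions; this illustration uses the Firebase Admin Python SDK only to show the same data shape, and the credential path and database URL are placeholder assumptions.

```python
# Sketch of an ownership-checked write to the per-user device node in the
# Firebase Realtime Database. Credential path and database URL are placeholders.
import firebase_admin
from firebase_admin import credentials, db

cred = credentials.Certificate("service-account.json")  # placeholder path
firebase_admin.initialize_app(
    cred, {"databaseURL": "https://example-project.firebaseio.com"}  # placeholder URL
)

def set_device_state(uid: str, device_id: str, on: bool) -> None:
    """Write /users/{uid}/devices/{device_id}/status, but only for devices the user owns."""
    device_ref = db.reference(f"/users/{uid}/devices/{device_id}")
    if device_ref.get() is None:
        raise PermissionError("device not registered to this user")
    device_ref.child("status").set({"on": on})

def get_device_state(uid: str, device_id: str):
    """Read back the current status for an Assistant 'is my plug on?' query."""
    return db.reference(f"/users/{uid}/devices/{device_id}/status").get()
```

In production the same per-uid isolation would also be enforced server-side by Firebase Security Rules, so a compromised client still cannot read or write another user's nodes.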

Posted 3 days ago

Apply

7.0 years

0 Lacs

India

On-site

About Us: At Minutes to Seconds, we match people with great skills to tailor-fitted jobs to achieve well-deserved success. We know how to match people to the right job roles to create that perfect fit. This changes the dynamics of business success and catalyses the growth of an individual. We're passionate about doing an incredible job for our clients and job seekers. The success of individuals at the workplace determines our success.

Scope: We're looking for a Rancher Kubernetes expert to lead the design, automation, and reliability of our on-prem and hybrid container platform. Sitting at the intersection of the Platform Engineering and Infrastructure Reliability teams, this role owns the lifecycle of Rancher-managed clusters—from bare-metal provisioning and performance tuning to observability, security, and automated operations. You'll apply SRE principles to ensure high availability, scalability, and resilience across environments supporting mission-critical workloads.

Core Responsibilities:

Platform & Infrastructure Engineering
Design, deploy, and maintain Rancher-managed Kubernetes clusters (RKE2/K3s) at enterprise scale.
Architect highly available clusters integrated with on-prem infrastructure: UCS, VxLAN, storage, DNS, and load balancers.
Lead Rancher Fleet implementations for GitOps-driven cluster and workload management.

Performance Engineering & Optimization
Tune clusters for high-performance workloads on bare-metal hardware, optimizing CPU, memory, and I/O paths.
Align cluster scheduling and resource profiles with physical infrastructure topologies (NUMA, NICs, etc.).
Optimize CNI, kubelet, and scheduler settings for low-latency, high-throughput applications.

Security & Compliance
Implement security-first Kubernetes patterns: RBAC, Pod Security Standards, network policies, and image validation.
Drive left-shifted security using Terraform, Helm, and CI/CD pipelines; align to PCI, FIPS, and CIS benchmarks.
Lead infrastructure risk reviews and implement guardrails for regulated environments.

Automation & Tooling
Build and maintain IaC stacks using Terraform, Helm, and Argo CD.
Develop platform automation and observability tooling using Python or Go.
Ensure declarative management of infrastructure and applications through GitOps pipelines.

SRE & Observability
Apply SRE best practices for platform availability, capacity, latency, and incident response.
Operate and tune Prometheus, Grafana, and ELK/EFK stacks for complete platform observability.
Drive actionable alerting, automated recovery mechanisms, and clear operational documentation.
Lead postmortems and drive systemic improvements to reduce MTTR and prevent recurrence.
Required Skills
· 7+ years in infrastructure, platform, or SRE roles
· Deep hands-on experience with Rancher (RKE2/K3s) in production environments
· Proficient with Terraform, Helm, Argo CD, Python, and/or Go
· Demonstrated performance tuning in bare-metal Kubernetes environments (UCS, VxLAN, MetalLB)
· Expert in Linux (systems, networking, kernel tuning), Kubernetes internals, and container runtimes
· Real-world application of SRE principles in high-stakes, always-on environments
· Strong background operating Prometheus, Grafana, and Elasticsearch/Fluentd/Kibana (ELK/EFK) stacks

Preferred Qualifications
· Experience integrating Kubernetes with OpenStack and Magnum
· Knowledge of Rancher add-ons: Fleet, Longhorn, CIS Scanning
· Familiarity with compliance-driven infrastructure (PCI, FedRAMP, SOC2)
· Certifications: CKA, CKS, or Rancher Kubernetes Administrator
· Strategic thinker with strong technical judgment and execution ability
· Calm and clear communicator, especially during incidents or reviews
· Mentorship-oriented; supports team learning and cross-functional collaboration
· Self-motivated, detail-oriented, and thrives in a fast-moving, ownership-driven culture
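The "platform automation and observability tooling using Python" responsibility above could look something like this small, hedged sketch: a report of pods that are not Ready across a cluster, using the official Kubernetes Python client. Cluster access via a local kubeconfig is an assumption; inside a cluster you would load in-cluster config instead.

```python
# Small SRE-style automation sketch: report pods that are not Ready across a
# Rancher-managed cluster. Assumes kubeconfig access; purely illustrative.
from kubernetes import client, config

def not_ready_pods() -> list[tuple[str, str, str]]:
    """Return (namespace, pod, phase) for every pod that is not in a Ready state."""
    config.load_kube_config()   # or config.load_incluster_config() when running in-cluster
    v1 = client.CoreV1Api()
    problems = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        conditions = pod.status.conditions or []
        ready = any(c.type == "Ready" and c.status == "True" for c in conditions)
        if not ready:
            problems.append(
                (pod.metadata.namespace, pod.metadata.name, pod.status.phase or "Unknown")
            )
    return problems

if __name__ == "__main__":
    for ns, name, phase in not_ready_pods():
        print(f"{ns}/{name}: {phase}")
```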

Posted 3 days ago

Apply

0 years

0 Lacs

India

Remote

Quantitative Developer – Alpha Research & Systems Engineering (Python)
Company: Valbonne Capital LLC
Location: Remote (Global applications welcome)

About the Role: We're looking for a world-class Quantitative Developer who thrives at the intersection of alpha research, trading logic, and high-performance system design. You'll work directly with the founders to develop and test trading strategies across asset classes, engineer performant infrastructure, and help scale our internal quant platform. This is not an execution-only dev job; we're seeking a visionary: someone who understands markets deeply, is obsessed with edge, and can think and build independently.

Who You Are:
You live and breathe markets, not just technicals or code, but actual market structure.
You're fluent in options, payoffs, arbitrage, order books, and alpha hypothesis testing.
You're equally comfortable writing research backtests and deploying multithreaded Python systems that move real capital.
You don't wait for instructions; you bring bold ideas, test them fast, and iterate.
You've probably been called the "smartest person in the room" more than once.

Requirements:
Strong capital markets knowledge: equities, options, arbitrage mechanics, and structured payoffs
Python engineering expertise: multithreading/async, low-latency data handling, Redis integration. C++ is a plus!
Ability to design and run custom backtests (vectorized and event-driven)
Solid experience with external APIs, including trading/broker/data APIs
Comfort working in fast-paced, self-directed environments

Bonus Points For:
Prior alpha generation or profitable live deployments
Experience with statistical modeling or ML in a quant context
Experience at a prop firm, quant hedge fund, or crypto-native trading firm
Deep experience with event-driven systems or order book simulation

Why Valbonne Capital?
We're not building a clone of what already exists. We're pursuing asymmetric, underexploited trading strategies and building high-leverage infrastructure to execute them. If you've been looking for a seat at the table to actually shape strategy, this is it. Apply if you're ready to operate at the edge of alpha, execution, and creativity.
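As a flavour of the "vectorized backtests" called for above, here is a minimal sketch on synthetic data; the moving-average windows, price path, and next-bar entry convention are illustrative assumptions, not a real strategy or the firm's methodology.

```python
# Minimal vectorized backtest sketch: a moving-average crossover on a synthetic
# price path. Parameters and data are placeholders, not a real strategy.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1000))))  # synthetic prices

fast = prices.rolling(10).mean()
slow = prices.rolling(50).mean()
# Long when the fast MA is above the slow MA; shift(1) enters on the next bar
# to avoid look-ahead bias.
position = (fast > slow).astype(int).shift(1).fillna(0)

returns = prices.pct_change().fillna(0)
strategy_returns = position * returns
equity = (1 + strategy_returns).cumprod()

sharpe = np.sqrt(252) * strategy_returns.mean() / strategy_returns.std()
print(f"final equity multiple: {equity.iloc[-1]:.2f}, annualized Sharpe: {sharpe:.2f}")
```

An event-driven counterpart would replay ticks or order-book updates through the same signal logic one event at a time, trading vectorized speed for execution realism.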

Posted 3 days ago

Apply

0 years

0 Lacs

India

On-site

About Valorant
Valorant is a fast-growing procurement consulting and technology firm, partnering with leading global organizations to solve complex business challenges through data-driven strategies, digital transformation, and technology-enabled solutions. We are building innovative tools and platforms to drive real impact for our clients. As we scale, we're looking for a passionate Full Stack Engineer to help us architect, build, and deliver enterprise-grade SaaS products—working at the intersection of UX, data, and cutting-edge backend systems.

Key Responsibilities
Collaborate closely with Founders, Partners, designers, and fellow engineers to design and implement robust, scalable, and secure full-stack web applications.
Architect, build, and maintain system infrastructure for dynamic workflow and form-based SaaS tools, ensuring seamless integration between frontend and backend.
Design, develop, and optimize relational and/or NoSQL databases for efficient storage, querying, and retrieval of application and user data.
Build and maintain efficient, secure, and well-documented RESTful/GraphQL APIs to power frontend interfaces and external integrations.
Integrate with third-party services and APIs—including authentication (SSO, OAuth), HRIS, ERP, communication tools such as Slack, and document management solutions.
Ensure high performance, responsiveness, and reliability of the entire application stack, proactively identifying and resolving bottlenecks.
Implement and uphold industry best practices in code quality, testing, CI/CD, and DevOps to enable fast and safe product iterations.

Key Requirements
Front-end: Next.js/React.js, TypeScript/JavaScript (ES6+), Redux, Tailwind/shadcn-ui
Back-end: Node.js / Express.js, RESTful and/or GraphQL APIs
Database: Redis / PostgreSQL / MySQL
Familiarity with async programming, message queues, and background job processing
Working knowledge of established software design patterns, efficient data structures, and code optimization
Good understanding of distributed systems, microservices, and modular architecture principles
Strong analytical and troubleshooting skills, quick learner, and self-driven

Preferred Skills
Experience with cloud platforms (AWS, Azure, or GCP)
Familiarity with DevOps tools (Docker, GitHub Actions, CI/CD pipelines, etc.)
Exposure to data visualization or visual flow design libraries (Chart.js, D3.js, React Flow, JointJS)
Experience designing low-latency, high-availability, and high-performance backend applications
Basic knowledge of Agile/Scrum methodology
Prior experience working with startups or in fast-paced startup environments
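The "async programming, message queues, and background job processing" requirement above describes a common pattern: request handlers enqueue work and a worker drains it without blocking responses. The role's stack is Node.js/TypeScript; purely as an illustration of the pattern, here is a minimal Python/asyncio sketch with placeholder job names.

```python
# Illustrative background-job pattern: handlers push work onto a queue and a
# worker drains it asynchronously. Job names and timing are placeholders.
import asyncio

async def worker(queue: asyncio.Queue) -> None:
    """Process jobs one at a time; real systems add retries and dead-lettering."""
    while True:
        job = await queue.get()
        if job is None:            # sentinel: shut down cleanly
            queue.task_done()
            break
        await asyncio.sleep(0.1)   # stand-in for sending email, building a report, etc.
        print(f"processed job: {job}")
        queue.task_done()

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    worker_task = asyncio.create_task(worker(queue))
    for job_id in range(3):        # stand-in for API requests enqueuing work
        await queue.put(f"export-report-{job_id}")
    await queue.put(None)          # signal shutdown after the last job
    await queue.join()             # wait until every queued job is marked done
    await worker_task

if __name__ == "__main__":
    asyncio.run(main())
```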

Posted 3 days ago

Apply
