0 years
4 - 9 Lacs
Hyderābād
On-site
Overview:
WHAT YOU DO AT AMD CHANGES EVERYTHING
We care deeply about transforming lives with AMD technology to enrich our industry, our communities, and the world. Our mission is to build great products that accelerate next-generation computing experiences – the building blocks for the data center, artificial intelligence, PCs, gaming and embedded. Underpinning our mission is the AMD culture. We push the limits of innovation to solve the world's most important challenges. We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. AMD together we advance_

Responsibilities:
Senior Software Architect – GPU Kernel Optimization & Distributed AI Systems

THE TEAM:
Join AMD's high-impact team at the heart of innovation in AI, ML, and high-performance computing (HPC). We're a collaborative group of software architects and GPU engineers focused on pushing the boundaries of AI model performance across distributed, GPU-accelerated platforms. Our work drives the next generation of AMD's AI software stack, enabling large-scale machine learning training and inference workloads in data centers and enterprise environments.

THE ROLE:
As a Senior Software Developer, you will drive both GPU kernel-level optimization and distributed software efforts for large-scale AI workloads. This is a technical leadership role with direct influence over critical software components in AMD's AI stack. You'll architect and implement optimized compute kernels, guide software teams through the full product lifecycle, and work closely with internal and external partners to deploy scalable, high-performance solutions.

THE PERSON:
We're looking for a highly skilled, deep systems thinker who thrives in complex problem domains involving parallel computing, GPU architecture, and AI model execution. You are confident leading software architecture decisions and know how to translate business goals into robust, optimized software solutions. You're just as comfortable writing performance-critical code as you are guiding agile development teams across product lifecycles. Ideal candidates have a strong balance of low-level programming, distributed systems knowledge, and leadership experience, paired with a passion for AI performance at scale.

KEY RESPONSIBILITIES:
- GPU Kernel Optimization: Develop and optimize GPU kernels to accelerate inference and training of large machine learning models while ensuring numerical accuracy and runtime efficiency.
- Multi-GPU and Multi-Node Scaling: Architect and implement strategies for distributed training/inference across multi-GPU/multi-node environments using model/data parallelism techniques.
- Performance Profiling: Identify bottlenecks and performance limitations using profiling tools; propose and implement optimizations to improve hardware utilization.
- Parallel Computing: Design and implement multi-threaded and synchronized compute techniques for scalable execution on modern GPU architectures.
- Benchmarking & Testing: Build robust benchmarking and validation infrastructure to assess performance, reliability, and scalability of deployed software.
- Documentation & Best Practices: Produce technical documentation and share architectural patterns, code optimization tips, and reusable components.

PREFERRED EXPERIENCE:
Software Team Leadership
- Collaboration with customers and business units to define deliverables and roadmaps.
- Interfacing with executive leadership on program progress and strategic planning.
- Experience in production-level software deployment (e.g., upstreaming to open source, commercial rollouts).
Software Architecture
- Deep experience with GPU kernel optimization in modern C++ (C++17/20).
- Working knowledge of frameworks such as PyTorch, vLLM, Cutlass, Kokkos.
- Practical expertise in CPU/GPU architecture and system-level performance tuning.
- Proficiency in Python scripting and infrastructure automation.
- Application of software design patterns and industry-standard engineering practices.
GPU & Low-Level Optimization
- Hands-on experience with CUDA and low-level GPU programming.
- Kernel optimization in assembly and tight loops for latency-sensitive code.
- Proficiency with performance profiling tools (Nsight, VTune, Perf, etc.).
- Experience with distributed computing strategies in AI environments (multi-GPU, NCCL, MPI).
- Strong debugging, problem-solving, and performance tuning skills in complex systems.

ACADEMIC CREDENTIALS:
Bachelor's or Master's degree in Computer Engineering, Electrical Engineering, Computer Science, or a related technical field. Advanced degrees or published work in HPC, GPU computing, or AI systems is a plus.

#LI-NR1

Qualifications:
Benefits offered are described: AMD benefits at a glance.
AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.
Posted 4 hours ago
4.0 years
3 - 4 Lacs
Hyderābād
Remote
Overview:
As an Analyst, Data Modeling, your focus would be to partner with D&A Data Foundation team members to create data models for Global projects. This would include independently analyzing project data needs, identifying data storage and integration needs/issues, and driving opportunities for data model reuse, satisfying project requirements. The role will advocate Enterprise Architecture, Data Design, and D&A standards and best practices. You will be performing all aspects of Data Modeling, working closely with the Data Governance, Data Engineering, and Data Architecture teams. As a member of the data modeling team, you will create data models for very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. The primary responsibilities of this role are to work with data product owners, data management owners, and data engineering teams to create physical and logical data models with an extensible philosophy to support future, unknown use cases with minimal rework. You'll be working in a hybrid environment with in-house, on-premise data sources as well as cloud and remote systems. You will establish data design patterns that will drive flexible, scalable, and efficient data models to maximize value and reuse.

Responsibilities:
- Complete conceptual, logical and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Databricks, Snowflake, Azure Synapse or other cloud data warehousing technologies.
- Govern data design/modeling – documentation of metadata (business definitions of entities and attributes) and construction of database objects, for baseline and investment-funded projects, as assigned.
- Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting.
- Support assigned project contractors (both on- & off-shore), orienting new contractors to standards, best practices, and tools.
- Contribute to project cost estimates, working with senior members of the team to evaluate the size and complexity of the changes or new development.
- Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework.
- Develop a deep understanding of the business domain and enterprise technology inventory to craft a solution roadmap that achieves business objectives and maximizes reuse.
- Partner with IT, data engineering and other teams to ensure the enterprise data model incorporates key dimensions needed for proper management: business and financial policies, security, local-market regulatory rules, consumer privacy by design principles (PII management), all linked across fundamental identity foundations.
- Drive collaborative reviews of design, code, data, and security feature implementations performed by data engineers to drive data product development.
- Assist with data planning, sourcing, collection, profiling, and transformation.
- Create Source To Target Mappings for ETL and BI developers.
- Show expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical and data lakes; data streaming (consumption/production); data in-transit.
- Develop reusable data models based on cloud-centric, code-first approaches to data management and cleansing.
- Partner with the Data Governance team to standardize their classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders.
- Support data lineage and mapping of source system data to canonical data stores for research, analysis and productization.

Qualifications:
- 4+ years of overall technology experience that includes at least 2+ years of data modeling and systems architecture.
- Around 2+ years of experience with Data Lake infrastructure, data warehousing, and data analytics tools.
- 2+ years of experience developing enterprise data models.
- Experience building solutions in the retail or supply chain space.
- Expertise in data modeling tools (ER/Studio, Erwin, IDM/ARDM models).
- Experience with integration of multi-cloud services (Azure) with on-premises technologies.
- Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations.
- Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
- Experience with at least one MPP database technology such as Redshift, Synapse, Teradata or Snowflake.
- Experience with version control systems like GitHub and deployment & CI tools.
- Experience with Azure Data Factory, Databricks and Azure Machine Learning is a plus.
- Experience with metadata management, data lineage, and data glossaries is a plus.
- Working knowledge of agile development, including DevOps and DataOps concepts.
- Familiarity with business intelligence tools (such as Power BI).
Posted 4 hours ago
12.0 years
0 Lacs
Hyderābād
On-site
To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts.

Job Category
Software Engineering

Job Details

About Salesforce
We're Salesforce, the Customer Company, inspiring the future of business with AI + Data + CRM. Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way. And, we empower you to be a Trailblazer, too — driving your performance and career growth, charting new paths, and improving the state of the world. If you believe in business as the greatest platform for change and in companies doing well and doing good – you've come to the right place.

What You Will Be Doing
Come help us shape the future of Commerce Search! As part of one of the fastest growing clouds at Salesforce, the Commerce Search teams have an exciting long-term roadmap dedicated to building scalable services that will power searching for products and content amongst the world's most trusted brands. Since search is the entry point for most commerce shoppers, we play a critical role in our customers' success. We are a collaborative team that strives for quality in product, software engineering best practices, and focus on impact. We are looking for a Principal Software Engineer to join a new team helping build out and scale our Search platform.

Responsibilities/Duties
- Lead feature development throughout the software lifecycle: gather requirements, design, build, test, and deploy
- Help scale our search services, which need to support low latency and high throughput
- Have experience with reliably delivering high-quality, distributed, testable, and scalable software
- Mentor others in development technologies, tools, and processes
- Present your own designs to internal/external groups and review designs of others
- Develop test strategies, design automation frameworks, write unit/functional tests to drive up code coverage and automation metrics
- Service ownership: operate the service, improve supportability, and help resolve customer issues
- Improve developer productivity through tooling and software engineering best practices
- Participate in the team's on-call rotation to address sophisticated problems in real-time and keep services operational and highly available

Skills and Experience
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
- A good understanding of object-oriented programming and experience in at least one object-oriented programming language, e.g. Java, C++.
- At least 12 years of object-oriented programming experience
- Experience using Git / GitHub
- Experience in automated testing, including unit and functional testing using Java, JUnit, JSUnit, Selenium
- Previous experience with SaaS development
- Team player, get-it-done attitude, and quick learner
- Understanding of Agile/Scrum methodologies
- Experience with search technologies like Lucene, Elasticsearch, Solr is a huge plus!
- Experience working on building search platforms/features is a huge plus!

Accommodations
If you require assistance due to a disability applying for open positions please submit a request via this Accommodations Request Form.

Posting Statement
Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean exactly? It means that at Salesforce, we believe in equality for all. And we believe we can lead the path to equality in part by creating a workplace that's inclusive, and free from discrimination. Know your rights: workplace discrimination is illegal. Any employee or potential employee will be assessed on the basis of merit, competence and qualifications – without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit. The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education.
Posted 4 hours ago
40.0 years
1 - 4 Lacs
Hyderābād
On-site
It's fun to work in a company where people truly BELIEVE in what they're doing!
We're committed to bringing passion and customer focus to the business.

Job Description
Machine Learning Engineer – RAG & Fine-Tuning
This role requires working from our local Hyderabad office 2-3x a week.
Location: Hyderabad, Telangana, India

ABOUT ABC FITNESS
ABC Fitness (ABC) is the global market leader in providing technology solutions to the fitness industry. Built on a 40+ year reputation of excellence, ABC helps fitness providers of all sizes and backgrounds to turn their visions into seamless reality. Founded in 1981, ABC serves 40 million+ members globally, processing over $11B+ in payments annually for 31,000 clubs across 92+ countries. Our integrated suite includes best-of-breed platforms: Evo, Glofox, Ignite, and Trainerize. As a Thoma Bravo portfolio company, ABC is backed by the leading private equity firm focused on enterprise software. Learn more at abcfitness.com.

ABOUT THE TEAM
The AI Platform Engineering team at ABC builds scalable, high-performance AI systems that power next-generation fitness technology. We specialize in retrieval-augmented generation (RAG) architectures and fine-tuning methodologies to deliver context-aware, cost-efficient AI solutions. As our Machine Learning Engineer, you will be responsible for all retrieval and intelligence behind the LLM, delivering performant, low-cost, high-context AI features. At ABC, we love entrepreneurs because we are entrepreneurs. We roll our sleeves up, we act fast, and we learn together.

WHAT YOU'LL DO
- Handle embeddings and chunking strategies to optimize document and data retrieval for GenAI-powered features.
- Manage vector stores and retrieval workflows using leading vector databases (Pinecone, FAISS, Weaviate, Azure AI Search) to ensure efficient, scalable access to unstructured and structured data.
- Fine-tune small and large language models using frameworks such as Hugging Face and OpenAI APIs, tailoring models to domain-specific requirements and improving performance on targeted tasks.
- Optimize cost and reduce latency by implementing best practices for token management, model evaluation, and cloud resource utilization.
- Collaborate with engineering, product, and data teams to integrate RAG pipelines into production systems, ensuring reliability, scalability, and security.
- Stay up to date with the latest advancements in retrieval-augmented generation, vector search, and LLM fine-tuning, applying new techniques to improve system performance and user experience.

WHAT YOU'LL NEED
- 4–7 years of experience in machine learning or AI engineering, with a proven track record in RAG, vector search, and LLM fine-tuning.
- Deep expertise with vector databases such as Pinecone, FAISS, Weaviate, or Azure AI Search, including experience designing retrieval workflows and managing embeddings.
- Familiarity with Hugging Face and OpenAI fine-tuning APIs, and a strong understanding of chunking strategies for optimizing retrieval.
- Proficiency in Python and experience with ML frameworks (PyTorch, TensorFlow) and cloud platforms (AWS, Azure).
- Understanding of token management, evaluation tuning, and cost optimization for large-scale AI deployments.
- Strong problem-solving skills, a collaborative mindset, and the ability to communicate complex technical concepts to both technical and non-technical stakeholders.

AND IT'S GREAT TO HAVE
- Experience with NLP, NLU, and NLG techniques for conversational AI or information retrieval.
- Exposure to MLOps tools for model monitoring, evaluation, and deployment (MLflow, Weights & Biases).
- Experience with model compression, quantization, or other efficiency techniques.
- Certifications in AWS Machine Learning Specialty or Microsoft AI Engineer.

WHAT'S IN IT FOR YOU:
- Purpose-led company with a Values-focused culture – Best Life, One Team, Growth Mindset
- Time Off – competitive PTO plans with 15 days of earned accrued leave, 12 days of sick leave, and 12 days of casual leave per year
- 11 holidays plus 4 Days of Disconnect – once a quarter, we take a collective breather and enjoy a day off together around the globe. #oneteam
- Group Mediclaim insurance coverage of INR 500,000 for employee + spouse, 2 kids, and parents or parents-in-law, including EAP counseling
- Life Insurance and Personal Accident Insurance
- Best Life Perk – we are committed to meeting you wherever you are in your fitness journey with a quarterly reimbursement
- Premium Calm App – enjoy tranquility with a Calm App subscription for you and up to 4 dependents over the age of 16
- Support for working women with financial aid towards crèche facilities, ensuring a safe and nurturing environment for their little ones while they focus on their careers.

We're committed to diversity and passion, and encourage you to apply, even if you don't demonstrate all the listed skillsets!

ABC'S COMMITMENT TO DIVERSITY, EQUALITY, BELONGING AND INCLUSION:
ABC is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We are intentional about creating an environment where employees, our clients and other stakeholders feel valued and inspired to reach their full potential and make authentic connections. We foster a workplace culture that embraces each person's diversity, including the extent to which they are similar or different. ABC leaders believe that an equitable and inclusive culture is not only the right thing to do, it is a business imperative. Read more about our commitment to diversity, equality, belonging and inclusion at abcfitness.com

ABOUT ABC:
ABC Fitness (abcfitness.com) is the premier provider of software and related services for the fitness industry and has built a reputation for excellence in support for clubs and their members. ABC is the trusted provider to boost performance and create a total fitness experience for over 41 million members of clubs of all sizes, whether a multi-location chain, franchise or an independent gym. Founded in 1981, ABC helps over 31,000 gyms and health clubs globally perform better and more profitably, offering a comprehensive SaaS club management solution that enables club operators to achieve optimal performance. ABC Fitness is a Thoma Bravo portfolio company, a private equity firm focused on investing in software and technology companies (thomabravo.com).

#LI-HYBRID

If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us!
Posted 4 hours ago
2.0 - 3.0 years
6 - 7 Lacs
Hyderābād
On-site
AI Engineer: Shape the Future of Autonomous Intelligence

About Teradata
At Teradata, we don't just manage data—we unlock its full potential. Our ClearScape Analytics™ platform and pioneering Enterprise Vector Store empower the world's largest organizations to extract transformative value from their most complex data. We're at the forefront of innovation in Artificial Intelligence, especially in the dynamic space of autonomous and agentic systems. Ready to learn from industry leaders and make a real impact?

The Opportunity: Dive into Enterprise Agentic AI
Are you a curious, motivated individual passionate about the next frontier of AI? Have you developed intelligent systems that can reason, learn, and act autonomously? Join us as an AI Engineer and help shape the future of enterprise intelligence using agentic AI. You'll collaborate with other experienced AI engineers and data scientists on cutting-edge projects that redefine how businesses harness data. This is a hands-on opportunity to apply your expertise and contribute to the development of intelligent agents and agentic AI workflows that drive insights and automate complex engineering flows.

What You'll Do
- Improve Engineering Efficiency: Collaborate with a passionate team of engineers to explore diverse engineering processes and identify opportunities to enhance efficiency using agentic AI.
- Build Agentic Systems: Contribute to the design, implementation, and testing of components for robust AI agents and multi-agent orchestration.
- Leverage Vector Stores: Work with Teradata's Enterprise Vector Store to develop intelligent retrieval-augmented generation (RAG) pipelines.
- Work with Real-World Data: Gain experience processing large-scale, complex datasets within the Teradata ecosystem.
- Research & Prototype: Engage with the latest research in agentic AI, prompt engineering, and autonomous systems to prototype innovative ideas.
- System Integration: Help integrate LLM-based agents with retrieval tools, structured/unstructured inputs, and downstream Teradata products.

Who You Are
- 2-3 years of experience developing solutions that leverage LLMs for complex business processes.
- Hands-on experience developing and deploying agentic AI systems that automate and optimize manual engineering workflows using orchestration frameworks or multi-step reasoning workflows.
- Master's or Ph.D. in Data Science, Artificial Intelligence, or a related field.
- Hands-on experience with LLM APIs (e.g., OpenAI, Claude, Gemini) and agent toolkits (e.g., AgentBuilder, AutoGen, LangGraph, CrewAI).
- Understanding of chain-of-thought reasoning, prompt tuning, or context window management.
- Knowledge of evaluation metrics for agent performance, latency, and reliability.
- Hands-on with Python and cloud automation.
- Experience developing full-stack applications is a plus.
- Passionate about innovation and advancing the state of AI.
- Curious about text, data, workflows, and multimodal reasoning.

Why Join Teradata?
- Real-World Impact: Work on meaningful projects that address complex enterprise challenges.
- Innovative Technology: Gain deeper experience with agentic AI, generative AI, and large-scale data platforms.
- Mentorship & Development: Learn from experienced professionals in AI, machine learning, and data engineering.
- Collaborative Culture: Join a supportive, inclusive team that values creativity and continuous learning.
- Career Progression: Build a strong portfolio and skill set for a future in advanced AI.

#LI-VB1
Posted 4 hours ago
0 years
3 - 5 Lacs
Hyderābād
On-site
It's fun to work at a company where people truly believe in what they are doing!

Job Description:
At Epiq AI Labs, we're building an AI-driven platform for legal discovery and knowledge management. Our mission is to transform how law firms and legal service providers manage complex litigation, process documents, and uncover actionable insights. By empowering our partners with cutting-edge technology, our new AI-driven platform enables the discovery of hidden connections, accelerates decision-making, and ultimately drives better legal outcomes.

The Role
Backend Software Engineers on the team design and implement scalable, high-performance systems that drive our platform's AI and data-processing capabilities. As a key player in the product lifecycle, you'll be involved from concept to deployment, ensuring that every solution meets the unique challenges of legal professionals. You'll work closely with cross-functional teams, including product managers, frontend engineers, and legal experts, to build solutions that solve real-world problems in the legal space. Your day-to-day may include collaborating with teammates to architect new services, optimizing existing systems for performance and scalability, or designing robust APIs to power seamless user experiences. Each challenge presents an opportunity to enhance the legal workflows of our clients.

Core Responsibilities
- Architect, develop, and maintain scalable backend systems that power AI-driven legal solutions.
- Establish and uphold high coding standards through best practices, code reviews, and fostering a culture of continuous learning and improvement.
- Design and implement APIs that enable seamless interaction between frontend applications and backend systems, while integrating external services.
- Optimize applications for performance, ensuring low-latency operations for data-intensive legal workflows.
- Design efficient data structures and algorithms to process and analyze large-scale legal datasets.
- Collaborate with cross-functional teams to improve user workflows and deliver cohesive, intuitive experiences across the platform.

Technologies We Use
- Backend languages: Python, Java.
- Database systems: Lucene, PostgreSQL, and Qdrant.
- Containerization and orchestration: Docker, Kubernetes.
- CI/CD and build tools: GitHub Actions, CircleCI, Gradle.

What We Value
- Deep understanding of server-side logic, distributed systems, and efficient data processing.
- Passion for creating user-centric solutions, ensuring seamless and intuitive workflows for legal professionals.
- Experience building and maintaining high-quality software in a fast-paced CI/CD environment.
- Strong collaboration skills, with the ability to work effectively across technical and non-technical teams.
- Comfort working in a dynamic environment with evolving objectives and iterative product development.

What We Require
- Proven experience in designing, developing, and maintaining backend systems for live production environments.
- Strong grasp of data structures, distributed system architecture, API design for microservices, and backend best practices.
- Background in Computer Science, Software Engineering, or a related field.
- Strong written and verbal communication skills, with the ability to quickly incorporate feedback and uphold high-quality standards.
- Bachelor's degree in Computer Science, Software Engineering, or a related field, or equivalent experience.

If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us!
It is Epiq’s policy to comply with all applicable equal employment opportunity laws by making all employment decisions without unlawful regard or consideration of any individual’s race, religion, ethnicity, color, sex, sexual orientation, gender identity or expressions, transgender status, sexual and other reproductive health decisions, marital status, age, national origin, genetic information, ancestry, citizenship, physical or mental disability, veteran or family status or any other basis protected by applicable national, federal, state, provincial or local law. Epiq’s policy prohibits unlawful discrimination based on any of these impermissible bases, as well as any bases or grounds protected by applicable law in each jurisdiction. In addition Epiq will take affirmative action for minorities, women, covered veterans and individuals with disabilities. If you need assistance or an accommodation during the application process because of a disability, it is available upon request. Epiq is pleased to provide such assistance and no applicant will be penalized as a result of such a request. Pursuant to relevant law, where applicable, Epiq will consider for employment qualified applicants with arrest and conviction records.
Posted 4 hours ago
0 years
5 - 9 Lacs
Gurgaon
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary
BizOps Engineer II

We are the global technology company behind the world's fastest payments processing network. We are a vehicle for commerce, a connection to financial systems for the previously excluded, a technology innovation lab, and the home of Priceless®. We ensure every employee has the opportunity to be a part of something bigger and to change lives. We believe as our company grows, so should you. We believe in connecting everyone to endless, priceless possibilities.

Mastercard has over 2 billion payment cards issued by 25,000+ banks across 190+ countries and territories, amassing over 10 petabytes of data. Millions of transactions flow to Mastercard in real time, providing an ideal environment to apply and leverage AI at scale. The AI team is responsible for building and deploying innovative AI solutions for all divisions within Mastercard, securing a competitive advantage. Our objectives include achieving operational efficiency, improving customer experience, and ensuring robust value propositions for our core products (Credit, Debit, Prepaid) and services (recommendation engine, anti-money laundering, fraud risk management, cybersecurity).

Job Description:
This position sits in the AI/ML product engineering team. The team is looking for a Platform Support Engineer (Level 1 support) who can help us triage and solve problems and monitor our Spark-on-Hadoop and Kubernetes-based deployments. The person would be required to support daily operations with a distinct focus on triage, impact mitigation, and root cause identification through deep knowledge of the corresponding product technologies. Are you a container administration geek, or do you like working on Hadoop-based jobs? Have you played a key role in supporting mission-critical applications? Are you willing to scale your Kubernetes-based security and other administration skills?

Role
- Contribute to efforts to enable engineering teams responsible for implementing production readiness and engineering compliance for Mastercard AI/ML applications or products.
- Scale systems sustainably through mechanisms like automation, and evolve systems by pushing for changes that improve reliability and velocity.
- Maintain services once they are live by measuring and monitoring availability, latency and overall system health.
- Proactively practice Kubernetes-based application deployment and environment readiness.
- Support deployments of code into multiple lower environments.
- Support current processes with an emphasis on automating everything as soon as possible.
- Assist the BizOps Application team and contribute where necessary at a technical level.
- Support a highly available, highly secure production environment running on Linux using various open source applications.
- Set standards and define detailed processes for effectively managing a production environment with minimal downtime.
- Perform daily monitoring and maintenance of all software infrastructure components.

All About You
- Bachelor's degree in Information Systems, Information Technology, Computer Science or Engineering, or equivalent work experience.
- Background in the container stack and Hadoop-based deployment monitoring would be extra helpful.
- Experience designing and implementing a Git-based code management strategy that supports multiple environment deployments in parallel.
- Experience with automation for branch management, code promotions, and version management is a plus.
- Experience in one or more of the following: C, C++, Java, Python, Go, Perl or Ruby.
- Strong understanding of all phases of the Software Development Life Cycle.
- Deliver time-saving process improvements that will be repeatable, maintainable, valuable, and fit into Mastercard's Business Operations vision.
- Self-motivated and highly collaborative.
- Comfortable working in a dynamic and fast-paced environment.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard's security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.
Posted 4 hours ago
6.0 years
22 - 24 Lacs
Gurgaon
On-site
Key Responsibilities:

Technical Leadership & Design:
- Lead the technical design and implementation of complex software modules primarily in C/C++ and Golang for telecom applications, ensuring adherence to architectural principles and best practices.
- Actively participate in architectural discussions, evaluate technical options, and contribute to decisions that shape the future of our telecom platforms.
- Translate high-level requirements into detailed technical designs and guide the team through implementation.

Software Development & Engineering:
- Develop, test, and debug high-performance, low-latency code in both C/C++ and Golang for mission-critical telecom systems.
- Write clean, efficient, and well-documented code, focusing on quality and maintainability.
- Integrate and optimize software components within a Linux environment, ensuring optimal resource utilization.

Kubernetes & Cloud Native Operations:
- Design and implement solutions leveraging Kubernetes for container orchestration, deployment, and lifecycle management of telecom applications.
- Understand and apply cloud-native principles to optimize applications for deployment in containerized environments.
- Troubleshoot and resolve issues related to Kubernetes deployments, pods, services, and underlying infrastructure.

Telecom Domain Application:
- Apply a solid understanding of telecom protocols, standards (e.g., 5G, 4G, IMS, VoIP, SDN/NFV), and network architectures to ensure solutions meet industry requirements.
- Address performance, reliability, and security challenges inherent in real-time telecom systems.

Mentorship & Collaboration:
- Provide technical guidance and mentorship to junior and mid-level engineers, fostering their growth and development.
- Participate actively in code reviews, offering constructive feedback and ensuring adherence to coding standards.
- Collaborate effectively with cross-functional teams including product management, QA, and operations to ensure seamless product delivery.

Performance & Optimization:
- Identify and resolve performance bottlenecks in C/C++ and Golang applications and Linux environments.
- Implement strategies for efficient resource utilization and system stability.

Required Skills & Experience:
- 6-8 years of progressive experience in software development.
- Strong expertise in C/C++ programming, including object-oriented design, multi-threading, inter-process communication (IPC), and network programming.
- Proven hands-on experience with Golang (Go), including concurrency patterns, goroutines, channels, and standard library usage.
- Minimum 3-4 years of hands-on experience in the telecom domain, demonstrating a solid understanding of telecom protocols and architectures (e.g., SIP, RTP, Diameter, 4G/5G core network elements).
- Hands-on experience with Kubernetes, including deploying, managing, and troubleshooting containerized applications.
- In-depth experience with Linux environments, including system programming, shell scripting, debugging tools (GDB, Valgrind), and performance analysis.
- Experience with version control systems (e.g., Git).
- Solid understanding of data structures and algorithms.
- Excellent problem-solving, analytical, and debugging skills.
- Strong communication and interpersonal skills, with the ability to lead and collaborate effectively within a team.

Job Type: Full-time
Pay: ₹2,200,000.00 - ₹2,400,000.00 per year
Schedule: Day shift
Work Location: In person
Posted 4 hours ago
2.0 years
0 Lacs
Gurgaon
On-site
Who We Are as a Company:
GoKwik was founded in 2020 with one simple mission: to democratise the shopping experience and increase the GMV realisation for e-commerce businesses. The company is backed by Sequoia Capital India, Matrix Partners India, RTP Global, and marquee angels. GoKwik is an e-commerce enablement company focussed predominantly on solving crucial e-commerce issues like boosting conversion rates across the e-commerce funnel and reducing RTO for our partners. It leverages AI/ML technologies to solve hard-hitting problems like RTO to increase CoD conversion rates. GoKwik's 1-click Kwik checkout improves checkout conversion rates, ensuring higher GMV realisation and reduced marketing CAC. What we do is unique, with zero immediate competition in India today, and we are therefore building a team of real rockstars in their field to fuel this profit-making start-up, which is growing at a rapid pace.

Summary:
As an SDE, you will interact with the entire engineering team to solve problems and build, extend, optimise and refactor the architecture of our growing product. This will involve working on our existing codebase along with new projects. GoKwik has a highly passionate engineering-driven environment where your contribution will be felt immediately.

You'll spend time on the following:
- Writing testable and efficient code
- Design and implementation of low-latency, high-availability, and performant applications, implementing business logic and developing APIs and services
- Building reusable code and libraries for future use

We're Excited About You If You Have:
- 2+ years' experience in software development
- Hands-on experience in front-end development with React
- Knowledge of frontend frameworks
- Good understanding of server-side templating languages
- Good understanding of front-end technologies, such as HTML5 and CSS3
- Expertise with Linux-based systems
- Proficient understanding of code versioning tools, such as Git
- Experience with cloud-based platforms and tools such as AWS, GCP, Docker, and Kubernetes

Some Important Traits We Look For in This Role:
- Independent, resourceful, analytical, and able to solve problems effectively
- Ability to be flexible, agile, and thrive in chaos
- Excellent oral and written communication skills

Our Core Value Cultures:
- Merchant 1st
- Talent
- Innovation

The pace of our growth is incredible – if you want to tackle hard and interesting problems at scale, and create an impact within an entrepreneurial environment, come join us!
Posted 4 hours ago
25.0 years
3 - 7 Lacs
Chennai
Remote
The Company PayPal has been revolutionizing commerce globally for more than 25 years. Creating innovative experiences that make moving money, selling, and shopping simple, personalized, and secure, PayPal empowers consumers and businesses in approximately 200 markets to join and thrive in the global economy. We operate a global, two-sided network at scale that connects hundreds of millions of merchants and consumers. We help merchants and consumers connect, transact, and complete payments, whether they are online or in person. PayPal is more than a connection to third-party payment networks. We provide proprietary payment solutions accepted by merchants that enable the completion of payments on our platform on behalf of our customers. We offer our customers the flexibility to use their accounts to purchase and receive payments for goods and services, as well as the ability to transfer and withdraw funds. We enable consumers to exchange funds more safely with merchants using a variety of funding sources, which may include a bank account, a PayPal or Venmo account balance, PayPal and Venmo branded credit products, a credit card, a debit card, certain cryptocurrencies, or other stored value products such as gift cards, and eligible credit card rewards. Our PayPal, Venmo, and Xoom products also make it safer and simpler for friends and family to transfer funds to each other. We offer merchants an end-to-end payments solution that provides authorization and settlement capabilities, as well as instant access to funds and payouts. We also help merchants connect with their customers, process exchanges and returns, and manage risk. We enable consumers to engage in cross-border shopping and merchants to extend their global reach while reducing the complexity and friction involved in enabling cross-border trade. Our beliefs are the foundation for how we conduct business every day. We live each day guided by our core values of Inclusion, Innovation, Collaboration, and Wellness. Together, our values ensure that we work together as one global team with our customers at the center of everything we do – and they push us to ensure we take care of ourselves, each other, and our communities. Job Summary: What you need to know about the role: The role is for an experienced software engineer working on building futuristic and delightful payment platform features and experiences for PayPal's diverse customer base across the globe. The role will involve working with business teams, product teams, architects and engineers located across our global offices. We are looking for a Java Developer with experience in building high-performing, scalable, enterprise-grade applications. Meet Our Team: You’ll work alongside the best and the brightest engineering talent in the industry. We have opportunities in a wide range of areas including development, design, search, platform, test, quality, big data, front end and back end. As a core participant of your team, you’ll estimate engineering efforts, design your changes, implement and test your changes, push to live, and triage production issues. You need to be dynamic, collaborative, and curious as we build new experiences, improve existing products, and develop distributed systems powering the world’s largest e-commerce and payments websites at a scale only a few companies can match. 
Job Description:

Your way to impact:
We are looking for a technically strong candidate with hands-on development experience in Java who is self-motivated, a team player, curious to learn and grow, willing to challenge the status quo, and striving for excellence in everything they do, with a good professional and academic track record, to be part of a highly innovative payments processing team that handles transactions at scale.

Your day to day:
- Designing and developing high-volume, low-latency applications for mission-critical systems and delivering high availability and performance
- Contributing to all phases of the software development lifecycle
- Writing well-designed, testable, efficient code
- Interacting with other teams across the system software stack to clearly communicate and express your requirements and design

What you need to bring:
- 2+ years of experience developing software, primarily with Java/J2EE technologies
- Deep understanding of and extensive experience applying advanced object-oriented design and development principles
- Perform high-quality software analysis, design, development, testing and maintenance for the PayPal core technology platform
- Solid CS fundamentals is a must (data structures, algorithms, OS, databases)
- Technologically savvy with excellent communication and technical skills
- Knowledge of RESTful APIs and fundamentals of the web
- Very strong analytical and problem-solving skills
- Can-do attitude while solving difficult problems spanning multiple teams
- Focus on automation and test-driven development
- Ability to work with teams in remote locations to complete assigned tasks
- Ability to create test plans and test cases and track resolution of software defects
- Able to evangelize innovations, through prototyping or other means, and typically finds solutions to the most challenging framework-level bugs and issues
- Passionate about developing frameworks and reusable code; wants to make a difference
- Self-motivated and fully committed dynamic individual with the ability to adapt and adjust to a fast-paced environment
- Experience with source-code control systems
- Experience developing data-driven applications using an industry-standard RDBMS (Oracle, MySQL, etc.) and NoSQL databases, including strong data architecture and SQL development skills
- Experience with unit testing
- Experience using industry-standard software development tools (IDE / version control / continuous integration)
- Experience with Agile software development
- Experience with load and performance testing best practices
- Experience working in Agile/Scrum environments
- Experience in a Unix-based development environment (Unix, shell, Perl, Python) is a plus
- Expertise in online payments and related domains is a plus

We know the confidence gap and imposter syndrome can get in the way of meeting spectacular candidates. Please don't hesitate to apply.

Preferred Qualification:

Subsidiary: PayPal

Travel Percent: 0

For the majority of employees, PayPal's balanced hybrid work model offers 3 days in the office for effective in-person collaboration and 2 days at your choice of either the PayPal office or your home workspace, ensuring that you equally have the benefits and conveniences of both locations.

Our Benefits:
At PayPal, we're committed to building an equitable and inclusive global economy. And we can't do this without our most important asset—you. That's why we offer benefits to help you thrive in every stage of life.
We champion your financial, physical, and mental health by offering valuable benefits and resources to help you care for the whole you. We have great benefits including a flexible work environment, employee shares options, health and life insurance and more. To learn more about our benefits please visit https://www.paypalbenefits.com . Who We Are: Commitment to Diversity and Inclusion PayPal provides equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, pregnancy, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state, or local law. In addition, PayPal will provide reasonable accommodations for qualified individuals with disabilities. If you are unable to submit an application because of incompatible assistive technology or a disability, please contact us at talentaccommodations@paypal.com . Belonging at PayPal: Our employees are central to advancing our mission, and we strive to create an environment where everyone can do their best work with a sense of purpose and belonging. Belonging at PayPal means creating a workplace with a sense of acceptance and security where all employees feel included and valued. We are proud to have a diverse workforce reflective of the merchants, consumers, and communities that we serve, and we continue to take tangible actions to cultivate inclusivity and belonging at PayPal. Any general requests for consideration of your skills, please Join our Talent Community . We know the confidence gap and imposter syndrome can get in the way of meeting spectacular candidates. Please don’t hesitate to apply.
Posted 4 hours ago
5.0 years
2 - 8 Lacs
Noida
On-site
Level AI was founded in 2019 and is a Series C startup headquartered in Mountain View, California. Level AI revolutionizes customer engagement by transforming contact centers into strategic assets. Our AI-native platform leverages advanced technologies such as Large Language Models to extract deep insights from customer interactions. By providing actionable intelligence, Level AI empowers organizations to enhance customer experience and drive growth. Consistently updated with the latest AI innovations, Level AI stands as the most adaptive and forward-thinking solution in the industry.

Position Overview:
We are looking for a Lead Software Engineer to help raise the engineering bar for the entire technology stack at Level AI, including applications, platform and infrastructure. They will actively collaborate with team members and the wider Level AI engineering community to develop highly scalable and performant systems. They will be a technical thought leader who will help drive solving complex problems of today and the future by designing and building simple and elegant technical solutions. They will coach and mentor junior engineers and drive engineering best practices. They will actively collaborate with product managers and other stakeholders both inside and outside the team.

Competencies:
Large-scale distributed systems, search (such as Elasticsearch), high-scale messaging systems (such as Kafka), real-time job queues, high-throughput and low-latency systems, Python, Django, relational databases (such as PostgreSQL), data modeling, DB query optimization, caching, Redis, Celery, CI/CD, GCP, Kubernetes, Docker.

Responsibilities:
- Develop and execute the technical roadmap to scale Level AI's technology stack.
- Design and build highly scalable and low-latency distributed systems to process large-scale real-time data.
- Drive best-in-class engineering practices through the software development lifecycle.
- Drive operational excellence for critical services that need to have high uptime.
- Collaborate with a variety of stakeholders within and outside engineering to create technical plans to deliver on important business goals, and lead the execution of those.
- Stay up to date with the latest technologies and thoughtfully apply them to Level AI's tech stacks.

Requirements:
- Qualification: B.E/B.Tech/M.E/M.Tech/PhD from tier 1/2 engineering institutes, with relevant work experience at a top technology company.
- 5+ years of backend and infrastructure experience with a strong track record in development, architecture and design.
- Hands-on experience with large-scale databases, high-scale messaging systems and real-time job queues.
- Experience navigating and understanding large-scale systems, complex code-bases, and architectural patterns.
- Experience mentoring and providing technical leadership to other engineers in the team.

Nice to have:
- Experience with Google Cloud, Django, Postgres, Celery, Redis.
- Some experience with AI infrastructure and operations.

Compensation:
We offer market-leading compensation, based on the skills and aptitude of the candidate.

To learn more, visit: https://thelevel.ai/
Funding: https://www.crunchbase.com/organization/level-ai
LinkedIn: https://www.linkedin.com/company/level-ai/
Our AI platform: https://www.youtube.com/watch?v=g06q2V_kb-s
Posted 4 hours ago
6.0 years
5 - 8 Lacs
Noida
On-site
At Cotality, we are driven by a single mission—to make the property industry faster, smarter, and more people-centric. Cotality is the trusted source for property intelligence, with unmatched precision, depth, breadth, and insights across the entire ecosystem. Our talented team of 5,000 employees globally uses our network, scale, connectivity and technology to drive the largest asset class in the world. Join us as we work toward our vision of fueling a thriving global property ecosystem and a more resilient society. Cotality is committed to cultivating a diverse and inclusive work culture that inspires innovation and bold thinking; it's a place where you can collaborate, feel valued, develop skills and directly impact the real estate economy. We know our people are our greatest asset. At Cotality, you can be yourself, lift people up and make an impact. By putting clients first and continuously innovating, we're working together to set the pace for unlocking new possibilities that better serve the property industry. Job Description: In India, we operate as Next Gear India Private Limited, a fully-owned subsidiary of Cotality with offices in Kolkata, West Bengal, and Noida, Uttar Pradesh. Next Gear India Private Limited plays a vital role in Cotality's Product Development capabilities, focusing on creating and delivering innovative solutions for the Property & Casualty (P&C) Insurance and Property Restoration industries. While Next Gear India Private Limited operates under its own registered name in India, we are seamlessly integrated into the Cotality family, sharing the same commitment to innovation, quality, and client success. When you join Next Gear India Private Limited, you become part of the global Cotality team. Together, we shape the future of property insights and analytics, contributing to a smarter and more resilient property ecosystem through cutting-edge technology and insights. We are seeking a talented Senior DevOps Engineer with a strong background in Azure Cloud and Data Engineering to join our team. In this role, you will work closely with our data engineering and IT teams to build and optimize cloud infrastructure, automate deployment processes, and ensure efficient data pipeline operations. Your expertise in Azure DevOps , cloud infrastructure, and data engineering will be critical to the success of our cloud-based data initiatives. Key Responsibilities: Azure Cloud Infrastructure: Design, build, and maintain cloud infrastructure on Microsoft Azure to support data engineering workloads. Manage and scale Azure resources including Azure Data Factory , Azure Logic Apps, Azure Windows Functions Implement and maintain pipelines using Azure DevOps for data workflows and infrastructure automation. Data Engineering Collaboration: Collaborate with data engineers to streamline and optimize ETL/ELT processes. Support and maintain data pipelines for structured and unstructured data using tools like Azure Data Factory and Matillion . Ensure smooth integration of data sources and outputs to data warehouses, such as Snowflake . Automation & Monitoring: Automate cloud infrastructure management and data processing workflows using Infrastructure as Code (IaC) tools like Terraform or Azure Resource Manager (ARM) templates. Implement monitoring, logging, and alerting mechanisms for cloud resources and data pipelines using tools like Azure Monitor and Log Analytics . 
Security & Compliance: Ensure security best practices are followed in the cloud environment, including network security groups (NSGs), Azure RBAC, encryption, and compliance with data privacy regulations (GDPR, HIPAA, etc.). Implement backup, disaster recovery, and business continuity plans for data systems hosted in Azure. Optimization & Performance Tuning: Optimize Azure services for cost-efficiency and performance, including tuning databases, storage, and compute resources. Analyze and enhance the performance of data pipelines, ensuring low latency and high throughput for data processing. Collaboration & Communication: Work closely with cross-functional teams, including Data Engineering, Analytics, and IT, to ensure seamless operations across cloud and data infrastructure. Document processes, best practices, and troubleshooting steps to ensure knowledge sharing and operational efficiency. Job Qualifications: Education & Experience: Bachelor's degree in Computer Science, Information Technology, or a related field. 6+ years of experience in a DevOps role, with a focus on Azure Cloud and data engineering. Proven experience with Azure DevOps, Azure Data Factory, Git, and Snowflake. Technical Skills: Strong experience with Azure Cloud Services (e.g., VMs, Storage, Networking, Databases). Hands-on expertise in CI/CD pipelines using Azure DevOps or similar tools. Knowledge of data engineering concepts and tools (e.g., ETL/ELT pipelines, data integration). Experience with Infrastructure as Code (IaC) tools like Terraform, ARM Templates, or Bicep. Proficiency in scripting and automation languages (e.g., PowerShell, Python, Bash). Experience with containerization and orchestration (e.g., Docker, Kubernetes, AKS). Strong knowledge of SQL and experience with NoSQL databases. Soft Skills: Strong problem-solving abilities with a focus on automation and optimization. Excellent communication and collaboration skills, with a team-oriented mindset. Ability to work independently in a fast-paced environment with changing priorities. Nice to Have: Azure certifications (e.g., Azure Administrator, Azure Solutions Architect, Azure DevOps Engineer). If you are passionate about cloud technologies, automation, and data engineering, and thrive in a fast-paced environment, we'd love to hear from you! Apply now to be part of a team that is pushing the boundaries of cloud and data innovation. Thrive with Cotality At Cotality, we're committed to supporting your whole self, at work and beyond. Our India benefits package is thoughtfully designed to promote your well-being, financial security, and professional growth. From comprehensive health coverage to flexible leave, retirement planning, and mental health support, we help you thrive every step of the way. Highlights include: Health & Wellness: Company-paid Mediclaim Insurance, routine dental and vision care (including LASIK and cataract), annual health check-ups, and maternity benefits. Mental Health: Access to 12 free sessions with top therapists and coaches for you and your dependents via Lyra. Leave & Time Off: 11 paid holidays (state-specific), 10 well-being half days, paid sick, maternity, paternity, caregiving, bereavement, and volunteer time off. Family Support: Coverage available for spouse, children, and parents or in-laws; includes maternity and parental leave.
Financial Benefits: ₹10,400 annual well-being account ₹15,000 medical reimbursement allowance ₹19,200 conveyance allowance House Rent Allowance with tax benefits Insurance & Protection: Group Term Life and Personal Accident Insurance at 5x annual salary (company-paid) Retirement & Savings: Provident Fund with employer matching Optional National Pension Scheme (NPS) contributions (pre-tax) Extras: Performance bonuses, recognition rewards, and exclusive employee discounts. Cotality's Diversity Commitment: Cotality is fully committed to employing a diverse workforce and creating an inclusive work environment that embraces everyone’s unique contributions, experiences and values. We offer an empowered work environment that encourages creativity, initiative and professional growth and provides a competitive salary and benefits package. We are better together when we support and recognize our differences. Equal Opportunity Employer Statement: Cotality is an Equal Opportunity employer committed to attracting and retaining the best-qualified people available, without regard to race, ancestry, place of origin, colour, ethnic origin, citizenship, creed, sex, sexual orientation, record of offences, age, marital status, family status or disability. Cotality maintains a Drug-Free Workplace. Please apply on our website for consideration. Privacy Policy Global Applicant Privacy Policy By providing your telephone number, you agree to receive automated (SMS) text messages at that number from Cotality regarding all matters related to your application and, if you are hired, your employment and company business. Message & data rates may apply. You can opt out at any time by responding STOP or UNSUBSCRIBING and will automatically be opted out company-wide. Connect with us on social media! Click on the quicklinks below to find out more about our company and associates
Posted 4 hours ago
1.0 years
0 Lacs
Noida
On-site
Lead Assistant Manager – EXL/LAM/1402761 – Digital Solutions, Noida
Posted On: 26 Jun 2025 | End Date: 10 Aug 2025 | Required Experience: 1 - 4 Years
Basic Section – Number Of Positions: 4 | Band: B2 | Band Name: Lead Assistant Manager | Cost Code: D012603 | Campus/Non Campus: NON CAMPUS | Employment Type: Permanent | Requisition Type: New | Max CTC: 2,50,000 - 10,50,000 | Complexity Level: Not Applicable | Work Type: Hybrid – Working Partly From Home And Partly From Office
Organisational – Group: EXL Digital | Sub Group: Digital Solutions | Organization: Digital Solutions | LOB: Product | Practice: Market | SBU: GenAI CoE | Country: India | City: Noida | Center: Noida - Centre 59
Skills: SQL, Python, Document Preparation, Stakeholder Communication
Minimum Qualification: B.TECH/B.E | Certification: No data available
Job Description – Key Responsibilities: Monitor and maintain health of production AI models (GenAI and traditional ML). Troubleshoot data/model/infra issues across model pipelines, APIs, embeddings, and prompt systems. Collaborate with Engineering and Data Science teams to deploy new versions and manage rollback if needed. Implement automated logging, alerting, and retraining pipelines. Handle prompt performance drift, input/output anomalies, latency issues, and quality regressions. Analyze feedback and real-world performance to propose model or prompt enhancements. Conduct A/B testing, manage baseline versioning and monitor model outputs over time. Document runbooks, RCA reports, model lineage and operational dashboards. Support GenAI adoption by assisting in evaluations, hallucination detection, and prompt optimization. Must-have Skills: 1+ year of experience in Data Science, ML, or MLOps. Good grasp of ML lifecycle, model versioning, and basic monitoring principles. Strong Python skills with exposure to ML frameworks (scikit-learn, pandas, etc.). Basic familiarity with LLMs and interest in GenAI (OpenAI, Claude, etc.). Exposure to AWS/GCP/Azure or any MLOps tooling. Comfortable reading logs, parsing metrics, and triaging issues across the stack. Eagerness to work in a production support environment with proactive ownership. Nice-to-Have Skills: Prompt engineering knowledge (system prompts, temperature, tokens, etc.). Hands-on with vector stores, embedding models, or LangChain/LlamaIndex. Experience with tools like MLflow, Prometheus, Grafana, Datadog, or equivalent. Basic understanding of retrieval pipelines or RAG architectures. Familiarity with CI/CD and containerization (Docker, GitHub Actions). Ideal Candidate Profile: A strong starter who wants to go beyond notebooks and see AI in action. Obsessed with observability, explainability, and zero-downtime AI. Wants to build a foundation in GenAI while leveraging their traditional ML skills. A great communicator who enjoys cross-functional collaboration. Workflow Type: Digital Solution Center
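For illustration, a minimal sketch of the kind of drift check such a role automates: comparing a window of production model scores against a baseline with a population-stability-index (PSI) style metric. The data sources, score distributions, and the 0.2 alerting threshold are assumptions for the sketch, not EXL's actual tooling.

```python
import numpy as np

def psi(baseline, production, bins=10):
    """Population Stability Index between two score distributions."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b_cnt, _ = np.histogram(baseline, bins=edges)
    p_cnt, _ = np.histogram(production, bins=edges)
    # Normalise to proportions and avoid division by zero
    b_frac = np.clip(b_cnt / b_cnt.sum(), 1e-6, None)
    p_frac = np.clip(p_cnt / p_cnt.sum(), 1e-6, None)
    return float(np.sum((p_frac - b_frac) * np.log(p_frac / b_frac)))

# Hypothetical daily check: alert if drift exceeds a commonly used 0.2 threshold
baseline_scores = np.random.beta(2, 5, 10_000)      # stand-in for training-time scores
production_scores = np.random.beta(2.5, 5, 2_000)   # stand-in for today's scores
if psi(baseline_scores, production_scores) > 0.2:
    print("ALERT: score drift detected - trigger retraining pipeline / page on-call")
```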
Posted 4 hours ago
5.0 years
5 - 9 Lacs
Noida
On-site
Posted On: 26 Jun 2025 Location: Noida, UP, India Company: Iris Software Why Join Us? Are you inspired to grow your career at one of India's Top 25 Best Workplaces in the IT industry? Do you want to do the best work of your life at one of the fastest-growing IT services companies? Do you aspire to thrive in an award-winning work culture that values your talent and career aspirations? It's happening right here at Iris Software. About Iris Software At Iris Software, our vision is to be our client's most trusted technology partner, and the first choice for the industry's top professionals to realize their full potential. With over 4,300 associates across India, U.S.A, and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services. Our work covers complex, mission-critical applications with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation. Working at Iris Be valued, be inspired, be your best. At Iris Software, we invest in and create a culture where colleagues feel valued, can explore their potential, and have opportunities to grow. Our employee value proposition (EVP) is about "Being Your Best" – as a professional and person. It is about being challenged by work that inspires us, being empowered to excel and grow in your career, and being part of a culture where talent is valued. We're a place where everyone can discover and be their best version. Job Description 1. Python with a minimum of 5+ years of experience. 2. Equities algorithmic trading stack, transaction data and FIX protocol. 3. Market, transaction and reference data for various products like Equities, FX, Futures, Indices, Options, etc. 4. Different market data vendors' APIs and experience working with their data. 5. High-capacity/low-latency infrastructure like Solace. 6. Software development, build and deployment management using tools like JIRA, Git, Jenkins, Geneos (ITRS) and scripting languages. 7. Pre- and post-trade analytics like Transaction Cost Analysis (TCA) reports. 8. Database design and architecture using KDB+/q. Mandatory Competencies: Python - Python; Database - KDB; Beh - Communication; DevOps - Jenkins; DevOps - Git; Python - Rest API. Perks and Benefits for Irisians At Iris Software, we offer world-class benefits designed to support the financial, health and well-being needs of our associates to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.
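As a rough illustration of the post-trade analytics mentioned above, here is a toy Transaction Cost Analysis calculation: quantity-weighted implementation shortfall of fills against the arrival price, expressed in basis points. The column names and sample data are assumptions for the sketch, not Iris Software's actual schema.

```python
import pandas as pd

fills = pd.DataFrame({
    "symbol":        ["ABC", "ABC", "XYZ"],
    "side":          ["BUY", "BUY", "SELL"],
    "qty":           [1_000, 500, 2_000],
    "fill_price":    [100.05, 100.10, 49.95],
    "arrival_price": [100.00, 100.00, 50.00],   # mid-price at order arrival
})

sign = fills["side"].map({"BUY": 1, "SELL": -1})
# Slippage in basis points: positive = cost, negative = price improvement
fills["slippage_bps"] = sign * (fills["fill_price"] - fills["arrival_price"]) \
                        / fills["arrival_price"] * 1e4

tca = fills.groupby("symbol").apply(
    lambda g: (g["slippage_bps"] * g["qty"]).sum() / g["qty"].sum()
).rename("qty_weighted_slippage_bps")
print(tca)
```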
Posted 4 hours ago
0 years
0 Lacs
India
Remote
Hi, we are hiring for a contract job. Location: Remote. Budget: 120k. Experience: 6+ yrs. Role: Data Engineer (Graph RAG, Vector RAG, GenAI). An offshore AI engineer who will design and build Retrieval Augmented Generation (RAG) and Graph RAG solutions. Ability to implement a vector RAG pipeline, plus a pilot knowledge graph (Neo4j/TigerGraph/Neptune) proving Graph RAG viability. Implement GenAI architecture (vector + graph retrieval, ranking, grounding), fine-tune LLMs, and ensure security/compliance. Experience in AI / Machine Learning / Data Science and LLM-powered apps; proven RAG and Graph RAG delivery; Python, graph-DB query languages, and MLOps. Address troubleshooting and performance issues like hallucination, latency, and drift. Working hours: 11:00 am to 8:00 pm (candidate has to be flexible; may require overlap with EST as per project need).
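To make the described architecture concrete, a minimal vector-retrieval sketch of the RAG step: cosine-similarity lookup over an in-memory store, whose results would then be handed to an LLM for grounded generation. The embeddings here are random stand-ins and the graph step is only described in a comment; a real build would use an embedding model and a Neo4j/TigerGraph/Neptune traversal, which this sketch does not implement.

```python
import numpy as np

# Toy corpus; in the real pipeline these vectors would come from an embedding model
docs = ["Invoices are approved by the finance team.",
        "Refunds over $500 require manager sign-off.",
        "Contracts are stored in the legal knowledge graph."]
doc_vecs = np.random.rand(len(docs), 384)            # stand-in embeddings
doc_vecs /= np.linalg.norm(doc_vecs, axis=1, keepdims=True)

def retrieve(query_vec, k=2):
    """Return the top-k documents by cosine similarity (the vector RAG step)."""
    q = query_vec / np.linalg.norm(query_vec)
    scores = doc_vecs @ q
    top = np.argsort(scores)[::-1][:k]
    return [(docs[i], float(scores[i])) for i in top]

query_vec = np.random.rand(384)   # stand-in for embed("what is the refund policy?")
context = retrieve(query_vec)
# A Graph RAG step would expand this context by traversing linked entities in the
# knowledge graph before prompting the LLM with the query plus grounded context.
print(context)
```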
Posted 4 hours ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Who We Are As a Company GoKwik was founded in 2020 with one simple mission: to democratise the shopping experience and increase the GMV realisation for e-commerce businesses. The company is backed by Sequoia Capital India, Matrix Partners India, RTP Global, and marquee angels. GoKwik is an e-commerce enablement company focussed predominantly on solving crucial e-commerce issues like boosting conversion rates across the e-commerce funnel and reducing RTO for our partners. It leverages AI/ML technologies to solve hard-hitting problems like RTO to increase CoD conversion rates. GoKwik's 1-click Kwik checkout improves checkout conversion rates, ensuring higher GMV realisation and reduced marketing CAC. What we do is unique, with zero immediate competition in India today; we are therefore building a team of real rockstars in their field to fuel this profit-making start-up, which is growing at a rapid pace. Summary As an SDE, you will interact with the entire engineering team to solve problems and build, extend, optimise and refactor the architecture of our growing product. This will involve working on our existing codebase along with new projects. GoKwik has a highly passionate engineering-driven environment where your contribution will be felt immediately. You'll Spend Time On The Following Writing testable and efficient code Design and implementation of low-latency, high-availability, and performant applications implementing business logic and developing APIs and services Build reusable code and libraries for future use. We're Excited About You If You Have 2+ years' experience in software development. Hands-on experience in front-end development with React. Knowledge of frontend frameworks Good understanding of server-side templating languages Good understanding of front-end technologies, such as HTML5 and CSS3 Expertise with Linux-based systems Proficient understanding of code versioning tools, such as Git. Have worked with any of the cloud-based platforms: AWS, GCP, Docker, and Kubernetes. Some Important Traits We Look For in a Person in This Role Independent, resourceful, analytical, and able to solve problems effectively Ability to be flexible, agile, and thrive in chaos Excellent oral and written communication skills Our Core Values: Merchant 1st, Talent, Innovation The pace of our growth is incredible – if you want to tackle hard and interesting problems at scale, and create an impact within an entrepreneurial environment, come join us!
Posted 4 hours ago
14.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Requirements Description and Requirements Position Summary The SQL Database Administrator is responsible for the design, implementation, and support of database systems for applications across the MSSQL database platform (SQL Server 2019 and 2022). The Administrator is part of the database end-to-end delivery team, working and collaborating with Application Development, Infrastructure Engineering and Operation Support teams to deliver and support secure, high-performing and optimized database solutions. The Database Administrator specializes in the SQL database platform. Job Responsibilities Manages design, distribution, performance, replication, security, availability, and access requirements for large and complex SQL & Sybase databases. Designs and develops physical layers of databases to support various application needs; implements back-up, recovery, archiving, conversion strategies, and performance tuning; manages job scheduling, application release, database change and compliance. Identifies and resolves problems utilizing structured tools and techniques. Provides technical assistance and mentoring to staff in all aspects of database management; consults and advises application development teams on database security, query optimization and performance. Writes scripts for automating routine DBA tasks and documents database maintenance processing flows per standards. Implements industry best practices while performing database administration tasks. Works in an Agile model with an understanding of Agile concepts. Collaborates with development teams to provide and implement new features. Able to debug production issues by analyzing the logs directly and using tools like Splunk. Begins tackling organizational impediments. Learns new technologies based on demand and helps team members by coaching and assisting. Education, Technical Skills & Other Critical Requirements Education: Bachelor's degree in Computer Science, Information Systems, or another related field with 14+ years of IT and infrastructure engineering work experience. Experience (In Years): 14+ years total IT experience & 10+ years relevant experience in SQL Server + Sybase databases. Technical Skills: Database Management: Extensive experience in managing and administering SQL Server, Azure SQL Server, and Sybase databases, ensuring high availability and optimal performance. Data Infrastructure & Security: Expertise in designing and implementing robust data infrastructure solutions, with a strong focus on data security and compliance. Backup & Recovery: Expert in developing and executing comprehensive backup and recovery strategies to safeguard critical data and ensure business continuity. Performance Tuning & Optimization: Adept at performance tuning and optimization of databases, leveraging advanced techniques to enhance system efficiency and reduce latency. Cloud Computing & Scripting: Experienced in cloud computing environments and proficient in operating system scripting, enabling seamless integration and automation of database operations. Management of database elements, including creation, alteration, deletion and copying of schemas, databases, tables, views, indexes, stored procedures, triggers, and declarative integrity constraints. Strong database analytical skills to improve application performance. Expert working knowledge of database performance tuning, backup & recovery, Infrastructure as Code and observability tools (Elastic). Automation tools and programming such as Ansible, Python and PowerShell.
Strong knowledge of ITSM processes and tools (ServiceNow). Strong knowledge of Agile/SAFe methodologies. Ability to work 24x7 rotational shifts to support the database and Splunk platforms. Other Critical Requirements Excellent analytical and problem-solving skills. Experience managing geographically distributed and culturally diverse workgroups with strong team management, leadership and coaching skills. Excellent written and oral communication skills, including the ability to clearly communicate/articulate technical and functional issues with conclusions and recommendations to stakeholders. Prior experience in handling stateside and offshore stakeholders. Experience in creating and delivering business presentations. Demonstrated ability to work independently and in a team environment. About MetLife Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
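By way of example, the kind of routine-task automation script the role calls for: a small Python check that flags databases whose last full backup is older than 24 hours. The connection string, server name, threshold, and alerting action are assumptions for the sketch, not MetLife's environment.

```python
import pyodbc

CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=sqlprod01;DATABASE=msdb;Trusted_Connection=yes;")  # placeholder server

QUERY = """
SELECT d.name, MAX(b.backup_finish_date) AS last_full_backup
FROM sys.databases d
LEFT JOIN msdb.dbo.backupset b
       ON b.database_name = d.name AND b.type = 'D'   -- 'D' = full backup
WHERE d.database_id > 4                                -- skip system databases
GROUP BY d.name
HAVING MAX(b.backup_finish_date) IS NULL
    OR MAX(b.backup_finish_date) < DATEADD(HOUR, -24, GETDATE());
"""

with pyodbc.connect(CONN_STR) as conn:
    for name, last_backup in conn.cursor().execute(QUERY).fetchall():
        # In practice this would raise a ServiceNow incident or page the on-call DBA
        print(f"STALE BACKUP: {name} (last full backup: {last_backup})")
```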
Posted 4 hours ago
0 years
0 Lacs
India
On-site
We’re building a SaaS product that helps small businesses in India (restaurants, salons, clinics, construction firms etc) handle customer calls using AI-powered voice bots. You’ll lead the tech side by integrating existing platforms (not building from scratch). Your Role • Set up voice call systems using Twilio, Exotel or Knowlarity • Integrate Speech-to-Text (Google STT, Deepgram) and Text-to-Speech (ElevenLabs, Google TTS) • Power conversations using GPT-4, Dialogflow or similar • Build/manage backend (FastAPI or Node.js) and link to CRM (Airtable, Supabase) • Create a client dashboard using Retool, Glide or custom frontend • Automate tasks using Zapier/Make • Optimise latency, monitor performance and ensure reliability • Lead tech growth and own the product roadmap You Should Have • Strong API integration and backend experience • Hands-on with Twilio, GPT APIs, Dialogflow, voice tools • Familiarity with low-code tools and basic DevOps • Product mindset and ability to ship fast • Bonus: multilingual bot experience, prior SaaS/SMB tech work Why Join • Founding role in a fast-growing, real-world AI product • Own the tech stack, shape the future, grow with the company • Solve real problems for Indian businesses using cutting-edge AI 📩 Apply via LinkedIn.
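For a sense of the integration work, a stripped-down webhook sketch of the call flow described above (speech-to-text, then LLM, then text-to-speech). The helper functions and the request/response shape are hypothetical placeholders, not any vendor's real API; an actual build would wire in Twilio/Exotel on the telephony side and real STT, LLM, and TTS clients.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def transcribe(audio_url: str) -> str:      # placeholder for Google STT / Deepgram
    return "I want to book a table for two tonight"

def generate_reply(text: str) -> str:       # placeholder for GPT-4 / Dialogflow
    return "Sure! What time should I reserve your table for?"

def synthesize(text: str) -> str:           # placeholder for ElevenLabs / Google TTS
    return "https://example.com/audio/reply.mp3"

@app.route("/voice-webhook", methods=["POST"])
def voice_webhook():
    # The telephony provider would post the caller's recorded utterance here;
    # the field name and response contract below are illustrative only.
    audio_url = request.json.get("recording_url", "")
    user_text = transcribe(audio_url)
    reply_text = generate_reply(user_text)
    reply_audio = synthesize(reply_text)
    return jsonify({"play": reply_audio, "transcript": user_text, "reply": reply_text})

if __name__ == "__main__":
    app.run(port=8080)
```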
Posted 5 hours ago
3.0 years
0 Lacs
India
Remote
Job description 🚀 Full Stack AI Engineer (1–3 YOE) @ OnCourse Learning Location: Bangalore (Remote-Friendly) Experience: 1–3 Years (strictly enforced) Type: Full-time 🌍 About Us OnCourse is an early-stage learning-tech startup reimagining how students prepare for high-stakes exams like NEET PG, INI-CET, and USMLE. We’re building a smart learning engine powered by AI and cognitive science to personalize, accelerate, and make exam prep radically more effective. We're not building “just another edtech app.” We're building the future of learning. 🧠 Who We’re Looking For We want razor-sharp minds with 1–3 years of experience in software development who are: Early in their career, hungry to learn, build, and ship—not people with years of legacy to unlearn. Hands-on with LLMs and excited about using AI/ML to solve real-world problems. Sharp system thinkers, not just coders—people who obsess over data flows, latency, scalability, and user impact. Motivated builders, eager to take full ownership of features from design to delivery. High-agency teammates, who don’t wait to be told what to do, but ask: “What’s the highest-leverage thing I can build next?” If you're someone who thrives in ambiguity, takes feedback like a pro, and treats the codebase like a craft—this is your tribe. 🛠️ What You’ll Work On Design, build, and scale backend systems (Node.js, Express) and API infrastructure. Develop high-performance frontends and mobile apps (React, React Native, Expo). Integrate LLMs into user-facing experiences that make learning feel magical. Collaborate with product, design, and content teams to deliver delightful and effective features. Optimize for scale, performance, and simplicity—no bloat, just impact. 🧰 Stack & Tools You should already be comfortable with (or have used in production): Languages & Frameworks: Node.js, Express, React, React Native Databases: PostgreSQL or MongoDB Infra: Supabase, AWS/GCP AI Stack: OpenAI, Claude, LangChain or similar Collab & Dev: Cursor.sh, GitHub, Linear, Slack, Coda ❌ Who This Role Is NOT For We want to be super clear—this role is not for: 🚫 People without any hands-on experience with LLMs in a shipped or test project 🚫 Candidates unfamiliar with any of the tools or frameworks listed above 🚫 Engineers with < 1 and > 3 YOE 🚫 Anyone who needs rigid roles, top-down instruction, or isn’t proactive 🚫 Engineers who are looking to coast or just “do their part” — we build like owners 🎁 Why Join Us Real ownership: You’ll shape the direction of a product used by thousands of learners. Move fast, ship weekly: Your code will be in users’ hands constantly. Culture of writing and thinking clearly — we value smart ideas and execution over hierarchy. Flexible hours & remote-only: We trust you to deliver. Backed by real users, real metrics, and a long-term mission. ✨ Apply if… You want to build world-class software, influence product strategy, and work with a team that punches way above its weight.
Posted 5 hours ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Work experience: 2+ years Job location: Hyderabad, Telangana (Balanagar area) Job type: On-site, permanent, full-time Work schedule: Alternate Saturdays off & Sundays off About Vir Innovations Pvt. Ltd. We are a product development company headquartered in Hyderabad, specializing in defence technology and the Internet of Things (IoT). Our primary focus is on developing advanced systems in the fields of computer vision, robotics, and fire control systems. Our objective is to create innovative products that can keep pace with the new era of accelerated technological progress. Our vision is to create a better everyday life for many people by delivering smarter alternatives to existing devices in sectors ranging from defence to consumer electronics. Why Join Us? At Vir Innovations, we are a team of driven pioneers working together to create meaningful impact. Our diverse professionals, brought together from different cultures and backgrounds across the country, share a common goal: to innovate and contribute to national security through cutting-edge technology. Our core values: Passion, Integrity, Innovation, Quality, and Growth form the foundation of our vibrant culture. These pillars guide us as we build groundbreaking solutions that inspire creativity and foster collaboration. When you join Vir Innovations, you step into an environment that empowers you to: Learn and Grow: Gain new skills and knowledge while working on projects that challenge and inspire. Innovate and Lead: Contribute to cutting-edge advancements in robotics, automation, and defence technologies. Make an Impact: Be part of something bigger; your work here directly or indirectly strengthens national security. We believe in providing equal opportunities to talented individuals from all backgrounds and experiences. If you are passionate about technology, innovation, and making a difference, Vir Innovations is the place for you. Shape the future of technology with us. Join our team and embark on a career that truly matters! Job Summary We are seeking a skilled and performance-driven Network Engineer to design and maintain real-time, Unix/TCP/UDP-based communication systems between Ground Control Stations (GCS) and robotic platforms such as UGVs. This role demands a deep understanding of low-latency networking, real-time data transmission (including high-throughput camera feeds), and robust system integration across embedded Linux environments. As part of our engineering team, you will be responsible for implementing custom network protocols, optimizing transmission over constrained or lossy networks, and ensuring reliable, high-speed communication for mission-critical robotic operations. The ideal candidate is hands-on with C/C++ and Python socket programming, understands the nuances of UDP communication, and has a proven track record in debugging, performance tuning, and network reliability strategies. Key Responsibilities Develop and maintain UDP-based communication systems between the Ground Control Station (GCS) and robotic platforms, supporting both command/control and high-throughput data like camera feeds. Implement custom packet structures and ensure efficient serialization/deserialization for real-time data. Ensure low-latency, high-reliability transmission over potentially lossy or constrained networks. Analyze network paths to identify bottlenecks or failure points using tools like Wireshark or tcpdump. Write robust code in both Python and C/C++, depending on performance or system integration needs.
Implement optional enhancements like heartbeat mechanisms, packet loss detection, or buffer control to improve communication reliability. Write production-grade code in C/C++ and Python for networking and integration with robotics systems. Must have Skills Strong experience in UDP socket programming in C/C++ and Python on Unix-based systems. In-depth knowledge of network fundamentals, including MTU, jitter, packet fragmentation, and latency management. Familiarity with multithreading or asynchronous event-driven architectures (select, poll, async IO). Proficient with network profiling/debugging tools and performance optimization techniques. Preferred Skills Real-time video streaming experience over UDP (custom RTP protocols, MJPEG/H.264 over UDP). Exposure to robotic control systems or embedded Linux platforms. Protocol design experience (sequence numbers, ACK/NACK, packet prioritization). Experience with low-latency systems with strict performance requirements. Desired candidate profile Education Background: Bachelor’s degree in Computer Science or related field Hands-on Experience: Minimum of 2 years working on network communication systems using C/C++ and Python in Unix/Linux environments. Deep Understanding of UDP Protocols: Proven ability to work with unreliable datagram protocols and implement reliability mechanisms manually. System-Level Thinking: Ability to analyze and optimize end-to-end communication pipelines, identify latency/failure points, and propose practical solutions. Tool-Savvy: Comfortable using tools like Wireshark, tcpdump, and logging frameworks for diagnostics and performance tuning. Adaptable Developer: Capable of switching between high-performance C/C++ and rapid-prototyping in Python based on system needs. Real-Time Mindset: Prior experience or a strong inclination to work on time-sensitive systems with strict performance constraints. Collaborative Yet Independent: Able to own complex tasks while coordinating with cross-functional teams (robotics, vision, control systems). Bonus: Exposure to video streaming protocols, robotic control systems (especially UGVs), or embedded Linux development. Note: To apply for this opportunity, you may also email your resume and cover letter to hr@virinnovations.in with the email subject as " Application for Network Engineer Role "
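To ground the networking requirements, a minimal sketch of a UDP telemetry sender with a sequence number and heartbeat, using only Python's standard library. The packet layout (the `!IdB` header), addresses, and send rate are illustrative assumptions, not Vir Innovations' actual protocol.

```python
import socket, struct, time

GCS_ADDR = ("192.168.1.50", 9000)        # placeholder Ground Control Station address
HEADER = struct.Struct("!IdB")           # seq (uint32), timestamp (double), msg type (uint8)
MSG_TELEMETRY, MSG_HEARTBEAT = 1, 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
seq = 0

def send(msg_type: int, payload: bytes = b"") -> None:
    """Prefix every datagram with seq + timestamp so the receiver can detect
    loss/reordering and estimate one-way latency (given synchronised clocks)."""
    global seq
    sock.sendto(HEADER.pack(seq, time.time(), msg_type) + payload, GCS_ADDR)
    seq += 1

while True:
    send(MSG_TELEMETRY, struct.pack("!ff", 12.34, 56.78))  # e.g. pose or battery data
    send(MSG_HEARTBEAT)                                     # lets the GCS detect link loss
    time.sleep(0.05)                                        # 20 Hz loop
```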
Posted 5 hours ago
1.0 - 3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job description: Role Purpose The purpose of this role is to perform the development of VLSI systems by defining the various functionalities, architecture, layout and implementation for a client. Do 1. Conduct verification of the module/IP functionality and provide customer support a. Understand the architecture of the module or the IP and create the verification environment and the development plan as per the Universal Verification Methodology b. Create test bench development and test case coding of one or multiple modules c. Write the code or check the code as required d. Execute the test cases and debug the test cases if required e. Conduct functional coverage analysis and document the test cases including failures and debugging procedures on SharePoint/JIRA or any other platform as directed f. Test the entire IP functionality under regression testing and complete the documentation to publish to the client g. Troubleshoot, debug and upgrade existing systems on time & with minimum latency and maximum efficiency h. Write scripts for the IP i. Comply with project plans and industry standards 2. Ensure reporting & documentation for the client a. Ensure weekly, monthly status reports for the clients as per requirements b. Maintain documents and create a repository of all design changes, recommendations etc. c. Maintain time-sheets for the clients d. Provide written knowledge transfer/history of the project Deliver – No. | Performance Parameter | Measure: 1. Verification – Timeliness; Quality of Code/Number of defects; Customer responsiveness; Project documentation (scripts, test cases etc.) 2. Self-development – Skill test for next-level clearance on Trend Nxt Mandatory Skills: VLSI Physical Place and Route. Experience: 1-3 Years. Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
Posted 7 hours ago
3.0 years
0 Lacs
India
Remote
MetAntz is recruiting a Senior Data Engineer for a leading AI SaaS firm for the role below. It is a remote position. About the Company Leading AI-powered platform serving hundreds of enterprise clients High-growth SaaS company Processes petabytes of data daily to deliver predictive insights and conversational intelligence Trusted by Fortune 500 companies and multi-billion dollar software organizations Position Overview We are seeking a skilled Data Engineer to join our analytics team and help scale our data infrastructure that handles petabytes of information daily. You'll play a crucial role in building faster, more reliable tools and platforms that enable low-latency, horizontally scalable data solutions for our growing user base. Key Responsibilities Data Pipeline Development : Design and develop real-time events pipelines for data ingestion supporting real-time dashboarding and analytics Platform Innovation : Brainstorm and create new platforms that deliver data to users in various formats with low latency and horizontal scalability Data Transformation : Develop complex and efficient functions to transform raw data sources into powerful, reliable components of our data lake System Optimization : Make changes across the entire technical stack and diagnose problems to ensure optimal performance Technology Implementation : Design & implement new components using various emerging technologies in the Hadoop ecosystem Project Execution : Successfully execute various data engineering projects from conception to completion Required Qualifications Experience : 3+ years of strong hands-on experience with Spark (preferably PySpark) Programming Skills : Proficiency in Python and scripting languages (Python, Bash, etc.) Database Knowledge : Good exposure to SQL, MongoDB, and other database technologies Cloud Technologies : Experience with AWS and cloud platforms such as S3 Real-time Systems : Experience in building real-time data pipelines and scalable systems Industry Background : SaaS or high-growth startup experience preferred Availability : Immediate joiners preferred Technical Environment Work with petabyte-scale data processing systems Hadoop ecosystem and emerging big data technologies Cloud-native architecture with AWS services Real-time streaming and batch processing pipelines Modern data lake architecture and analytics platforms What We Offer Opportunity to work with cutting-edge AI and data science technologies High-impact role in a rapidly scaling revenue intelligence platform Collaborative environment with experienced data engineering teams Competitive compensation and equity package Flexible work arrangements and comprehensive benefits Ideal Candidate Profile Thrives in fast-paced, high-growth startup environments Passionate about building scalable data infrastructure Experience with large-scale data processing and real-time analytics Strong problem-solving skills and ability to work across the full technical stack Excited about contributing to AI-powered solutions that drive revenue growth This role offers the opportunity to build foundational data infrastructure that directly powers AI insights for enterprise revenue teams worldwide.
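As a flavour of the transformation work described above, a small PySpark batch sketch that turns raw JSON events into an aggregated, query-ready table. The paths, schema, and column names are assumptions for the sketch, not the client's actual data lake layout.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-rollup").getOrCreate()

# Raw events landed by the ingestion pipeline (path is a placeholder)
events = spark.read.json("s3://raw-bucket/events/date=2025-06-26/")

daily_rollup = (
    events
    .filter(F.col("event_type").isNotNull())
    .withColumn("event_date", F.to_date(F.col("event_ts")))
    .groupBy("account_id", "event_date", "event_type")
    .agg(F.count("*").alias("event_count"),
         F.approx_count_distinct("user_id").alias("unique_users"))
)

# Write back to the lake partitioned for low-latency downstream queries
daily_rollup.write.mode("overwrite").partitionBy("event_date") \
            .parquet("s3://lake-bucket/rollups/daily_events/")
```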
Posted 7 hours ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Experience: 6+ Years Location: Chennai Availability: Immediate Joiners Preferred Job Type: Full-Time Key Skills: 5+ years of experience in software development with Python (primary skill) and Go (secondary skill). Solid knowledge of RESTful APIs, microservices architecture, and message queues. Requirements: Coordinating with development teams to determine the needs of the application. Using the Python programming language to create scalable code. Application testing and bug fixing. Creating the back-end elements. Utilising server-side logic to incorporate user-facing components. Evaluating and ranking customer feature requests. Integrating data storage solutions. Design and implementation of high-performance, low-latency applications. Working in concert with front-end programmers. Upgrading the functionality of current databases. Creating digital technologies to track online activity.
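For illustration, a minimal Python REST endpoint in the spirit of the role: a small Flask service with an in-process cache so repeated hot lookups avoid a downstream round trip. The route, data, and cache policy are assumptions for the sketch, not the actual application.

```python
from functools import lru_cache
from flask import Flask, jsonify

app = Flask(__name__)

@lru_cache(maxsize=1024)
def load_product(product_id: int) -> dict:
    # Placeholder for a database or downstream-service call; cached so repeated
    # lookups of hot products stay low-latency.
    return {"id": product_id, "name": f"product-{product_id}", "price_inr": 499}

@app.route("/products/<int:product_id>", methods=["GET"])
def get_product(product_id: int):
    return jsonify(load_product(product_id))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```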
Posted 8 hours ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
The best quants don’t react to markets—they anticipate them. If that’s you, let’s talk. We at iRage are an agile, young team of exceptional programmers, quants, and traders. We are looking for the smartest quants to outsmart markets and solve some of the most challenging engineering problems in the fast-paced algorithmic trading industry. Role Summary – Senior Trader At iRage, you’ll work with a small, elite team of quants and engineers to: Mine terabytes of market data for microstructural patterns. Design, backtest, and deploy strategies that exploit inefficiencies. Optimize execution, reduce latency, and squeeze out every basis point. What you’ll do: Turn raw market data into alpha-generating signals. Build models that predict price movements at sub-millisecond timescales. Push trading infrastructure to its physical limits (FPGAs, kernel bypass, ultra-low-latency C++). Continuously hunt for new strategies while refining existing ones. You’re a fit if you: Know markets deeply – You understand order books, liquidity dynamics, and how real-world trading actually works. Code fluently – Python/R for research, C++ for speed (or willingness to learn). Have a competitive streak – You want to win, and you’re obsessed with measurable results. Bonus points for: Prior HFT experience (even if just personal projects). Experience with hardware acceleration (GPUs, FPGAs). A GitHub full of math/quant projects. Why You’ll Love Working Here: We’re a team that hustles—plain and simple. But we also believe life outside work matters. No cubicles, no suits—just great people doing great work in a space built for comfort and creativity. Here’s what we offer: 💰 Competitive salary – Get paid what you’re worth. 🌴 Generous paid time off – Recharge and come back sharper. 🌍 Work with the best – Collaborate with top-tier global talent. ✈️ Adventure together – Annual offsites (mostly outside India) and regular team outings. 🎯 Performance rewards – Multiple bonuses for those who go above and beyond. 🏥 Health covered – Comprehensive insurance so you’re always protected. ⚡ Fun, not just work – On-site sports, games, and a lively workspace. 🧠 Learn and lead – Regular knowledge-sharing sessions led by your peers. 📚 Annual Education Allowance – Take any external course, or certification that makes you better at your craft. 🏋️ Stay fit – Gym memberships with equal employer contribution to keep you at your best. 🚚 Relocation support – Smooth move? We’ve got your back. 🏆 Friendly competition – Work challenges and extracurricular contests to keep things exciting. Follow Us: Website → https://irage.in/ Linkedin → https://www.linkedin.com/company/irage/ P.S. If you’ve ever coded a trading algo for fun, tell us about it.
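As a toy example of the signal research described above, an order-book imbalance feature computed from synthetic top-of-book data and checked against short-horizon forward returns. The data, horizon, and correlation check are made up for the sketch; real work of this kind runs on tick-level feeds at far finer timescales.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 5_000
mid = 100 + np.cumsum(rng.normal(0, 0.01, n))            # synthetic mid-price path
bid_qty = rng.integers(1, 500, n).astype(float)
ask_qty = rng.integers(1, 500, n).astype(float)

book = pd.DataFrame({"mid": mid, "bid_qty": bid_qty, "ask_qty": ask_qty})
# Classic top-of-book imbalance in [-1, 1]: > 0 means more resting buy interest
book["imbalance"] = (book.bid_qty - book.ask_qty) / (book.bid_qty + book.ask_qty)
book["fwd_ret"] = book["mid"].shift(-10) / book["mid"] - 1   # 10-tick forward return

print("IC (corr of imbalance vs forward return):",
      round(book["imbalance"].corr(book["fwd_ret"]), 4))
```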
Posted 9 hours ago
0.0 - 4.0 years
13 - 15 Lacs
Mumbai, Maharashtra
On-site
Java Developer Experience - 4 years NOTE : Joining time : Immediate Work Location : Mumbai Office, Vikhroli Senior Software Engineer - Backend Mumbai, Maharashtra, India | Product Engineering | Full-time Looking for a technically savvy and experienced senior engineer to lead development efforts. You will help the team grow in size and skills, optimizing their code while working on your own. You will design, and develop the products that exceed client expectations in terms of value and benefit. You will collaborate on scalability issues involving access to massive amounts of data and information. You have hands-on experience in building secure, high-performing and scalable systems in Java. Your design intuition inclines towards usability, elegance and simplicity. You are biased towards open-source tools and existing frameworks. You have successfully shipped applications with beautiful front-end and intelligent backend. You have demonstrated strong interpersonal and communication skills. Responsibilities Co-lead end-to-end design and development efforts for high-volume, low-latency applications for real-time event based systems Lead and direct software engineers, define software architecture, collaborate with other teams to explore existing systems, determine areas of complexity and potential risks to successful implementation of platform modules and features Deliver high-availability and high-performance platform using micro-services architecture Identify bottlenecks and inefficiencies in code, and devise solutions to production and non-production issues Keep an eye out for open source projects and technology trends that can be introduced in the products Transform business functionalities and production specifications into technical requirements and low level design Be hands-on, adopt a practical approach to software and technology Work with product management and other teams to build highly scalable products Requirements Bachelor’s degree in Computer Science, Information Technology or a related field 4 to 6 years of design and development experience in scalable and distributed enterprise applications using Java/J2EE, JPA, Spring boot framework and RESTful web services Experience in managing, implementing and optimizing big data stores such as MySQL, MongoDB, Elasticsearch, Redis, DynamoDB Experience in deploying applications and using various managed services provided by cloud platforms (AWS, Azure, GCP) Experience in front-end technologies HTML5, CSS3, React.js, Angular and D3.js is preferred Strong foundation in computer science, with strong competencies in data structures, algorithms and software design Proven ability to drive large scale projects with deep understanding Job Types: Full-time, Permanent Pay: ₹1,300,000.00 - ₹1,500,000.00 per year Benefits: Flexible schedule Health insurance Internet reimbursement Life insurance Paid sick time Paid time off Provident Fund Location Type: In-person Schedule: Day shift Monday to Friday Morning shift Experience: Java: 4 years (Preferred) Work Location: In person Speak with the employer +91 9266893095
Posted 9 hours ago
The job market for latency professionals in India is rapidly growing as industries continue to rely on real-time data processing and low latency systems. Latency jobs involve optimizing systems to reduce response time and improve overall performance, making them crucial for various sectors such as finance, e-commerce, and telecommunications.
Entry-level professionals in latency roles can expect to earn around INR 6-8 lakhs per annum, while experienced professionals with several years of experience can earn upwards of INR 20 lakhs per annum.
In the field of latency, a typical career path may include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually moving on to roles like Architect or Chief Technology Officer.
In addition to expertise in latency optimization, professionals in this field are often expected to have strong skills in programming languages like Java, C++, or Python, as well as knowledge of networking protocols and systems architecture.
As you venture into the field of latency jobs in India, remember to hone your skills, stay updated with industry trends, and approach interviews with confidence. With the right preparation and determination, you can excel in this dynamic and rewarding career path. Good luck!