Get alerts for new jobs matching your selected skills, preferred locations, and experience range. Manage Job Alerts
0.0 - 1.0 years
2 - 3 Lacs
Gurgaon
On-site
Work Experience: 0-1 years

Responsibilities:
- Integrate user-facing elements developed by front-end developers with server-side logic.
- Write reusable, testable, and efficient code.
- Integrate data storage solutions using MongoDB and MySQL.
- Design and implement low-latency, high-availability, and performant applications.
- Analyze requests for enhancements/changes and write amendment/program specifications.
- Understand the inter-dependencies of the services (application, system, and database) and pinpoint problem areas accurately to improve overall efficiency.

Skills:
- Strong proficiency with JavaScript.
- In-depth knowledge of the Node.js and Express.js frameworks.
- Familiarity with front-end technologies (e.g., HTML, CSS, JavaScript) and how they integrate with backend systems.
- Good working knowledge of pixel-perfect conversion of PSDs into responsive HTML documents.
- Experience in a framework such as React.js is good to have.
- Solid knowledge of databases and database design (e.g., SQL and NoSQL databases).
- Must have knowledge of data structures and dynamic programming.
- Experience in developing and consuming RESTful APIs.
- Ability to write clean, maintainable, and reusable code.
- Strong problem-solving and debugging skills.
- Understanding of version control systems like Git or SVN is a plus.
- Good communication and analytical skills.
- Able to work both in a team and independently.

Experience: 0-1 year
Skills: Node.js, Express.js, JavaScript, HTML, CSS, React.js, Angular, SQL and NoSQL databases, RESTful APIs
Posted 2 days ago
6.0 - 8.0 years
0 Lacs
Delhi
On-site
Full time | Work From Office
This Position is Currently Open
Department / Category: ADMIN
Listed on Jun 26, 2025
Work Location: NEW DELHI, KOCHI, BANGALORE

Job Description of Oracle DDA Admin (6 to 8 Years Relevant Experience)

We are seeking a highly skilled Oracle DDA (Database Design and Administration) Admin with 6-8 years of experience in Oracle RAC environments. The ideal candidate will have a strong foundation in Oracle database technologies, replication tools, performance tuning, and infrastructure maintenance. The role demands a proactive professional who can support performance testing cycles, implement change requests, and ensure stable and optimized database environments.

Key Responsibilities:
- Monitor and manage DBA activities during performance testing phases.
- Analyze and interpret performance data, including AWR reports, query plans, and system metrics (CPU, latency, response time, etc.).
- Execute and track change requests from performance/testing teams.
- Ensure smooth database operations and uptime during scheduled test cycles.
- Collaborate with performance leads to maintain test continuity across shifts.
- Perform Oracle infrastructure maintenance: installations, upgrades, patching, and health checks.
- Support backup, restore, and recovery activities.
- Assist Senior Performance Engineers in optimizing test frameworks and database environments.
- Ensure database security, integrity, and compliance with internal standards.

Technical Skills Required:
- Strong DBA experience with Oracle RAC (11g & 12c).
- Expertise in Oracle 11g/12c, PL/SQL, and core Oracle database administration.
- Hands-on experience with data replication technologies, including Oracle GoldenGate, Streams, Veridata, and GGMon (preferred).
- Familiarity with MariaDB, MySQL, or MongoDB is an added advantage.
- Strong grasp of performance metrics: CPU utilization, response time, network latency, etc.
- Skilled in query optimization and reading AWR reports/query plans.
- Basic Unix shell scripting, particularly in AIX environments.

Soft Skills & Other Requirements:
- Excellent communication skills for interfacing with customers and internal teams.
- Strong analytical and troubleshooting abilities.
- Ability to work collaboratively in a cross-functional and shift-based team.
- Detail-oriented with a continuous improvement mindset.

Required Skills for Oracle DDA Admin Job: Oracle DBA, GoldenGate, MariaDB or MySQL or MongoDB, AIX

Our Hiring Process:
1. Screening (HR Round)
2. Technical Round 1
3. Technical Round 2
4. Final HR Round
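The performance-analysis duties above (interpreting latency and response-time metrics across test cycles) can be sketched in a few lines of Python. This is an illustrative stand-in only: the sample values and the 10% regression tolerance are invented assumptions, not output parsed from a real AWR report.

```python
# Illustrative only: the samples below stand in for values parsed from an
# AWR report; the metric names and the tolerance are assumptions.
from statistics import mean, quantiles

def summarize_latency(samples_ms):
    """Return mean and approximate p95 latency for a list of response times."""
    p95 = quantiles(samples_ms, n=20)[-1]  # last of 19 cut points ~ 95th percentile
    return {"mean_ms": round(mean(samples_ms), 2), "p95_ms": round(p95, 2)}

def flag_regression(baseline, current, tolerance=0.10):
    """Flag a test cycle whose p95 latency exceeds baseline by more than tolerance."""
    return current["p95_ms"] > baseline["p95_ms"] * (1 + tolerance)

baseline = summarize_latency([12, 14, 13, 15, 12, 16, 14, 13, 40, 13,
                              12, 14, 15, 13, 12, 14, 15, 13, 14, 13])
current = summarize_latency([18, 20, 19, 22, 18, 25, 21, 19, 60, 19,
                             18, 20, 21, 19, 18, 20, 19, 22, 20, 19])
print(flag_regression(baseline, current))  # True: current cycle regressed
```

In practice the same comparison would run over metrics exported from AWR snapshots between change requests, so regressions surface before a test cycle completes.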
Posted 2 days ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
At Prodigal, we’re reshaping the future of consumer finance. Founded in 2018 by IITB alumni, our journey began with one bold mission: to eradicate the inefficiencies and confusion that have plagued the lending and collections industry for decades. Today, we stand at the forefront of a seismic shift in the industry, pioneering the concept of consumer finance intelligence. Powered by our cutting-edge platform, Prodigal’s Intelligence Engine, we’re creating the next-generation agentic workforce for consumer finance, one that empowers companies to achieve unprecedented levels of operational excellence. With over half a billion consumer finance interactions processed and a growing impact on more than 100 leading companies across North America, we’ve established ourselves as the go-to partner for organizations that demand more from their AI solutions. Our unparalleled experience, coupled with our trusted customer relationships, uniquely positions us to build generative AI solutions that will revolutionize the future of consumer finance. At Prodigal, we are driven by a singular, unrelenting purpose: to transform how consumer finance companies engage with their customers and, in turn, drive successful outcomes for all.

About The Role

We're seeking an exceptional Agent Engineer specializing in Applied AI and Prompt Engineering to join our team building next-generation, industry-centric vertical voice AI agents. In this role, you'll be at the intersection of AI engineering and customer experience, crafting and optimizing prompts that power natural, effective voice conversations in consumer finance. You'll work directly with our proprietary voice AI technology, analyzing conversation data, iterating on prompt strategies, and implementing systematic approaches to improve agent performance. This is a hands-on technical role that requires both engineering excellence and a deep understanding of conversational AI design.
🏆 Key Responsibilities

Prompt Engineering & Optimization
- Design, test, and refine prompts for voice AI agents handling complex financial conversations
- Develop systematic prompt engineering methodologies and best practices for voice interactions
- Meta-prompt and run experiments with different LLMs, bringing in new agent architectures that achieve the lowest latencies possible while maintaining high accuracy
- Create prompt templates and frameworks that scale across different use cases and customer segments
- Implement A/B testing strategies to measure and improve prompt effectiveness

Data-Driven Optimization
- Analyze conversation transcripts and performance metrics to identify improvement opportunities
- Use our Simulation Platform and our customers' existing call corpus to tune and improve AI agent performance
- Develop automated evaluation frameworks for AI agent quality assessment
- Create feedback loops (semi- or fully automated) between production data and prompt refinements
- Optimize prompts for latency, accuracy, and natural conversation flow

Voice AI Development
- Collaborate with ML engineers to improve voice recognition and synthesis quality
- Design conversation flows that handle edge cases and error recovery gracefully
- Implement context management strategies for multi-turn conversations
- Develop domain-specific language models for financial services

Customer Success & Travel
- Travel to customer sites to learn from human agents and gather requirements
- Conduct on-site prompt optimization based on customer-specific needs to deliver rapid iterations of prototype versions

✅ Requirements

Must-Have Qualifications
- B.E/B.Tech/M.Tech in Computer Science, AI/ML, Linguistics, or equivalent
- Strong Python programming skills and experience with ML frameworks
- Ability to work with large-scale conversation data
- Excellent analytical and problem-solving skills
- Strong verbal and written communication skills and ability to explain technical concepts clearly
- Willingness to
travel in the US for customer engagements

Technical Skills
- Proficiency in prompt engineering techniques (few-shot learning, chain-of-thought, meta-prompting, etc.)
- Knowledge of SQL and data analysis tools (Pandas, NumPy)
- Experience with experiment tracking and MLOps tools
- Understanding of real-time system constraints and latency optimization

Bonus Qualifications
- Familiarity with voice biometrics and authentication
- Experience with real-time streaming architectures
- Published research or blog posts on prompt engineering or conversational AI
- Experience with multilingual voice AI systems

🎁 What We Offer

Job Benefits
- GenAI Experience – Work at the cutting edge of voice AI and prompt engineering, shaping the future of conversational AI in consumer finance
- World-class Team – Learn from and collaborate with experts from BCG, Deloitte, Meta, Amazon, and top institutes like IIT and IIM
- Continuous Education – Full sponsorship for courses, certifications, conferences, and learning materials related to AI and prompt engineering
- Travel Opportunities – Gain exposure to diverse customer environments and real-world AI implementations
- Health Insurance – Comprehensive coverage for you and your family
- Flexible Schedule – Work when you're most productive, with core collaboration hours
- Generous Leave Policy – Unlimited PTO to ensure you stay refreshed and creative
- Food at Office – All meals provided when working from office
- Recreation & Team Activities – Regular team bonding through sports, games, and social events

Our Tech Stack
- AI/ML: GPT, Claude, Gemini, custom LLMs
- Languages: Python
- Infrastructure: AWS (Lambda, SageMaker, EKS), LiveKit for real-time streaming
- Data: MongoDB, PostgreSQL, Databricks, Redis
- Tools: MLflow, custom prompt management platforms

Why This Role Matters

As a founding Agent Engineer focused on voice AI and prompt engineering, you'll directly impact how millions of consumers interact with financial services companies, and also be able to lay out how agent
engineering grows at Prodigal. Your work will have immense implications for consumer finance.

How Prodigal operates: setting the industry standard in Voice AI. From day 1, Prodigal has been defined by talented, humble, and hungry leaders, and we want this mindset and culture to continue to blossom from top to bottom in the company. If you have an entrepreneurial spirit and want to work in a fast-paced, intellectually stimulating environment where you will be pushed to grow, then please reach out, because we are looking to build a transformational company that reinvents one of the biggest industries in the US.

To learn more about us, please visit the following:
- Our Story - https://www.prodigaltech.com/our-story
- What shapes our thinking - https://link.prodigaltech.com/our-thesis
- Our website - https://www.prodigaltech.com/
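The A/B testing responsibility in this posting can be illustrated with a toy harness. Everything here is a stand-in: the base success rates are invented, and `run_variant` simulates call outcomes instead of scoring real transcripts with an automated evaluator.

```python
import random

# Hypothetical base success rates for two prompt variants; in a real system
# each call would be scored by an automated evaluation of the transcript.
PROMPT_RATES = {
    "A: terse single-sentence answers": 0.70,
    "B: confirm identity, then answer": 0.85,
}

def run_variant(base_rate, n_calls, seed):
    """Simulate n_calls Bernoulli outcomes and return the observed success rate."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    successes = sum(rng.random() < base_rate for _ in range(n_calls))
    return successes / n_calls

results = {
    name: run_variant(rate, n_calls=1000, seed=i)
    for i, (name, rate) in enumerate(PROMPT_RATES.items())
}
winner = max(results, key=results.get)
print(winner)  # the variant to promote
```

With 1,000 simulated calls per variant, the observed rates sit close to the true rates, so the comparison is stable; a production harness would also apply a significance test before promoting a variant.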
Posted 2 days ago
10.0 years
6 - 8 Lacs
Bengaluru
On-site
Company Description

Visa is a world leader in payments and technology, with over 259 billion payment transactions flowing safely between consumers, merchants, financial institutions, and government entities in more than 200 countries and territories each year. Our mission is to connect the world through the most innovative, convenient, reliable, and secure payments network, enabling individuals, businesses, and economies to thrive while driven by a common purpose: to uplift everyone, everywhere by being the best way to pay and be paid. Make an impact with a purpose-driven industry leader. Join us today and experience Life at Visa.

Job Description

Ready to make a global impact by industrializing AI? Visa AI as Services (AIaS) operationalizes the delivery of AI and decision intelligence to ensure their ongoing business value. Built with composable AI capabilities, privacy-enhancing computation, and cloud-native platforms, AIaS powers and automates the industrialization of data, models, and applications for predictive and generative AI. Combined with strong governance, AIaS optimizes the performance, scalability, interpretability, and reliability of AI models and services. If you want to be in the exciting payments and AI space, learn fast, and make big impacts, Visa AI as Services is an ideal place for you!

This role is for a Lead ML Engineer – Visa Feature Platform, with a strong development background, whose primary objective will be to extend our AI as a Service platform to provide faster time to market and build sophisticated feature engineering for training and inference, while enhancing and optimizing our existing codebase and development procedures as well as developing new solutions. We are seeking a strong tech leader and architect with a solid background in data engineering and AI/ML production systems.
The ideal candidate will have a strong mix of hands-on technical knowledge and leadership skills, with the ability to inspire and drive the team towards achieving our technical objectives. They should be hands-on, knowledgeable about cloud technologies, business drivers, and emerging AI/ML trends, and experienced with Big Data and streaming platforms. The role demands a proactive leader who can guide the team through uncertainty, educate stakeholders, and influence decisions. The candidate should possess strong interpersonal skills, excellent written and verbal communication, and the ability to manage complex projects and deadlines. A problem-solving mindset and a hands-on approach are key to succeeding in this role. This role offers ample opportunities for learning and growth, and the chance to be part of delivering the next big thing for our AI as Services team. If you are experienced and passionate about cloud technology, AI, and machine learning, and are excited about making a significant impact, we would love to hear from you.

Essential Functions:
- Collaborate with project team members (Product Managers, Architects, Analysts, Software Engineers, Project Managers, etc.) to ensure development and implementation of new data-driven business solutions.
- Drive development effort end-to-end for on-time delivery of high-quality solutions that conform to requirements, conform to the architectural vision, and comply with all applicable standards. Responsibilities span all phases of solution development.
- Collaborate with senior technical staff and PMs to identify, document, plan contingency for, track, and manage risks and issues until all are resolved.
- Present technical solutions, capabilities, considerations, and features in business terms. Effectively communicate status, issues, and risks in a precise and timely manner.
- Coach and mentor junior team members and evolve the team talent pipeline.

This is a hybrid position.
Expectation of days in the office will be confirmed by your Hiring Manager.

Qualifications

Basic Qualifications:
- 12 or more years of relevant work experience with a bachelor's degree, or at least 10 years of experience with an advanced degree (e.g., Master's, MBA, JD, MD), or 8 years of work experience with a PhD
- 4+ years of related hands-on experience in delivering robust and scalable solutions on Big Data applications
- Experience in at least one or two of the following: Rust, Python, Golang, Java, or C/C++. Experience with Rust, Flink, Spark, NoSQL, or Kafka highly preferred.
- Web service standards and related patterns (REST, gRPC).
- Experience implementing solutions for low-latency, distributed services using open-standard technologies, e.g., streaming systems, NoSQL, and containers.
- Exposure to leading-edge areas such as Machine Learning, Big Data, Distributed Systems, or SRE.

Additional Information

Visa is an EEO Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability or protected veteran status. Visa will also consider for employment qualified applicants with criminal histories in a manner consistent with EEOC guidelines and applicable local law.
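As a toy illustration of the streaming patterns this role calls for, the sketch below computes tumbling-window event counts in plain Python. The event data and names are invented; a production feature platform would use Flink or Kafka Streams rather than an in-memory loop.

```python
# Group timestamped events into fixed (tumbling) windows and count per key.
# Purely illustrative: event names and timestamps are made up.
from collections import defaultdict

def tumbling_window_counts(events, window_ms=1000):
    """events: iterable of (timestamp_ms, key); returns {window_index: {key: count}}."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        windows[ts // window_ms][key] += 1  # integer division assigns the window
    return {w: dict(counts) for w, counts in sorted(windows.items())}

events = [(100, "auth"), (250, "auth"), (900, "settle"),
          (1100, "auth"), (1990, "settle")]
print(tumbling_window_counts(events))
# {0: {'auth': 2, 'settle': 1}, 1: {'auth': 1, 'settle': 1}}
```

The same windowed-aggregation shape underlies low-latency feature computation (e.g., transactions per card per second) regardless of the engine executing it.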
Posted 2 days ago
0 years
0 Lacs
India
On-site
- Take full ownership of AI development, from research and prototyping to deployment and maintenance of language-model-based applications.
- Design and build intelligent AI agents using LLMs (e.g., GPT, Claude, LLaMA) that can perform reasoning, automation, and tool-based tasks.
- Architect and implement systems for prompt engineering, context management, and memory in AI agents.
- Select, fine-tune, or self-host language models to meet specific business needs, balancing cost, performance, and scalability.
- Build integrations with internal tools, APIs, and databases to enable agents to act autonomously in real-world use cases.
- Establish best practices for AI safety, testing, evaluation, and user experience, ensuring reliability and ethical outputs.
- Work closely with leadership to identify high-impact AI opportunities across the business and define the AI roadmap.
- Set up and manage infrastructure for running models (cloud, on-prem, or hybrid), including inference optimization and latency reduction.
- Stay current with developments in generative AI, multi-agent systems, and autonomous AI, bringing innovation to the organization.
- Document processes and build internal tools to scale future AI initiatives and potentially onboard additional team members.

Job Types: Full-time, Part-time, Fresher, Freelance, Volunteer
Pay: ₹180,000.00 - ₹1,150,329.84 per year
Work Location: In person
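A minimal sketch of the tool-using agent loop described above. The `llm_decide` policy is a hypothetical stand-in: a real agent would ask an LLM (GPT, Claude, LLaMA) to choose the tool and its arguments, then act on the result.

```python
# Toy agent loop: decide on a tool, call it, return the observation.
# Tool names and the decision rule are invented for illustration.
def get_weather(city):           # example tool
    return f"sunny in {city}"

def calculator(expression):      # example tool (trusted input only)
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"get_weather": get_weather, "calculator": calculator}

def llm_decide(task):
    """Stand-in for an LLM call: maps a task to (tool_name, argument)."""
    if "weather" in task:
        return "get_weather", task.split()[-1]
    return "calculator", task

def run_agent(task):
    tool_name, arg = llm_decide(task)
    observation = TOOLS[tool_name](arg)  # act, then observe
    return f"{tool_name} -> {observation}"

print(run_agent("weather Mumbai"))  # get_weather -> sunny in Mumbai
print(run_agent("2 + 3"))           # calculator -> 5
```

Production agents extend this loop with multi-step reasoning, conversation memory, and the safety checks the posting lists, but the decide-act-observe cycle is the same.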
Posted 2 days ago
5.0 years
3 - 7 Lacs
Ahmedabad
On-site
About the Role: Grade Level (for internal use): 10

The Team: The Capital IQ Solutions Data Science team supports the S&P Capital IQ Pro platform with innovative data science and machine learning solutions, utilizing the most advanced NLP and generative AI models. This role presents a unique opportunity for hands-on ML/NLP/Gen AI/LLM scientists and engineers to advance to the next step in their career journey and apply their technical expertise in NLP, deep learning, Gen AI, and LLMs to drive business value for multiple stakeholders while conducting cutting-edge applied research in LLMs, Gen AI, and related areas.

Responsibilities and Impact:
- Design solutions utilizing NLP models, including chat assistants and RAG systems.
- Design and develop custom NLP LLM models, including both prompt engineering techniques and model fine-tuning and alignment (SFT, RLHF, DPO).
- Evaluate NLP models using both human-supported and synthetic evaluation methods and metrics.
- Deploy NLP models, ensuring latency, reliability, and scalability.
- Discover new methods for prompt engineering, model fine-tuning, quantization and latency optimization, document embeddings, and chunking.
- Collaborate closely with product teams, business stakeholders, and engineers to ensure smooth integration of NLP models into production systems.
- Troubleshoot complex issues related to machine learning model development and data pipelines, and develop innovative solutions.
- Actively research, explore, and identify the latest relevant methods and technologies.

What We're Looking For:

Basic Required Qualifications:
- Degree in Computer Science, Mathematics or Statistics, Computational Linguistics, Engineering, or a related field.
- Good understanding of machine learning and deep learning methods and their mathematical foundations.
- 5-8 years of professional experience in Advanced Analytics / Data Science / Machine Learning.
- 5-8 years of hands-on experience developing NLP models, ideally with transformer architectures.
- Demonstrated experience with Python, PyTorch, Hugging Face, or similar tools.
- Mastery of Python and ability to write robust, high-standard, testable code.
- Knowledge of developing or tuning LLMs.

Additional Preferred Qualifications:
- 3+ years of experience with implementing information retrieval systems.
- Experience contributing to open-source initiatives or research projects, and/or participation in Kaggle competitions.
- Publications related to machine learning or deep learning.
- Ability to work in a team.
- Able to report progress and summarize issues to a less technical audience.
- Curious and open-minded attitude toward new approaches.

About S&P Global Market Intelligence

At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep, and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.

What's In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all.
From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership

At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you, and your career, need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here . ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. 
Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 317453 Posted On: 2025-06-30 Location: Ahmedabad, Gujarat, India
Posted 2 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description

- Experience with SonarQube, CI/CD, Tekton, Terraform, GCS, GCP Looker, Google Cloud Build, Cloud Run, Vertex AI, Airflow, TensorFlow, etc.
- Experience training, building, and deploying ML and DL models
- Experience with Hugging Face, Chainlit, and React
- Ability to understand technical, functional, non-functional, and security aspects of business requirements and deliver them end-to-end
- Ability to adapt quickly to open-source products and tools to integrate with ML platforms
- Building and deploying models (scikit-learn, TensorFlow, PyTorch, etc.)
- Developing and deploying on-prem and cloud environments: Kubernetes, Tekton, OpenShift, Terraform, Vertex AI
- Experience with LLM models like PaLM, GPT-4, and Mistral (open-source models)
- Work through the complete lifecycle of Gen AI model development, from training and testing to deployment and performance monitoring
- Develop and maintain AI pipelines with multiple modalities: text, image, audio, etc.
- Have implemented real-world chatbots or conversational agents at scale, handling different data sources
- Experience in developing image generation/translation tools using latent diffusion models such as Stable Diffusion or InstructPix2Pix
- Expertise in handling large-scale structured and unstructured data
- Efficiently handled large-scale generative AI datasets and outputs
- Familiarity with Docker tools and pipenv/conda/poetry environments
- Comfort following Python project management best practices (use of setup.py, logging, pytest, relative module imports, Sphinx docs, etc.)
- Familiarity with GitHub (clone, fetch, pull/push, raising issues and PRs, etc.)
- High familiarity with DL theory/practices in NLP applications
- Comfort coding in Hugging Face, LangChain, Chainlit, TensorFlow and/or PyTorch, scikit-learn, NumPy, and Pandas
- Comfort using two or more open-source NLP modules like spaCy, TorchText, fastai.text, farm-haystack, and others
- Knowledge of fundamental text data processing (use of regex, token/word analysis, spelling correction/noise reduction in text, segmenting noisy or unfamiliar sentences/phrases at the right places, deriving insights from clustering, etc.)
- Have implemented real-world BERT or other transformer fine-tuned models (sequence classification, NER, or QA) from data preparation and model creation through inference and deployment
- Use of GCP services like BigQuery, Cloud Functions, Cloud Run, Cloud Build, and Vertex AI
- Good working knowledge of other open-source packages to benchmark and derive summaries
- Experience using GPUs/CPUs of cloud and on-prem infrastructures
- Skill set to leverage cloud platforms for Data Engineering, Big Data, and ML needs
- Use of Docker (experience with experimental Docker features, docker-compose, etc.)
- Familiarity with orchestration tools such as Airflow and Kubeflow
- Experience with CI/CD and infrastructure-as-code tools like Terraform
- Kubernetes or any other containerization tool, with experience in Helm, Argo Workflows, etc.
- Ability to develop APIs with compliance and ethical, secure, and safe AI tools
- Good UI skills to visualize and build better applications using Gradio, Dash, Streamlit, React, Django, etc.
- Deeper understanding of JavaScript, CSS, Angular, HTML, etc., is a plus
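The fundamental text-processing items above can be sketched with stdlib regex alone. The patterns here are minimal examples; real pipelines would add spelling correction and proper sentence segmentation on top.

```python
# Minimal regex-based text cleanup and tokenization; patterns are examples only.
import re

def clean(text):
    text = re.sub(r"<[^>]+>", " ", text)               # strip HTML remnants
    text = re.sub(r"http\S+", " ", text)               # drop URLs
    text = re.sub(r"[^a-z0-9\s]", " ", text.lower())   # keep lowercase alphanumerics
    return re.sub(r"\s+", " ", text).strip()           # collapse whitespace

def tokenize(text):
    return clean(text).split()

noisy = "<b>Visit</b> https://example.com NOW!!  Great   deals, call 555-0100."
print(tokenize(noisy))
# ['visit', 'now', 'great', 'deals', 'call', '555', '0100']
```

Cleanup of this kind typically precedes token/word analysis or model input preparation, since tokenizer vocabularies behave poorly on markup and URL fragments.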
Responsibilities
- Design NLP/LLM/GenAI applications/products by following robust coding practices
- Explore SoTA models/techniques so that they can be applied to automotive industry use cases
- Conduct ML experiments to train/infer models; if need be, build models that abide by memory and latency restrictions
- Deploy REST APIs or a minimalistic UI for NLP applications using Docker and Kubernetes tools
- Showcase NLP/LLM/GenAI applications to users in the best way possible through web frameworks (Dash, Plotly, Streamlit, etc.)
- Converge multiple bots into super apps using LLMs with multiple modalities
- Develop agentic workflows using AutoGen, Agent Builder, and LangGraph
- Build modular AI/ML products that can be consumed at scale

Qualifications
Bachelor's or Master's degree in Computer Science or Engineering. Having taken modern NLP/LLM courses or participated in open competitions is an added advantage.
Posted 2 days ago
6.0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Organization: At CommBank, we never lose sight of the role we play in other people's financial wellbeing. Our focus is to help people and businesses move forward and progress, to make the right financial decisions and achieve their dreams, targets, and aspirations. Regardless of where you work within our organisation, your initiative, talent, ideas, and energy all contribute to the impact that we can make with our work. Together we can achieve great things.

Job Title: Data Scientist
Location: Bangalore
Business & Team: BB Advanced Analytics and Artificial Intelligence COE

Impact & Contribution: As a Senior Data Scientist, you will be instrumental in pioneering Gen AI and multi-agentic systems at scale within CommBank. You will architect, build, and operationalize advanced generative AI solutions, leveraging large language models (LLMs), collaborative agentic frameworks, and state-of-the-art toolchains. You will drive innovation, helping set the organizational strategy for advanced AI, multi-agent collaboration, and responsible next-gen model deployment.

Roles & Responsibilities:
- Gen AI Solution Development: Lead end-to-end development, fine-tuning, and evaluation of state-of-the-art LLMs and multi-modal generative models (e.g., transformers, GANs, VAEs, diffusion models) tailored for financial domains.
- Multi-Agentic System Engineering: Architect, implement, and optimize multi-agent systems, enabling swarms of AI agents (utilizing frameworks like LangChain, LangGraph, and MCP) to dynamically collaborate, chain, reason, critique, and autonomously execute tasks.
- LLM-Backed Application Design: Develop robust, scalable GenAI-powered APIs and agent workflows using FastAPI, Semantic Kernel, and orchestration tools. Integrate observability and evaluation using Langfuse for tracing, analytics, and prompt/response feedback loops.
- Guardrails & Responsible AI: Employ frameworks like Guardrails AI to enforce robust safety, compliance, and reliability in LLM deployments.
Establish programmatic checks for prompt injections, hallucinations, and output boundaries. Enterprise-Grade Deployment: Productionize and manage at-scale Gen AI and agent systems with cloud infrastructure (GCP/AWS/Azure), utilizing model optimization (quantization, pruning, knowledge distillation) for latency/throughput trade-offs. Toolchain Innovation: Leverage and contribute to open-source projects in the Gen AI ecosystem (e.g., LangChain, LangGraph, Semantic Kernel, Langfuse, Hugging Face, FastAPI). Continuously experiment with emerging frameworks and research. Stakeholder Collaboration: Partner with product, engineering, and business teams to define high-impact use cases for Gen AI and agentic automation; communicate actionable technical strategies and drive proof-of-value experiments into production. Mentorship & Thought Leadership: Guide junior team members in best practices for Gen AI, prompt engineering, agentic orchestration, responsible deployment, and continuous learning. Represent CommBank in the broader AI community through papers, patents, talks, and open-source contributions. Essential Skills: 6+ years of hands-on experience in Machine Learning, Deep Learning, or Generative AI domains, including practical expertise with LLMs, multi-agent frameworks, and prompt engineering. Proficient in building and scaling multi-agent AI systems using LangChain, LangGraph, Semantic Kernel, MCP, or similar agentic orchestration tools. Advanced experience developing and deploying Gen AI APIs using FastAPI; operational familiarity with Langfuse for LLM evaluation, tracing, and error analytics. Demonstrated ability to apply guardrails to enforce model safety, explainability, and compliance in production environments. Experience with transformer architectures (BERT/GPT, etc.), fine-tuning LLMs, and model optimization (distillation/quantization/pruning). 
Strong software engineering background (Python), with experience in enterprise-grade codebases and cloud-native AI deployments. Experience integrating open and commercial LLM APIs and building retrieval-augmented generation (RAG) pipelines. Exposure to agent-based reinforcement learning, agent simulation, and swarm-based collaborative AI. Familiarity with robust experimentation using tools like LangSmith, GitHub Copilot, and experiment tracking systems. Proven track record of driving Gen AI innovation and adoption in cross-functional teams. Papers, patents, or open-source contributions to the Gen AI/LLM/Agentic AI ecosystem. Experience with financial services or regulated industries for secure and responsible deployment of AI. Education Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or Information Technology. If you're already part of the Commonwealth Bank Group (including Bankwest, x15ventures), you'll need to apply through Sidekick to submit a valid application. We’re keen to support you with the next step in your career. We're aware of some accessibility issues on this site, particularly for screen reader users. We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696. Advertising End Date: 01/07/2025
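The programmatic checks for prompt injection mentioned in this posting could look like the minimal sketch below. All pattern strings and function names are illustrative assumptions, not CommBank's actual implementation; a real guardrail layer (e.g., Guardrails AI) would combine classifiers, allow-lists, and output validators rather than regexes alone.

```python
import re

# Illustrative patterns only -- a production guardrail would use far
# richer detection than a handful of regular expressions.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard your system prompt",
    r"reveal your (system prompt|instructions)",
]

def screen_prompt(prompt: str) -> tuple:
    """Return (is_safe, matched_patterns) for a user prompt."""
    hits = [p for p in INJECTION_PATTERNS
            if re.search(p, prompt, flags=re.IGNORECASE)]
    return (not hits, hits)

# A benign prompt passes; an obvious override attempt is flagged.
safe, matched = screen_prompt("Summarize this quarterly report")
```

A check like this would typically run before the prompt reaches the LLM, with flagged prompts routed to logging and human review.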
Posted 2 days ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About the Role As a Senior Engineering Manager at IOL, you will lead a high-performing engineering team responsible for building and scaling our B2B hospitality marketplace, which processes billions of searches, price verifications, and bookings daily. This role combines technical leadership with hands-on contributions, including code reviews and building proofs of concept (PoCs). You will mentor engineers, drive hiring initiatives, and shape the technical vision for our platform, ensuring robust, scalable, and innovative solutions. With a focus on technologies like Golang, Python, and Elasticsearch, you will guide the team in optimizing performance and delivering seamless experiences for our demand and supply partners. Key Responsibilities • Technical Leadership : Provide architectural guidance and perform code reviews to ensure high-quality, maintainable codebases in Golang and Python. Build PoCs to validate technical approaches and explore innovative solutions. • Team Management : Hire, mentor, and grow a diverse team of engineers, fostering a culture of collaboration, innovation, and continuous improvement. • Project Oversight : Collaborate with cross-functional teams (e.g., Data Team, platform engineers, product managers) to define project requirements, set technical priorities, and deliver scalable solutions on time. • System Optimization : Oversee the development and optimization of high-throughput systems, leveraging Elasticsearch for search and analytics, and ensuring low-latency performance for massive workloads (e.g., 2 billion daily searches). • Process Improvement : Implement and refine engineering processes, including CI/CD pipelines, agile methodologies, and best practices for code quality and system reliability. • Strategic Planning : Align team objectives with company goals, contributing to the technical roadmap for IOL’s hospitality marketplace. 
• Innovation: Stay current with industry trends in distributed systems, cloud platforms, and search technologies, proposing novel approaches to enhance system capabilities. • Stakeholder Communication: Present technical strategies and project updates to stakeholders, translating complex concepts into clear, actionable insights. Required Skills & Qualifications • Education: Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field. • Experience: o 10+ years of software engineering experience, with at least 3 years in a technical leadership or engineering management role. o Proven track record of hiring, mentoring, and scaling high-performing engineering teams. o Extensive hands-on experience with Golang and Python in production environments. o Strong background in performing code reviews and building PoCs to drive technical decisions. • Technical Skills: o Deep expertise in Golang and Python for building scalable, high-performance systems. o Proficiency with Elasticsearch for search, indexing, and analytics in large-scale datasets. o Familiarity with distributed systems and big data technologies (e.g., Apache Spark, Kafka, Redis). o Experience with cloud platforms (e.g., AWS, Azure, GCP) for deployment and scaling. o Knowledge of version control systems (e.g., Git) and CI/CD pipelines (e.g., Azure DevOps). • Leadership Skills: o Exceptional ability to mentor engineers, resolve conflicts, and foster professional growth. o Strong problem-solving skills to address complex technical and team challenges. o Excellent communication skills to collaborate with cross-functional teams and present to stakeholders. • Work Style: Proactive, adaptable, and able to thrive in a fast-paced, innovative environment. Preferred Skills • Experience in the hospitality or travel industry, particularly with search or booking systems. • Familiarity with machine learning frameworks (e.g., TensorFlow, PyTorch) or predictive modeling. 
• Knowledge of real-time data streaming and event-driven architectures (e.g., Apache Kafka). • Exposure to Azure cloud services (e.g., Azure App Service, Azure SQL Database, KeyVault). • Experience optimizing systems for resource-constrained or low-latency environments.
Posted 2 days ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Requirements Description and Requirements Position Summary The SQL Database Administrator is responsible for the design, implementation, and support of database systems for applications across the enterprise. The Database Administrator is part of the end-to-end database delivery team, working and collaborating with Application Development, Infrastructure Engineering, and Operations Support teams to deliver and support secure, high-performing, and optimized database solutions. The Database Administrator specializes in the SQL database platform. Job Responsibilities Manages design, distribution, performance, replication, security, availability, and access requirements for large and complex SQL & Sybase databases. Designs and develops physical layers of databases to support various application needs; Implements back-up, recovery, archiving, conversion strategies, and performance tuning; Manages job scheduling, application release, database change and compliance. Identifies and resolves problems utilizing structured tools and techniques. Provides technical assistance and mentoring to staff in all aspects of database management; Consults and advises application development teams on database security, query optimization and performance. Writes scripts for automating routine DBA tasks and documents database maintenance processing flows per standards. Implements industry best practices while performing database administration tasks. Works in an Agile model with an understanding of Agile concepts. Collaborates with development teams to provide and implement new features. Able to debug production issues by analyzing logs directly and using tools like Splunk. Begins tackling organizational impediments. Learns new technologies based on demand and helps team members by coaching and assisting. 
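The kind of routine-task automation scripting described above might be sketched as follows. The directory path, file extension, and age threshold are hypothetical; a real SQL Server script would query `msdb` backup history rather than inspect file timestamps.

```python
import os
import time

# Hypothetical policy: flag .bak files last modified more than 24h ago.
MAX_BACKUP_AGE_HOURS = 24

def stale_backups(backup_dir, now=None):
    """Return names of backup files older than the age threshold."""
    now = time.time() if now is None else now
    cutoff = now - MAX_BACKUP_AGE_HOURS * 3600
    return [f for f in sorted(os.listdir(backup_dir))
            if f.endswith(".bak")
            and os.path.getmtime(os.path.join(backup_dir, f)) < cutoff]
```

A scheduler (SQL Agent job, cron, or Ansible playbook) would run a check like this and raise an alert, e.g. via ServiceNow, when the returned list is non-empty.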
Education, Technical Skills & Other Critical Requirements Education Bachelor’s degree in Computer Science, Information Systems, or another related field with 3+ years of IT and infrastructure engineering work experience. Experience (In Years) 3+ years total IT experience & 2+ years relevant experience in SQL Server + Sybase database technologies Technical Skills Database Management: Basic knowledge of managing and administering SQL Server, Azure SQL Server, and Sybase databases, ensuring high availability and optimal performance. Data Infrastructure & Security: Basic knowledge of designing and implementing robust data infrastructure solutions, with a strong focus on data security and compliance. Backup & Recovery: Skilled in developing and executing comprehensive backup and recovery strategies to safeguard critical data and ensure business continuity. Performance Tuning & Optimization: Adept at performance tuning and optimization of databases, leveraging advanced techniques to enhance system efficiency and reduce latency. Cloud Computing & Scripting: Basic knowledge of cloud computing environments and proficiency in operating system scripting, enabling seamless integration and automation of database operations. Management of database elements, including creation, alteration, deletion and copying of schemas, databases, tables, views, indexes, stored procedures, triggers, and declarative integrity constraints. Basic analytical skills to improve application performance. Basic knowledge of database performance tuning, backup & recovery, Infrastructure as Code, and observability tools (Elastic). Strong knowledge of ITSM processes and tools (ServiceNow). 
Ability to work 24x7 rotational shifts to support the database platforms Other Critical Requirements Automation tooling and programming skills such as Ansible and Python are preferable Excellent analytical and problem-solving skills Excellent written and oral communication skills, including the ability to clearly communicate/articulate technical and functional issues with conclusions and recommendations to stakeholders. Demonstrated ability to work independently and in a team environment About MetLife Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World’s 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world’s leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we’re inspired to transform the next century in financial services. At MetLife, it’s #AllTogetherPossible. Join us!
Posted 2 days ago
2.0 - 4.0 years
0 Lacs
Gurugram, Haryana, India
Remote
📍 Location: Remote / Gurugram 🕒 Experience: 2-4 years 🏢 Company: EnDecarb.ai – India’s first AI-powered Decarbonization Platform 🌱 Sector: ClimateTech / Sustainability / SaaS About EnDecarb.ai EnDecarb.ai is building India’s first AI-powered real-time decarbonization platform to help manufacturers, suppliers, and financiers monitor, report, and reduce their emissions (Scope 1/2/3) with precision. What You'll Do Build and maintain scalable .NET Core backend APIs for real-time data ingestion and ESG analytics. Design and query PostgreSQL and time-series databases (TimescaleDB/InfluxDB) for high-volume telemetry workloads. Develop microservices using clean architecture principles and support containerized deployments via Docker. Implement data validation, buffering, and streaming mechanisms for reliable, low-latency performance. Collaborate with frontend, AI, and infrastructure teams to build an end-to-end decarbonization pipeline. Tech Stack You'll Work With • Languages: C#, .NET 8 (ASP.NET Core Web API) • Databases: PostgreSQL (with TimescaleDB), InfluxDB, Redis • Messaging/Streaming: MQTT, SignalR, WebSockets • Tools: Docker, Git, REST APIs, Swagger • Bonus: Knowledge of Prometheus, Grafana, or OpenTelemetry Requirements Strong foundation in C# and .NET Core Good grasp of SQL and PostgreSQL database design Understanding of asynchronous programming, REST API design, and microservice architecture Willingness to work with real-time data, sensor integration, and low-latency systems Eagerness to learn fast and work in a cross-functional, mission-driven team Bonus Skills (Not Mandatory but a Plus) Experience with time-series databases (TimescaleDB, InfluxDB) Familiarity with MQTT or Modbus/OPC-UA Experience deploying on AWS, using CI/CD pipelines Why Join Us? Be a founding team member solving India’s biggest climate and energy challenge. Work at the intersection of climate tech, real-time systems, and AI. 
Build something with purpose — your code will help reduce emissions in real factories. Salary as per Industry standard
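The "data validation, buffering, and streaming" responsibility in this posting can be sketched language-agnostically. The production stack is .NET 8, so the Python below is only an illustration of the pattern; the class name, validation rule, and batch size are made-up assumptions.

```python
from collections import deque

class TelemetryBuffer:
    """Buffer validated sensor readings and flush them to a sink in
    batches -- a toy stand-in for a .NET ingestion service."""

    def __init__(self, flush_size=3, sink=None):
        self.flush_size = flush_size
        self.sink = sink or (lambda batch: None)  # e.g., a TimescaleDB writer
        self._buf = deque()
        self.flushed = []  # kept for inspection in this sketch

    def add(self, reading):
        # Hypothetical validation rule: drop readings without a value.
        if "value" not in reading:
            return
        self._buf.append(reading)
        if len(self._buf) >= self.flush_size:
            self.flush()

    def flush(self):
        if self._buf:
            batch = list(self._buf)
            self._buf.clear()
            self.flushed.append(batch)
            self.sink(batch)
```

Batching like this trades a little latency for far fewer database round-trips, which matters at high telemetry volumes.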
Posted 2 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Company Description TrueFan uses proprietary AI technology to connect fans and celebrities and is now focused on revolutionizing customer-business interactions with AI-powered personalized video solutions. Our platform enables brands to create unique, engaging video experiences that drive customer loyalty and deeper connections. Job Description DevOps with MLOps Engineer Company Overview We are a cutting-edge AI company focused on developing advanced lip-syncing technology using deep neural networks. Our solutions enable seamless synchronisation of speech with facial movements in videos, creating hyper-realistic content for various industries such as entertainment, marketing, and more. Position: MLOps Engineer We are looking for a talented and motivated MLOps Engineer to join our team. The ideal candidate will play a crucial role in managing and scaling our machine learning models and infrastructure, enabling seamless deployment and automation of our lip-sync video generation systems. Key Responsibilities Model Training/Deployment Pipelines and Monitoring: Design, implement, and maintain scalable and automated pipelines for deploying deep neural network models. Monitor and manage production models, ensuring high availability, low latency, and smooth performance. Automate workflows for data preprocessing (face alignment, feature extraction, audio analysis), model retraining, and video generation. Implement logging, tracking, and monitoring systems to ensure data integrity and visibility into the model lifecycle. Infrastructure Management: Build and manage cloud-based infrastructure (AWS, GCP, or Azure) for efficient model training, deployment, and data storage. Collaborate with DevOps to manage containerization (Docker, Kubernetes) and ensure robust CI/CD pipelines using GitHub and Jenkins for model delivery. Monitor resources for GPU/CPU-intensive tasks like video processing, model inference, and training using Prometheus, Grafana, Alertmanager, and the ELK stack. 
Collaboration: Work closely with ML engineers to integrate models into production pipelines. Provide tools and frameworks for rapid experimentation and model versioning. Required Skills Basic Python Strong experience with cloud platforms (AWS, GCP, Azure) and cloud-based machine learning services. Expert knowledge of containerization technologies (Docker, Kubernetes) and infrastructure-as-code (Terraform, CloudFormation) Understanding of deploying both synchronous and asynchronous APIs using Flask, Django, Celery, Redis, RabbitMQ, and Kafka Experience deploying and scaling AI/ML models in production. Familiarity with deep learning frameworks (TensorFlow, PyTorch). Familiarity with video processing tools like FFmpeg and dlib for handling dynamic frame data. Basic understanding of ML models Preferred Qualifications Experience in image and video-based deep learning tasks. Familiarity with media streaming and video processing pipelines for real-time generation. Experience with real-time inference and deploying models in latency-sensitive environments. Strong problem-solving skills with a focus on optimising machine learning model infrastructure for scalability and performance.
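The latency-monitoring concern in this posting can be illustrated with a minimal timing wrapper. The budget value, function names, and the stand-in `infer` are hypothetical; in production one would export these measurements through a Prometheus client library and alert via Alertmanager.

```python
import time
from functools import wraps

LATENCY_BUDGET_S = 0.5   # hypothetical per-inference budget
slow_calls = []          # stand-in for a metrics backend

def track_latency(fn):
    """Record wall-clock latency per call; log budget violations."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        elapsed = time.perf_counter() - start
        if elapsed > LATENCY_BUDGET_S:
            slow_calls.append((fn.__name__, elapsed))
        return result
    return wrapper

@track_latency
def infer(frame):
    # Stand-in for lip-sync model inference on a single video frame.
    return {"frame": frame, "lipsync_score": 0.97}
```

Wrapping the inference entry point keeps instrumentation out of model code, so the same decorator covers every deployed model version.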
Posted 2 days ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Responsibilities: To work as a backend engineer with our Core Shopping Experience engineering team To build highly available and scalable system solutions that power our apps and interfaces regardless of whether they are on the browser or on Android/iOS A smart, dynamic individual with outstanding programming skills and a great passion for developing beautiful, innovative software Building high-level architecture of several systems and contributing to the overall success of the product by driving technology and best practices in engineering Responsible for building a highly scalable, extensible, reliable, and available platform Driving optimizations to scale out the platform in order to support an exponentially growing number of transactions Translating complex business use cases and wireframes into high-quality scalable code Must help in establishing industry best practices in architecture, design, and SDLC practices and drive their adoption Qualifications & Experience: At least 3 years of experience in designing, developing, testing, and deploying large-scale applications in any language or stack. Ability to design and implement low-latency RESTful services. Strong in PHP, JavaScript, Node.js, or Python (proficient in at least one) Conceptually familiar with a cloud tech stack that includes AWS, real-time message brokers, SQL, NoSQL, Continuous Delivery, and Configuration Management. Good knowledge, understanding, and experience of working with a large variety of multi-tier architectures
Posted 2 days ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
About The Role Grade Level (for internal use): 10 The Team The Capital IQ Solutions Data Science team supports the S&P Capital IQ Pro platform with innovative Data Science and Machine Learning solutions, utilizing the most advanced NLP and generative AI models. This role presents a unique opportunity for hands-on ML/NLP/Gen AI/LLM scientists and engineers to advance to the next step in their career journey and apply their technical expertise in NLP, deep learning, Gen AI, and LLMs to drive business value for multiple stakeholders while conducting cutting-edge applied research in LLMs, Gen AI, and related areas. Responsibilities And Impact Design solutions utilizing NLP models, including chat assistants and RAG systems. Design and develop custom NLP LLM models, including both prompt engineering techniques and model fine-tuning and alignment (SFT, RLHF, DPO) NLP model evaluation using both human-supported and synthetic evaluation methods and metrics. Deploy NLP models ensuring latency, reliability, and scalability. Discover new methods for prompt engineering, model fine-tuning, quantization and latency optimization, document embeddings and chunking. Collaborate closely with product teams, business stakeholders, and engineers to ensure smooth integration of NLP models into production systems. Troubleshoot complex issues related to machine learning model development and data pipelines and develop innovative solutions. Actively research, explore and identify the latest relevant methods and technologies What We’re Looking For Basic Required Qualifications: Degree in Computer Science, Mathematics or Statistics, Computational Linguistics, Engineering, or a related field. Good understanding of machine learning and deep learning methods and their mathematical foundations 5-8 years of professional experience in Advanced Analytics / Data Science / Machine Learning 5-8 years hands-on experience developing NLP models, ideally with transformer architectures. 
Demonstrated experience with Python, PyTorch, Hugging Face or similar tools. Mastery of Python and ability to write robust, high-standard, testable code Knowledge of developing or tuning LLMs Additional Preferred Qualifications 3+ years of experience with implementing information retrieval systems. Experience with contributing to open-source initiatives or research projects and/or participation in Kaggle competitions. Publications related to Machine Learning or Deep Learning Ability to work in a team Able to report progress and summarize issues to a less technical audience Curious and open-minded attitude to new approaches About S&P Global Market Intelligence At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence. What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. 
From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you and your career need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference. 
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. 
Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 317453 Posted On: 2025-06-30 Location: Ahmedabad, Gujarat, India
Posted 2 days ago
4.0 years
0 Lacs
Nagpur, Maharashtra, India
On-site
Job Title: Data Engineer – AWS Data Pipelines & ETL Location: Nagpur, Maharashtra, India (On-site) Experience: 4+ years Work Type: Direct Hire/Permanent Employment We are seeking an experienced and motivated Data Engineer to design, develop, and maintain scalable data pipelines and ETL (Extract, Transform, Load) processes in an AWS cloud environment. This role will be critical in enabling efficient data integration, transformation, and reporting across the organization, driving data-driven decision-making. Key Responsibilities: Design, Develop, and Maintain Scalable Data Pipelines: - Architect, build, and manage robust and scalable data pipelines to ingest, transform, and load data from various structured and unstructured sources into data lakes or data warehouses. - Leverage AWS-native services to build cloud-native solutions that are highly available, reliable, and cost-efficient. Collaborate with Stakeholders: - Work closely with data architects, analysts, business intelligence teams, and other stakeholders to gather data requirements, understand reporting and analytics needs, and translate them into actionable technical solutions. - Act as a technical liaison to align data engineering efforts with organizational goals. Implement Data Integration and Transformation: - Develop and automate ETL workflows using AWS Glue, Redshift, S3, Lambda, Athena, Step Functions, and other relevant AWS services. - Integrate data from disparate systems, ensuring data consistency and conformity to standards. Ensure Data Quality, Integrity, and Security: - Establish and maintain data validation, cleansing, and monitoring processes to guarantee high data quality across all stages of the data lifecycle. - Implement security best practices for data access control, encryption, and compliance with regulatory requirements. Optimize SQL and Query Performance: - Write, debug, and optimize complex SQL queries for data extraction, transformation, and loading. 
- Optimize queries and processing workflows for performance, scalability, and cost-efficiency on large datasets. Performance Tuning and Optimization: - Continuously monitor and improve the performance of data pipelines, addressing bottlenecks, reducing latency, and minimizing processing costs. Implement Rigorous Testing and Validation: - Establish unit tests, integration tests, and validation frameworks to ensure the accuracy, completeness, and reliability of data pipelines. - Perform root cause analysis and troubleshoot data discrepancies and failures in ETL processes. Documentation and Knowledge Sharing: - Develop and maintain clear, comprehensive documentation for data pipelines, workflows, architecture diagrams, and standard operating procedures. - Create technical guides and training materials to support cross-functional teams in utilizing data platforms. Technology Requirements: AWS Cloud Services (Required): - AWS Glue - AWS Redshift - Amazon S3 - AWS Lambda - AWS Athena - AWS Step Functions - CloudWatch (for monitoring and logging) Databases and Data Warehousing (Required): - PostgreSQL, MySQL (or other RDBMS) - Redshift Spectrum - Exposure to NoSQL systems (optional) Data Integration and Transformation Tools (Nice to have): - PySpark, Apache Spark - Pandas (Python) - SQL-based ETL solutions Programming Languages (Nice to have): - Python (preferred) - SQL - (Optional: Scala, Java for Spark-based pipelines) Workflow Orchestration (Nice to have): - Airflow, AWS Step Functions, or similar Version Control & DevOps (Nice to have): - Git - Experience with CI/CD pipelines for data workflows - Infrastructure as Code (CloudFormation, Terraform) (optional but preferred) BI and Querying Tools (Nice to have): - AWS QuickSight - Tableau, Power BI (preferred exposure) Qualifications: · Proven experience building ETL pipelines and data integration solutions on AWS. · Strong expertise in SQL, data modeling, and query optimization. 
· Familiarity with data security, governance, and compliance best practices. · Hands-on experience with data lake and data warehouse architectures. · Excellent problem-solving, debugging, and troubleshooting skills. · Ability to work collaboratively in a cross-functional, agile environment. · Strong communication and documentation skills.
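The validate-transform-load flow this posting describes can be sketched end to end. SQLite stands in for Redshift here purely so the example is self-contained; the table schema, column names, and cleansing rules are illustrative assumptions, and a real pipeline would run inside AWS Glue or Step Functions.

```python
import sqlite3

def run_etl(rows):
    """Toy extract-transform-load: validate and normalize raw records,
    then load them into a local SQLite table (stand-in for Redshift)."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    # Validation: drop records missing a region or amount;
    # transformation: normalize region names and coerce amounts to float.
    clean = [(r["region"].strip().upper(), float(r["amount"]))
             for r in rows
             if r.get("region") and r.get("amount") is not None]
    conn.executemany("INSERT INTO sales VALUES (?, ?)", clean)
    return conn

conn = run_etl([
    {"region": " east ", "amount": "10.5"},
    {"region": "west", "amount": 4},
    {"region": None, "amount": 7},   # dropped by validation
])
totals = dict(conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region"))
```

The same shape scales up: the validation step becomes a Glue job or PySpark transform, and the load step targets Redshift via `COPY` from S3 rather than row inserts.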
Posted 2 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: DevOps Engineer (3-8+ Years Experience) Location: Bengaluru, India Job Type: Full-time Experience: 3-8+ years Industry: Financial Technology / Software Development About Us We are a cutting-edge software development company specializing in ultra-low latency trading applications for brokers, proprietary trading firms, and institutional investors. Our solutions are designed for high-performance, real-time trading environments, and we are looking for a DevOps Engineer to enhance our deployment pipelines, infrastructure automation, and system reliability. For more info, please visit: https://tradelab.in/ Responsibilities CI/CD & Infrastructure Automation Design, implement, and manage CI/CD pipelines for rapid and reliable software releases. Automate deployments using Terraform, Helm, and Kubernetes. Optimize build and release processes to support high-frequency, low-latency trading applications. Good knowledge of Linux/Unix Cloud & On-Prem Infrastructure Management Deploy and manage cloud-based (AWS, GCP) and on-premises infrastructure. Ensure high availability and fault tolerance of critical trading systems. Implement infrastructure as code (IaC) to standardize deployments. Performance Optimization & Monitoring Monitor system performance, network latency, and infrastructure health using tools like Prometheus, Grafana, and ELK. Implement automated alerting and anomaly detection for real-time issue resolution. Security & Compliance Implement DevSecOps best practices to ensure secure deployments. Maintain compliance with financial industry regulations (SEBI). Conduct vulnerability scanning, access control, and log monitoring. Collaboration & Troubleshooting Work closely with development, QA, and trading teams to ensure smooth deployments. Troubleshoot server, network, and application issues under tight SLAs. Required Skills & Qualifications ✅ 5+ years of experience as a DevOps Engineer in a software development or trading environment. 
✅ Strong expertise in CI/CD tools (Jenkins, GitLab CI/CD, ArgoCD).
✅ Proficiency in cloud platforms (AWS, GCP) and containerization (Docker, Kubernetes).
✅ Experience with Infrastructure as Code (IaC) using Terraform or CloudFormation.
✅ Deep understanding of Linux system administration and networking (TCP/IP, DNS, firewalls).
✅ Knowledge of monitoring & logging tools (Prometheus, Grafana, ELK).
✅ Experience in scripting and automation using Python, Bash, or Go.
✅ Understanding of security best practices (IAM, firewalls, encryption).

Good-to-Have (Not Mandatory) Skills
➕ Experience with low-latency trading infrastructure and market data feeds.
➕ Knowledge of high-frequency trading (HFT) environments.
➕ Exposure to FIX protocol, FPGA, and network optimizations.
➕ Experience with Redis and Nginx for real-time data processing.

Perks & Benefits
⭐ Competitive salary & performance bonuses
⭐ Opportunity to work in the high-frequency trading and fintech industry
⭐ Flexible work environment with hybrid work options
⭐ Cutting-edge tech stack and infrastructure
⭐ Health insurance & wellness programs
⭐ Continuous learning & certification support
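The automated alerting and anomaly-detection responsibility above is tool-agnostic in the posting; as a minimal illustrative sketch (the function name, window size, and threshold are assumptions, not this team's actual tooling), a trailing-window z-score check over latency samples could look like:

```python
from statistics import mean, stdev

def detect_latency_anomalies(samples_ms, window=20, threshold=3.0):
    """Flag samples whose z-score against the trailing window exceeds threshold."""
    anomalies = []
    for i in range(window, len(samples_ms)):
        baseline = samples_ms[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(samples_ms[i] - mu) / sigma > threshold:
            anomalies.append((i, samples_ms[i]))
    return anomalies

# Steady ~10 ms latency with one injected spike at index 25.
samples = [10.0 + 0.1 * (i % 5) for i in range(30)]
samples[25] = 50.0
print(detect_latency_anomalies(samples))  # [(25, 50.0)]
```

In practice a detector like this would sit behind Prometheus/Grafana alert rules rather than raw Python, but the windowed-baseline idea is the same.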
Posted 2 days ago
2.0 years
0 Lacs
India
Remote
Senior Machine Learning Engineer (AI-Powered Software Platform for Hidden Physical-Threat Detection & Real-Time Intelligence)

About the Company:
Aerobotics7 (A7) is a mission-driven deep-tech startup focused on developing a UAV-based, next-gen sensing and advanced AI platform to detect, identify, and mitigate hidden threats like landmines, UXOs, and IEDs in real time. We are embarking on a rapid development phase, creating innovative solutions leveraging cutting-edge technologies. Our dynamic team is committed to building impactful products through continuous learning and close cross-collaboration.

Position Overview:
We are seeking a Senior Machine Learning Engineer with a strong research orientation to join our team. This role will focus on developing and refining proprietary machine learning models for drone-based landmine detection and mitigation. The ideal candidate will design, develop, and optimize advanced ML workflows with an emphasis on rigorous research, novel model development, and experimental validation in deep learning, multi-modal/sensor fusion, and computer vision applications.

Key Responsibilities:
Lead the end-to-end AI model development process, including research, experimentation, design, and implementation.
Architect, train, and deploy deep learning models on cloud (GCP) and edge devices, ensuring real-time performance.
Develop and optimize multi-modal ML/DL models integrating multiple sensor inputs.
Implement and fine-tune CNNs, Vision Transformers (ViTs), and other deep-learning architectures.
Design and improve sensor fusion techniques for enhanced perception and decision-making.
Optimize AI inference for low-latency and high-efficiency deployment in production.
Cross-collaborate with software and hardware teams to integrate AI solutions into mission-critical applications.
Develop scalable pipelines for model training, validation, and continuous improvement.
Ensure robustness, interpretability, and security of AI models in deployment.
Required Skills:
• Strong expertise in deep learning frameworks (TensorFlow, PyTorch).
• Experience with CNNs, ViTs, and other DL architectures.
• Hands-on experience in multi-modal ML and sensor fusion techniques.
• Proficiency in cloud-based AI model deployment (GCP experience preferred).
• Experience with edge AI optimization (NVIDIA Jetson, TensorRT, OpenVINO).
• Strong knowledge of data preprocessing, augmentation, and synthetic data generation.
• Proficiency in model quantization, pruning, and optimization for real-time applications.
• Familiarity with computer vision, object detection, and real-time inference techniques.
• Ability to work with limited datasets, including generating synthetic data (VAEs or similar), data annotation, and augmentation strategies.
• Strong coding skills in Python and C++ with experience in high-performance computing.

Preferred Qualifications:
• Experience: 2-4+ years.
• Experience with MLOps, including CI/CD pipelines, model versioning, and monitoring.
• Knowledge of reinforcement learning techniques.
• Experience working in fast-paced startup environments.
• Prior experience working on AI-driven autonomous systems, robotics, or UAVs.
• Understanding of embedded systems and hardware acceleration for AI workloads.

Benefits:
NOTE: THIS ROLE IS UNDER AEROBOTICS7 INVENTIONS PVT. LTD., AN INDIAN ENTITY. IT IS A REMOTE INDIA-BASED ROLE WITH COMPENSATION ALIGNED TO INDIAN MARKET STANDARDS. WHILE OUR PARENT COMPANY IS US-BASED, THIS POSITION IS FOR CANDIDATES RESIDING AND WORKING IN INDIA.
Competitive startup-level salary and comprehensive benefits package.
Future opportunity for equity options in the company.
Opportunity to work on impactful, cutting-edge technology in a collaborative startup environment.
Professional growth with extensive learning and career development opportunities.
Direct contribution to tangible, real-world impact.
How to Apply: Interested candidates are encouraged to submit their resume along with an (optional) cover letter highlighting their relevant experience and passion for working in a dynamic startup environment. For any questions or further information, feel free to reach out to us directly by emailing us at careers@aerobotics7.com.
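The limited-dataset skill listed above (synthetic data generation and augmentation) can be illustrated generically. A minimal, hypothetical sketch, not A7's actual pipeline, that creates jittered, amplitude-scaled copies of a 1-D sensor trace:

```python
import random

def augment_signal(signal, n_copies=3, noise_std=0.05, scale_range=(0.9, 1.1), seed=7):
    """Return jittered, amplitude-scaled copies of a 1-D sensor trace."""
    rng = random.Random(seed)  # fixed seed so augmentation is reproducible
    copies = []
    for _ in range(n_copies):
        scale = rng.uniform(*scale_range)  # random amplitude scaling per copy
        copies.append([scale * x + rng.gauss(0.0, noise_std) for x in signal])
    return copies

base = [0.0, 0.5, 1.0, 0.5, 0.0]
augmented = augment_signal(base)
print(len(augmented), len(augmented[0]))  # 3 5
```

Real sensor-fusion work would add domain-specific transforms (time warping, channel dropout, simulated sensor noise models), but the shape of the pipeline is the same: many cheap, label-preserving variants per scarce real sample.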
Posted 2 days ago
0 years
0 Lacs
India
On-site
Design and develop high-volume, low-latency applications for mission-critical systems, ensuring top-tier availability and performance. Contribute to all phases of the product development lifecycle. Write well-designed, testable, and efficient code. Ensure designs comply with specifications. Prepare and produce releases of software components. Support continuous improvement by investigating alternate technologies and presenting these for architectural review. Please note that we have requirements for this role in Chennai, Salem, Coimbatore, Tirunelveli, and Madurai.
Posted 2 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Summary
We are looking for experienced Data Modelers to support large-scale data engineering and analytics initiatives. The role involves developing logical and physical data models, working closely with business and engineering teams to define data requirements, and ensuring alignment with enterprise standards.
• Independently complete conceptual, logical, and physical data models for any supported platform, including SQL Data Warehouse, Spark, Databricks Delta Lakehouse, or other cloud data warehousing technologies.
• Govern data design/modelling – documentation of metadata (business definitions of entities and attributes) and construction of database objects, for baseline and investment-funded projects, as assigned.
• Develop a deep understanding of business domains like Customer, Sales, Finance, and Supplier, and the enterprise technology inventory, to craft a solution roadmap that achieves business objectives and maximizes reuse.
• Drive collaborative reviews of data model design, code, data, and security features to drive data product development.
• Show expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical and data lakes; SAP data model.
• Develop reusable data models based on cloud-centric, code-first approaches to data management and data mapping.
• Partner with the data stewards team for data discovery and action by business customers and stakeholders.
• Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting.
• Assist with data planning, sourcing, collection, profiling, and transformation.
• Support data lineage and mapping of source system data to canonical data stores.
• Create Source-to-Target Mappings (STTM) for ETL and BI developers.

Skills needed:
Expertise in data modelling tools (ER/Studio, Erwin, IDM/ARDM models; CPG / Manufacturing / Sales / Finance / Supplier / Customer domains).
Experience with at least one MPP database technology such as Databricks Lakehouse, Redshift, Synapse, Teradata, or Snowflake.
Experience with version control systems like GitHub and deployment & CI tools.
Experience with metadata management, data lineage, and data glossaries is a plus.
Working knowledge of agile development, including DevOps and DataOps concepts.
Working knowledge of SAP data models, particularly in the context of HANA and S/4HANA, and retail data like IRI and Nielsen Retail.
C5i is proud to be an equal opportunity employer. We are committed to equal employment opportunity regardless of race, color, religion, sex, sexual orientation, age, marital status, disability, gender identity, etc. If you have a disability or special need that requires accommodation, please keep us informed at the hiring stages so we can factor in necessary accommodations.
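A Source-to-Target Mapping (STTM), the deliverable named in the responsibilities above, is essentially a column-level lookup from source fields to canonical target fields plus a transform per column. A minimal sketch (all column names here are invented for illustration):

```python
# STTM expressed as data: target column -> (source column, transform).
STTM = {
    "customer_id":   ("cust_no", str.strip),
    "customer_name": ("cust_nm", str.title),
    "country_code":  ("ctry",    str.upper),
}

def apply_sttm(source_row, mapping):
    """Project a raw source record onto the canonical target schema."""
    return {target: transform(source_row[src])
            for target, (src, transform) in mapping.items()}

raw = {"cust_no": " 00042 ", "cust_nm": "acme corp", "ctry": "in"}
print(apply_sttm(raw, STTM))
# {'customer_id': '00042', 'customer_name': 'Acme Corp', 'country_code': 'IN'}
```

In a real engagement the STTM usually lives in a spreadsheet or modelling tool for ETL/BI developers to consume; expressing it as data, as here, is one way to make the mapping directly executable and testable.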
Posted 2 days ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Company:
GSPANN is a global IT services and consultancy provider headquartered in Milpitas, California (U.S.A.). With five global delivery centers across the globe, GSPANN provides digital solutions that support the customer buying journeys of B2B and B2C brands worldwide. With a strong focus on innovation and client satisfaction, GSPANN delivers cutting-edge solutions that drive business success and operational excellence. GSPANN helps retail, finance, manufacturing, and high-technology brands deliver competitive customer experiences and increased revenues through our solution delivery, technologies, practices, and operations for each client. For more information, visit www.gspann.com

Job Position (Title): AI Ops + ML Ops Engineer
Experience Required: 4+ Years
Job Type: Full-time
Number of positions: 3 (1 Senior and 2 Junior)
Location: Hyderabad/Pune/Gurugram

Technical Skill Requirements
Mandatory Skills: MLOps, DevOps, Python, Cloud, AIOps

Role and Responsibilities
Strategic & Leadership:
Architect and lead the implementation of scalable AIOps and MLOps frameworks.
Mentor junior engineers and data scientists on best practices in model deployment and operational excellence.
Collaborate with product managers, SREs, and business stakeholders to align technical strategies with organizational goals.
Define and enforce engineering standards, SLAs, SLIs, and SLOs for ML and AIOps services.
MLOps Focus:
Design and manage ML CI/CD pipelines for training, testing, deployment, and monitoring using tools like Kubeflow, MLflow, or Airflow.
Implement advanced monitoring for model performance (drift, latency, accuracy) and automate retraining workflows.
Lead initiatives on model governance, reproducibility, traceability, and compliance (e.g., FAIR, audit logging).
AIOps Focus:
Develop AI/ML-based solutions for proactive infrastructure monitoring, predictive alerting, and intelligent incident management.
Integrate and optimize observability tools (Prometheus, Grafana, ELK, Dynatrace, Splunk, Datadog) for anomaly detection and root cause analysis.
Automate incident response workflows using playbooks, runbooks, and self-healing mechanisms.
Use statistical methods and ML to analyze logs, metrics, and traces at scale.

Required Skills
4+ years of experience in DevOps, MLOps, or AIOps, with at least 2+ years in a leadership or senior engineering role.
Expert-level proficiency in Python and Bash, and familiarity with Go or Java.
Deep experience with containerization (Docker), orchestration (Kubernetes), and cloud platforms (AWS, GCP, Azure).
Proficient with CI/CD tools and infrastructure-as-code (Terraform, Ansible, Helm).
Strong understanding of ML lifecycle management, model monitoring, and data pipeline orchestration.
Experience deploying and maintaining large-scale observability and telemetry systems.

Preferred Qualifications:
Experience with streaming data platforms: Kafka, Spark, Flink.
Hands-on experience with Service Mesh (Istio/Linkerd) for traffic and security management.
Familiarity with data security, privacy, and compliance standards (e.g., GDPR, HIPAA).
Certifications in AWS/GCP DevOps, Kubernetes, or MLOps are a strong plus.

Why choose GSPANN
At GSPANN, we don’t just serve our clients—we co-create. The GSPANNians are passionate technologists who thrive on solving the toughest business challenges, delivering trailblazing innovations for marquee clients. This collaborative spirit fuels a culture where every individual is encouraged to sharpen their skills, feed their curiosity, and take ownership to learn, experiment, and succeed. We believe in celebrating each other’s successes—big or small—and giving back to the communities we call home. If you’re ready to push boundaries and be part of a close-knit team that’s shaping the future of tech, we invite you to carry forward the baton of innovation with us. Let’s Co-Create the Future—Together.
Discover Your Inner Technologist
Explore and expand the boundaries of tech innovation without the fear of failure.
Accelerate Your Learning
Shape your career while scripting the future of tech. Seize the ample learning opportunities to grow at a rapid pace.
Feel Included
At GSPANN, everyone is welcome. Age, gender, culture, and nationality do not matter here; what matters is YOU.
Inspire and Be Inspired
When you work with the experts, you raise your game. At GSPANN, you’re in the company of marquee clients and extremely talented colleagues.
Enjoy Life
We love to celebrate milestones and victories, big or small. Every so often, we come together as one large GSPANN family.
Give Back
Together, we serve communities. We take steps, small and large, so we can do good for the environment, weaving sustainability and social change into our endeavors.
We invite you to carry forward the baton of innovation in technology with us. Let’s Co-Create
Posted 2 days ago
8.0 years
0 Lacs
Madhya Pradesh, India
On-site
Key Responsibilities:
Provide technical oversight for the selection and deployment of smart meters (single-phase, three-phase, DT, and HT meters).
Ensure compliance of smart meters with applicable technical standards (IS 16444, DLMS/COSEM, BIS, CEA).
Review meter specifications, type test reports, and certifications.
Supervise factory inspections, sample testing, and acceptance procedures.
Monitor installation and commissioning of smart meters at consumer premises and grid interfaces.
Support end-to-end integration of smart meters with Head-End System (HES), Meter Data Acquisition System (MDAS), and Meter Data Management System (MDMS).
Evaluate performance of meters in the field – accuracy, communication, reliability, tamper detection, and data consistency.
Coordinate with vendors, field teams, and utility engineers to resolve technical issues.
Validate data acquisition, latency, event logging, and billing data generation.
Support the preparation of Standard Operating Procedures (SOPs), FAQs, training materials, and field guides for utility staff.

Qualifications & Experience:
Bachelor’s degree in Electrical / Electronics Engineering.
Minimum 8+ years of experience in metering, power distribution, or AMI projects.
Strong knowledge of smart meter hardware and firmware, communication protocols (DLMS/COSEM), and utility infrastructure.
Experience with large-scale smart meter rollouts, preferably under RDSS or similar schemes.
Familiarity with meter testing standards (IS/IEC), tamper detection mechanisms, and calibration procedures.

Skills & Competencies:
Technical expertise in smart meters, metering communication protocols, and data analytics.
Proficient in interpreting technical drawings, specifications, and test reports.
Strong analytical and troubleshooting skills.
Excellent communication, documentation, and coordination abilities.
Posted 2 days ago
15.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
We’re iRage — a team of quants, engineers, and traders who’ve spent the last 15 years building cutting-edge algorithmic trading systems. Known for its cutting-edge platforms and quant-backed strategies, iRage has become a go-to for traders and institutions looking for low-latency execution and smart risk management.

Key wins:
✅ Pioneers in Algo trading—trusted by top proprietary firms & HFT players.
✅ Top-ranked derivatives broker—consistently high exchange rankings.
✅ Built one of India’s largest retail Algo communities.

Now, we’re taking that expertise to create consumer-first trading products, and we need a Product Designer to craft intuitive, powerful experiences for traders. If you love fintech, complex workflows, and designing for real-time data, this is your role.

Role Summary
We are looking for a designer who combines a deep understanding of the trading industry with exceptional UI/UX design and software development acumen. As a product designer, you will be responsible for conceptualizing, designing, and prototyping innovative trading platform features that enhance user experience and drive engagement. You will bridge the gap between business requirements, user needs, and technical feasibility, working closely with developers and traders to deliver exceptional products.

Responsibilities
Own end-to-end design for our trading platform—from research and wireframes to high-fidelity UI and prototyping.
Turn complex trading concepts (order books, charting, execution workflows) into clean, user-friendly interfaces.
Work directly with traders, engineers, and quants to bridge the gap between financial logic and great UX.
Design for speed and precision—every millisecond and pixel matters in trading.
Test, iterate, and refine based on real user behavior (not just best practices).
Push boundaries in data visualization—how do you make real-time market data feel effortless?
Qualifications
A product-minded designer with 3+ years of experience in UI/UX, ideally in fintech, trading, or data-heavy apps.
Obsessed with details and workflows—you geek out on order types, charting tools, or execution mechanics.
Fluent in Figma, prototyping, and working closely with engineers (bonus if you’ve touched React or front-end code).
Comfortable in fast-moving, technical environments—you can debate APIs with devs and trading strategies with quants.
Have a portfolio that shows how you’ve simplified complexity (extra points for trading/finance work).

Why You’ll Love Working Here
We’re a team that hustles—plain and simple. But we also believe life outside work matters. No cubicles, no suits—just great people doing great work in a space built for comfort and creativity. Here’s what we offer:
💰 Competitive salary – Get paid what you’re worth.
🌴 Generous paid time off – Recharge and come back sharper.
🌍 Work with the best – Collaborate with top-tier global talent.
✈️ Adventure together – Annual offsites (mostly outside India) and regular team outings.
🎯 Performance rewards – Multiple bonuses for those who go above and beyond.
🏥 Health covered – Comprehensive insurance so you’re always protected.
⚡ Fun, not just work – On-site sports, games, and a lively workspace.
🧠 Learn and lead – Regular knowledge-sharing sessions led by your peers.
📚 Annual education stipend – Take any external course, bootcamp, or certification that makes you better at your craft.
🏋️ Stay fit – Discounted gym memberships to keep you at your best.
🚚 Relocation support – Smooth move? We’ve got your back.
🏆 Friendly competition – Work challenges and extracurricular contests to keep things exciting.
We work hard, play hard, and grow together. Join us.
(P.S. We hire for talent, not pedigree—but if you’ve worked at a top tech co or fintech startup, we’d love to hear how you’ve shipped great products.)
Posted 2 days ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Minimum qualifications:
Bachelor's degree or equivalent practical experience.
8 years of experience in software development, with a focus on building distributed, high-throughput, and latency-sensitive systems.
5 years of experience testing and launching software products.
3 years of experience in technical leadership, leading and growing software engineering teams.

Preferred qualifications:
Master’s degree or PhD in Engineering, Computer Science, or a related technical field.
3 years of experience in a technical leadership role leading project teams and setting technical direction.
Experience with personalization and ML serving infrastructure.
Experience with e-commerce search and discovery systems.

About The Job
Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google’s needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities and be enthusiastic to take on new problems across the full-stack as we continue to push technology forward. With your technical expertise you will manage project priorities, deadlines, and deliverables. You will design, develop, test, deploy, maintain, and enhance software solutions. At YouTube, we believe that everyone deserves to have a voice, and that the world is a better place when we listen, share, and build community through our stories.
We work together to give everyone the power to share their story, explore what they love, and connect with one another in the process. Working at the intersection of cutting-edge technology and boundless creativity, we move at the speed of culture with a shared goal to show people the world. We explore new ideas, solve real problems, and have fun — and we do it all together.

Responsibilities
Collaborate with Google Commerce and other YouTube teams to define the product direction, strategy, and roadmap for YouTube Shopping infrastructure and serving systems, aligning with the YouTube technology roadmap.
Work closely with YouTube Shopping UTLs and YouTube Infra leads to build the next-generation YouTube Shopping entity model and serving infrastructure.
Build a breadth of understanding of all our upstream and downstream systems to influence the technical investments needed in their system to make YouTube Shopping successful.
Be an active participant in other engineering teams' design reviews to guide the wider YouTube Shopping team in the right direction with respect to serving of YouTube Shopping entities.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Posted 2 days ago
5.0 years
18 - 25 Lacs
Hyderabad, Telangana, India
On-site
Role: Senior .NET Engineer
Experience: 5-12 Years
Location: Hyderabad
This is a WFO (Work from Office) role.
Mandatory Skills: .NET Core, C#, Kafka, CI/CD pipelines, observability tools, orchestration tools, cloud microservices

Interview Process
First round - Online test
Second round - Virtual technical discussion
Manager/HR round - Virtual discussion

Company Overview
It is a globally recognized leader in the fintech industry, delivering cutting-edge trading solutions for professional traders worldwide. With over 15 years of excellence, a robust international presence, and a team of over 300 skilled professionals, we continually push the boundaries of technology to remain at the forefront of financial innovation. Committed to fostering a collaborative and dynamic environment, our company prioritizes technical excellence, innovation, and continuous growth for our team. Join our agile-based team to contribute to the development of advanced trading platforms in a rapidly evolving industry.

Position Overview
We are seeking a highly skilled Senior .NET Engineer to play a pivotal role in the design, development, and optimization of highly scalable and performant domain-driven microservices for our real-time trading applications. This role demands advanced expertise in multi-threaded environments, asynchronous programming, and modern software design patterns such as Clean Architecture and Vertical Slice Architecture. As part of an Agile squad, you will collaborate with cross-functional teams to deliver robust, secure, and efficient systems, adhering to the highest standards of quality, performance, and reliability. This position is ideal for engineers who excel in building low-latency, high-concurrency systems and have a passion for advancing fintech solutions.

Key Responsibilities
System Design and Development
Architect and develop real-time, domain-driven microservices using .NET Core to ensure scalability, modularity, and performance.
Leverage multi-threaded programming techniques and asynchronous programming paradigms to build systems optimized for high-concurrency workloads.
Implement event-driven architectures to enable seamless communication between distributed services, leveraging tools such as Kafka or AWS SQS.
System Performance and Optimization
Optimize applications for low latency and high throughput in trading environments, addressing challenges related to thread safety, resource contention, and parallelism.
Design fault-tolerant systems capable of handling large-scale data streams and real-time events.
Proactively monitor and resolve performance bottlenecks using advanced observability tools and techniques.
Architectural Contributions
Contribute to the design and implementation of scalable, maintainable architectures, including Clean Architecture, Vertical Slice Architecture, and CQRS.
Collaborate with architects and stakeholders to align technical solutions with business requirements, particularly for trading and financial systems.
Employ advanced design patterns to ensure robustness, fault isolation, and adaptability.
Agile Collaboration
Participate actively in Agile practices, including Scrum ceremonies such as sprint planning, daily stand-ups, and retrospectives.
Collaborate with Product Owners and Scrum Masters to refine technical requirements and deliver high-quality, production-ready software.
Code Quality and Testing
Write maintainable, testable, and efficient code adhering to test-driven development (TDD) methodologies.
Conduct detailed code reviews, ensuring adherence to best practices in software engineering, coding standards, and system architecture.
Develop and maintain robust unit, integration, and performance tests to uphold system reliability and resilience.
Monitoring and Observability
Integrate OpenTelemetry to enhance system observability, enabling distributed tracing, metrics collection, and log aggregation.
Collaborate with DevOps teams to implement real-time monitoring dashboards using tools such as Prometheus, Grafana, and Elastic (Kibana).
Ensure systems are fully observable, with actionable insights into performance and reliability metrics.

Required Technical Expertise and Skills
5+ years of experience in software development, with a strong focus on .NET Core and C#.
Deep expertise in multi-threaded programming, asynchronous programming, and handling concurrency in distributed systems.
Extensive experience in designing and implementing domain-driven microservices with advanced architectural patterns like Clean Architecture or Vertical Slice Architecture.
Strong understanding of event-driven systems, with knowledge of messaging frameworks such as Kafka, AWS SQS, or RabbitMQ.
Proficiency in observability tools, including OpenTelemetry, Prometheus, Grafana, and Elastic (Kibana).
Hands-on experience with CI/CD pipelines, containerization using Docker, and orchestration tools like Kubernetes.
Expertise in Agile methodologies under Scrum practices.
Solid knowledge of Git and version control best practices.

Beneficial Skills
Familiarity with Saga patterns for managing distributed transactions.
Experience in trading or financial systems, particularly with low-latency, high-concurrency environments.
Advanced database optimization skills for relational databases such as SQL Server.

Certifications and Education
Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field.
Relevant certifications in software development, system architecture, or AWS technologies are advantageous.

Why Join?
Exceptional team building and corporate celebrations.
Be part of a high-growth, fast-paced fintech environment.
Flexible working arrangements and supportive culture.
Opportunities to lead innovation in the online trading space.
Skills: observability tools, CI/CD pipelines, orchestration tools, agile methodologies, Elastic (Kibana), Kafka, Grafana, cloud microservices, Git, event-driven architectures, CQRS, Kubernetes, containerization using Docker, test-driven development (TDD), asynchronous programming, AWS SQS, .NET Core, OpenTelemetry, C#, Clean Architecture, Vertical Slice Architecture, multi-threaded programming, Prometheus
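The event-driven, asynchronous pattern this posting describes (services communicating via Kafka or AWS SQS) is language-agnostic even though the role is .NET-centric. A minimal Python sketch of the same producer/consumer shape, using an in-process asyncio.Queue as a stand-in for a real message broker:

```python
import asyncio

async def order_producer(queue):
    # Publish a stream of order events (stand-in for a Kafka topic / SQS queue).
    for order_id in range(5):
        await queue.put({"order_id": order_id, "side": "BUY"})
    await queue.put(None)  # sentinel: no more events

async def order_consumer(queue, processed):
    # Consume events asynchronously without blocking the event loop.
    while (event := await queue.get()) is not None:
        processed.append(event["order_id"])

async def main():
    queue = asyncio.Queue(maxsize=2)  # bounded queue exerts backpressure on the producer
    processed = []
    await asyncio.gather(order_producer(queue), order_consumer(queue, processed))
    return processed

print(asyncio.run(main()))  # [0, 1, 2, 3, 4]
```

The bounded queue is the key design point: when the consumer lags, `put` suspends the producer instead of letting events pile up, which is the same backpressure behavior a broker's consumer-group offsets provide in production.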
Posted 2 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Machine Learning Lead Engineer

Job Summary:
We are looking for a highly skilled Machine Learning Lead Engineer to lead the development and deployment of production-grade ML solutions. You will be responsible for overseeing the engineering efforts of the ML team, ensuring scalable model development, robust data pipelines, and seamless integration with production systems. This is a key technical leadership role for someone passionate about driving machine learning initiatives from design to deployment, while mentoring a team of engineers and collaborating with cross-functional stakeholders.

Roles and Responsibilities:
Technical Leadership & Delivery
Lead the end-to-end development of ML models and pipelines—from data preparation and feature engineering to model training, validation, and deployment.
Translate business requirements into scalable ML solutions, ensuring performance, maintainability, and production readiness.
Supervise and support team members in their project work; ensure adherence to coding and MLOps best practices.
Model Development & Optimization
Design and implement machine learning models (e.g., classification, regression, NLP, recommendation) using TensorFlow, PyTorch, or Scikit-learn.
Optimize models for accuracy, latency, and efficiency through feature selection, hyperparameter tuning, and evaluation metrics.
Conduct model performance analysis and guide the team in continuous improvement strategies.
MLOps & Productionization
Implement robust ML pipelines using MLflow, Kubeflow, or similar tools for CI/CD, monitoring, and lifecycle management.
Work closely with DevOps and platform teams to containerize models and deploy them on cloud platforms like AWS SageMaker, GCP Vertex AI, or Azure ML.
Ensure monitoring, alerting, and retraining strategies are in place for models in production.
Team Collaboration & Mentorship
Guide and mentor a team of ML engineers and junior data scientists.
Collaborate closely with architects, data engineers, and product teams to ensure seamless integration of ML components.
Contribute to code reviews, design sessions, and knowledge-sharing initiatives.

Skills and Qualifications:
Must-Have Skills
5+ years of experience in ML engineering or applied data science, with a strong track record of delivering production-grade ML systems.
Deep expertise in Python, data structures, algorithms, and ML frameworks (TensorFlow, PyTorch, Scikit-learn).
Hands-on experience with data pipeline tools (Airflow, Spark, Kafka, PySpark) and MLOps platforms (MLflow, Kubeflow).
Strong knowledge of cloud platforms (AWS, GCP, Azure) and containerization tools (Docker, Kubernetes).
Solid understanding of model deployment, monitoring, and lifecycle management.
Good-to-Have Skills
Prior experience leading small to mid-sized technical teams.
Exposure to business intelligence or data analytics.
Cloud certifications (e.g., AWS Certified ML Specialty).
Familiarity with agile methodologies and project tracking tools like Azure DevOps.
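The monitoring-and-retraining responsibility above typically starts from a drift check. A minimal illustrative sketch (the threshold and function name are assumptions, not this employer's method) comparing a live window's feature mean against the training distribution:

```python
from statistics import mean, stdev

def feature_drift(train_values, live_values, threshold=2.0):
    """True when the live mean shifts by more than `threshold` training stdevs."""
    mu, sigma = mean(train_values), stdev(train_values)
    if sigma == 0:
        return mean(live_values) != mu  # degenerate training data: any shift is drift
    return abs(mean(live_values) - mu) / sigma > threshold

train = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05]
print(feature_drift(train, [1.0, 1.02, 0.98]))  # False: live window matches training
print(feature_drift(train, [2.0, 2.1, 1.9]))    # True: mean shifted, flag for retraining
```

In an MLflow/Kubeflow pipeline this kind of check would run per feature on a schedule, with a True result triggering an alert or an automated retraining run rather than a print.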
Posted 2 days ago