6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Are you passionate about building and maintaining large-scale production systems that support advanced data science and machine learning applications? Do you want to join a team at the heart of NVIDIA's data-driven decision-making culture? If so, we have a great opportunity for you!

NVIDIA is seeking a Senior Site Reliability Engineer (SRE) for the Data Science & ML Platform(s) team. The role involves designing, building, and maintaining services that enable real-time data analytics, streaming, data lakes, observability, and ML/AI training and inferencing. The responsibilities include implementing software and systems engineering practices to ensure high efficiency and availability of the platform, as well as applying SRE principles to improve production systems and optimize service SLOs. Additionally, collaborating with our customers to plan and implement changes to the existing system, while monitoring capacity, latency, and performance, is part of the role.

To succeed in this position, a strong background in SRE practices, systems, networking, coding, capacity management, cloud operations, continuous delivery and deployment, and open-source cloud-enabling technologies like Kubernetes and OpenStack is required. A deep understanding of the challenges and standard methodologies of running large-scale distributed systems in production, solving complex issues, automating repetitive tasks, and proactively identifying potential outages is also necessary. Furthermore, excellent communication and collaboration skills, and a culture of diversity, intellectual curiosity, problem solving, and openness are essential.

As a Senior SRE at NVIDIA, you will have the opportunity to work on innovative technologies that power the future of AI and data science, and be part of a dynamic and supportive team that values learning and growth. The role provides the autonomy to work on meaningful projects with the support and mentorship needed to succeed, and contributes to a culture of blameless postmortems, iterative improvement, and risk-taking. If you are seeking an exciting and rewarding career that makes a difference, we invite you to apply now!

What You’ll Be Doing
Develop software solutions to ensure reliability and operability of large-scale systems supporting mission-critical use cases.
Gain a deep understanding of our system operations, scalability, interactions, and failures to identify improvement opportunities and risks.
Create tools and automation to reduce operational overhead and eliminate manual tasks.
Establish frameworks, processes, and standard methodologies to enhance operational maturity and team efficiency, and accelerate innovation.
Define meaningful and actionable reliability metrics to track and improve system and service reliability.
Oversee capacity and performance management to facilitate infrastructure scaling across public and private clouds globally.
Build tools to improve our service observability for faster issue resolution.
Practice sustainable incident response and blameless postmortems.

What We Need To See
6+ years of experience in SRE, cloud platforms, or DevOps with large-scale microservices in production environments.
Master's or Bachelor's degree in Computer Science, Electrical Engineering, or Computer Engineering, or equivalent experience.
Strong understanding of SRE principles, including error budgets, SLOs, and SLAs.
Proficiency in incident, change, and problem management processes.
Skilled in problem-solving, root cause analysis, and optimization.
Experience with streaming data infrastructure services, such as Kafka and Spark.
Expertise in building and operating large-scale observability platforms for monitoring and logging (e.g., ELK, Prometheus).
Proficiency in programming languages such as Python, Go, Perl, or Ruby.
Hands-on experience with scaling distributed systems in public, private, or hybrid cloud environments.
Experience in deploying, supporting, and supervising services, platforms, and application stacks.

Ways To Stand Out From The Crowd
Experience operating large-scale distributed systems with strong SLAs.
Excellent coding skills in Python and Go, and extensive experience in operating data platforms.
Knowledge of CI/CD systems, such as Jenkins and GitHub Actions.
Familiarity with Infrastructure as Code (IaC) methodologies and tools.
Excellent interpersonal skills for identifying and communicating data-driven insights.

NVIDIA leads the way in groundbreaking developments in Artificial Intelligence, High-Performance Computing, and Visualization. The GPU, our invention, serves as the visual cortex of modern computers and is at the heart of our products and services. Our work opens up new universes to explore, enables amazing creativity and discovery, and powers what were once science fiction inventions, from artificial intelligence to autonomous cars. NVIDIA is looking for exceptional people like you to help us accelerate the next wave of artificial intelligence. JR1999109
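The error budgets and SLOs named in the requirements above have simple arithmetic behind them. As a hedged illustration (the 99.9% target and 30-day window are assumptions for the example, not taken from the posting):

```python
# Illustrative error-budget arithmetic for an availability SLO.
# A 99.9% SLO over a 30-day window leaves 0.1% of the window as the
# "error budget": the downtime a service may accrue before the SLO
# is breached.

def error_budget_minutes(slo: float, window_days: int) -> float:
    """Minutes of allowed unavailability for an availability SLO."""
    window_minutes = window_days * 24 * 60
    return (1.0 - slo) * window_minutes

def budget_remaining(slo: float, window_days: int, downtime_minutes: float) -> float:
    """Fraction of the error budget still unspent (negative = breached)."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget

# 99.9% over 30 days leaves 43.2 minutes of budget
print(round(error_budget_minutes(0.999, 30), 1))    # 43.2
print(round(budget_remaining(0.999, 30, 21.6), 3))  # 0.5
```

A remaining-budget figure like this is what typically gates release velocity: a team with budget left ships features, a team that has burned it prioritizes reliability work.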
Posted 6 days ago
4.0 years
0 Lacs
India
On-site
Aviso is the AI compass that guides sales and go-to-market teams to close more deals, accelerate growth, and find their Revenue True North. Aviso AI delivers revenue intelligence, drives informed team-wide actions and course corrections, and gives precise guidance so sellers and teams don't get lost in the fog of CRM and can augment themselves with predictive AI. With demonstrated results across Fortune 500 companies and industry leaders such as Dell, Splunk, Nuance, Elastic, GitHub, and RingCentral, Aviso works at the frontier of predictive AI to help teams close more deals and drive more revenue. Aviso AI has generated 305 billion insights, analyzed $180B in pipeline, and helped customers win $100B in deals. Companies use Aviso to drive more revenue, achieve goals faster, and win in bold, new frontiers. By using Aviso's guided-selling tools instead of conventional CRM systems, sales teams close 20% more deals with 98%+ accuracy, and reduce spending on non-core CRM licenses by 30%.

Job Description:
We are looking for a skilled and motivated Data Engineer to join our growing team, focused on building fast, scalable, and reliable data platforms that power insights across the organization. If you enjoy working with large-scale data systems and solving complex challenges, this is the role for you.

What You’ll Do:
Grow our analytics capabilities by building faster and more reliable tools to handle petabytes of data daily.
Brainstorm and develop new platforms to serve data to users in all shapes and forms, with low latency and horizontal scalability.
Troubleshoot and diagnose problems across the entire technical stack.
Design and develop real-time event pipelines for data ingestion and real-time dashboards.
Develop complex and efficient functions to transform raw data sources into powerful, reliable components of our data lake.
Design and implement new components using emerging technologies in the Hadoop ecosystem, ensuring the successful execution of various projects.
Skills That Will Help You Succeed in This Role:
Strong hands-on experience (4+ years) with Apache Spark, preferably PySpark.
Excellent programming and debugging skills in Python.
Experience with scripting languages such as Python, Bash, etc.
Solid experience with databases such as SQL, MongoDB, etc.
Good to have: experience with AWS and cloud technologies such as Amazon S3.
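As an illustration of the real-time event pipelines mentioned above, here is a framework-free sketch of a tumbling-window aggregation, the core operation behind real-time dashboards. The event fields and window size are invented for the example; a production pipeline would express the same logic in PySpark's Structured Streaming rather than plain Python.

```python
# Count events per fixed, non-overlapping ("tumbling") time window.
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Count events per (window_start, event_type) bucket.

    events: iterable of (timestamp_seconds, event_type) pairs.
    """
    counts = defaultdict(int)
    for ts, event_type in events:
        # Snap each timestamp down to the start of its window.
        window_start = ts - (ts % window_seconds)
        counts[(window_start, event_type)] += 1
    return dict(counts)

events = [
    (0, "click"), (10, "view"), (59, "click"),   # first window [0, 60)
    (61, "click"), (119, "view"),                # second window [60, 120)
    (130, "view"),                               # third window [120, 180)
]
print(tumbling_window_counts(events))
# {(0, 'click'): 2, (0, 'view'): 1, (60, 'click'): 1, (60, 'view'): 1, (120, 'view'): 1}
```

The same grouping key (window start plus event type) is what a `groupBy(window(...), ...)` expresses in Spark, just distributed across executors.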
Posted 6 days ago
3.0 years
0 Lacs
India
Remote
Job Role: Full Stack Software Engineer with AI
Location: India (Remote)
We are seeking an innovative Full Stack Engineer to build AI-inclusive applications. Join our team, dedicated to developing cutting-edge applications leveraging Retrieval-Augmented Generation (RAG) and other AI technologies. You will build end-to-end applications integrating RAG-based AI solutions.
Title: Full Stack Engineer (AI/RAG Applications)
Key Responsibilities
• Design, build, and maintain scalable full-stack solutions, integrating sophisticated AI models and advanced data retrieval mechanisms into intuitive, scalable, and responsive applications.
• Develop responsive, intuitive user interfaces leveraging modern JavaScript frameworks (React, Angular, Vue) to deliver seamless AI-driven user experiences.
• Build robust backend APIs and microservices that interface with AI models, vector databases, and retrieval engines.
• Integrate Large Language Models (LLMs), embeddings, vector databases, and search algorithms into applications.
• Collaborate closely with AI/ML specialists, data scientists, product owners, and UX designers to translate complex AI capabilities into user-friendly interfaces, define requirements, optimize AI integration, and deliver innovative features.
• Create and manage RESTful and GraphQL APIs to facilitate efficient, secure data exchange between frontend components, backend services, and AI engines.
• Ensure robust security measures, scalability, performance, and optimization of AI-integrated applications.
• Participate actively in code reviews, technical design discussions, architecture decisions, and Agile ceremonies, ensuring best practices in software engineering and AI integration.
• Troubleshoot, debug, and enhance performance of both frontend and backend systems, focusing on AI latency, accuracy, and scalability.
• Continuously explore emerging AI trends and technologies, proactively recommending improvements to enhance product capabilities.
Qualifications
• Bachelor’s degree in Computer Science, Engineering, or a related technical discipline.
• 3+ years of experience in full-stack software development, with demonstrated experience integrating AI/ML services.
• Strong proficiency in frontend technologies, including HTML, CSS, and JavaScript, and frameworks such as React, Angular, or Vue.js.
• Backend development expertise in Node.js, Python, Java, or .NET, particularly building RESTful APIs and microservices.
• Hands-on experience integrating AI and NLP models, including familiarity with Retrieval-Augmented Generation (RAG), OpenAI APIs, LangChain, or similar frameworks, and NLP-based Large Language Models (e.g., GPT, BERT, LLaMA).
• Familiarity with RAG architectures, vector databases (e.g., Pinecone, Chroma, Weaviate, Milvus), and embedding techniques.
• Proficiency with relational and NoSQL databases (PostgreSQL, MongoDB, etc.) and a solid understanding of data modeling best practices.
• Experience with RESTful APIs, GraphQL, microservices, and cloud-native architecture (AWS, Azure, GCP).
• Experience with version control systems (Git), CI/CD pipelines, and containerization (Docker).
Thanks and Regards,
Saurabh Kumar | Lead Recruiter
saurabh.yadav@ampstek.com | www.ampstek.com
https://www.linkedin.com/in/saurabh-kumar-yadav-518927a8/
Call: +1 609-360-2671
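For readers unfamiliar with the Retrieval-Augmented Generation (RAG) pattern this role centers on, a minimal sketch of the retrieval step follows: embed the query, rank stored chunks by cosine similarity, and build a prompt from the top matches. The bag-of-words embed() is a deliberately crude stand-in for a real embedding model (OpenAI, sentence-transformers), the chunk texts are invented, and a real system would store vectors in a database like Pinecone or Weaviate.

```python
# Toy retrieval step of a RAG pipeline, stdlib only.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Hypothetical stand-in: bag-of-words "embedding".
    return Counter(text.lower().replace(",", " ").replace(".", " ").split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, chunks: list, k: int = 2) -> list:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "Refunds are processed within 5 business days.",
    "Our office is closed on public holidays.",
    "To request a refund, open a support ticket.",
]
context = retrieve("how do I get a refund", chunks)
# The retrieved chunks are stuffed into the LLM prompt as grounding context.
prompt = "Answer using only this context:\n" + "\n".join(context) + "\nQ: how do I get a refund"
print(context)
```

The generation step (not shown) simply sends `prompt` to an LLM; the retrieval step is what keeps answers grounded in the indexed documents.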
Posted 6 days ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
At Dario, Every Day is a New Opportunity to Make a Difference. We are on a mission to make better health easy. Every day our employees contribute to this mission and help hundreds of thousands of people around the globe improve their health. How cool is that? We are looking for passionate, smart, and collaborative people who have a desire to do something meaningful and impactful in their career. We are looking for a talented Senior Software Developer to take responsibility for DarioHealth solutions and products. As a senior backend developer, you will join a growing Agile team of experienced developers building production applications, backend services, data solutions, and platform infrastructure.

Responsibilities
Develop high-scale cloud-based solutions in the health area, using cutting-edge technologies.
Take part in the design and implementation of low-latency, high-availability, high-performance services.
Work in a very dynamic environment that provides the ability to learn and implement new technologies.
Create RESTful APIs that provide unprecedented access to data via client apps.
Produce efficient, fully tested, and documented code.
Be part of a talented and motivated Agile team; a commitment to collaborative problem solving, sophisticated design, and the creation of quality products is therefore essential.

Requirements:
4+ years’ experience in back-end development.
2+ years in NodeJS, JavaScript ES6, TypeScript.
Expertise in using AI development tools.
Experience in MongoDB, PostgreSQL, MySQL, or equivalent.
Strong experience with creating REST and RESTful services.
Strong understanding of microservices, event-driven architectures, serverless and container technologies (Lambda, Docker), and container orchestration platforms such as Kubernetes, OpenShift, or equivalent.
Familiarity with CI/CD pipelines and related tools for unit testing (e.g. JUnit), static and dynamic code scanning (e.g. AppScan, Fortify), and build tools such as Jenkins.
Familiarity with AWS SDKs.
Experience with AWS services such as EKS, RDS, and API Gateway.
Experience with Google Cloud and Firebase services.
AWS Certified Developer/Solutions Architect - Big Advantage.
Experience scaling up B2B2C and B2C solutions - Big Advantage.

DarioHealth promotes diversity of thought, culture and background, which connects the entire Dario team. We believe that every member on our team enriches our diversity by exposing us to a broad range of ways to understand and engage with the world, identify challenges, and to discover, design and deliver solutions. We are passionate about building and sustaining inclusive and equitable working and learning environments for all people, and do not discriminate against any employee or job candidate.
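The event-driven architectures listed in the requirements can be sketched minimally in-process. The event name and payload fields below are hypothetical, and a real deployment would publish through an external broker (e.g. Kafka or SNS/SQS) rather than an in-memory dict:

```python
# Minimal in-process publish/subscribe event bus.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_name, handler):
        self._subscribers[event_name].append(handler)

    def publish(self, event_name, payload):
        # Every handler registered for this event gets the payload;
        # the publisher knows nothing about the consumers.
        for handler in self._subscribers[event_name]:
            handler(payload)

bus = EventBus()
audit_log = []
bus.subscribe("reading.recorded", lambda p: audit_log.append(p["user_id"]))
bus.subscribe("reading.recorded", lambda p: print("notify", p["user_id"]))

bus.publish("reading.recorded", {"user_id": "u42", "glucose_mg_dl": 110})
print(audit_log)  # ['u42']
```

The decoupling is the point: new consumers (analytics, notifications, auditing) subscribe without the producer changing, which is what makes the pattern scale across microservices.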
Posted 6 days ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Are you passionate about building and maintaining large-scale production systems that support advanced data science and machine learning applications? Do you want to join a team at the heart of NVIDIA's data-driven decision-making culture? If so, we have a great opportunity for you!

NVIDIA is seeking a Senior Site Reliability Engineer (SRE) for the Data Science & ML Platform(s) team. The role involves designing, building, and maintaining services that enable real-time data analytics, streaming, data lakes, observability, and ML/AI training and inferencing. The responsibilities include implementing software and systems engineering practices to ensure high efficiency and availability of the platform, as well as applying SRE principles to improve production systems and optimize service SLOs. Additionally, collaborating with our customers to plan and implement changes to the existing system, while monitoring capacity, latency, and performance, is part of the role.

To succeed in this position, a strong background in SRE practices, systems, networking, coding, capacity management, cloud operations, continuous delivery and deployment, and open-source cloud-enabling technologies like Kubernetes and OpenStack is required. A deep understanding of the challenges and standard methodologies of running large-scale distributed systems in production, solving complex issues, automating repetitive tasks, and proactively identifying potential outages is also necessary. Furthermore, excellent communication and collaboration skills, and a culture of diversity, intellectual curiosity, problem solving, and openness are essential.

As a Senior SRE at NVIDIA, you will have the opportunity to work on innovative technologies that power the future of AI and data science, and be part of a dynamic and supportive team that values learning and growth. The role provides the autonomy to work on meaningful projects with the support and mentorship needed to succeed, and contributes to a culture of blameless postmortems, iterative improvement, and risk-taking. If you are seeking an exciting and rewarding career that makes a difference, we invite you to apply now!

What You’ll Be Doing
Develop software solutions to ensure reliability and operability of large-scale systems supporting mission-critical use cases.
Gain a deep understanding of our system operations, scalability, interactions, and failures to identify improvement opportunities and risks.
Create tools and automation to reduce operational overhead and eliminate manual tasks.
Establish frameworks, processes, and standard methodologies to enhance operational maturity and team efficiency, and accelerate innovation.
Define meaningful and actionable reliability metrics to track and improve system and service reliability.
Oversee capacity and performance management to facilitate infrastructure scaling across public and private clouds globally.
Build tools to improve our service observability for faster issue resolution.
Practice sustainable incident response and blameless postmortems.

What We Need To See
6+ years of experience in SRE, cloud platforms, or DevOps with large-scale microservices in production environments.
Master's or Bachelor's degree in Computer Science, Electrical Engineering, or Computer Engineering, or equivalent experience.
Strong understanding of SRE principles, including error budgets, SLOs, and SLAs.
Proficiency in incident, change, and problem management processes.
Skilled in problem-solving, root cause analysis, and optimization.
Experience with streaming data infrastructure services, such as Kafka and Spark.
Expertise in building and operating large-scale observability platforms for monitoring and logging (e.g., ELK, Prometheus).
Proficiency in programming languages such as Python, Go, Perl, or Ruby.
Hands-on experience with scaling distributed systems in public, private, or hybrid cloud environments.
Experience in deploying, supporting, and supervising services, platforms, and application stacks.

Ways To Stand Out From The Crowd
Experience operating large-scale distributed systems with strong SLAs.
Excellent coding skills in Python and Go, and extensive experience in operating data platforms.
Knowledge of CI/CD systems, such as Jenkins and GitHub Actions.
Familiarity with Infrastructure as Code (IaC) methodologies and tools.
Excellent interpersonal skills for identifying and communicating data-driven insights.

NVIDIA leads the way in groundbreaking developments in Artificial Intelligence, High-Performance Computing, and Visualization. The GPU, our invention, serves as the visual cortex of modern computers and is at the heart of our products and services. Our work opens up new universes to explore, enables amazing creativity and discovery, and powers what were once science fiction inventions, from artificial intelligence to autonomous cars. NVIDIA is looking for exceptional people like you to help us accelerate the next wave of artificial intelligence. JR1999109
Posted 6 days ago
14.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Senior Site Reliability Engineer (SRE) – Azure Focused
Location: Pune
Experience: 7–14 Years
Notice Period: Immediate to 30 Days

Key Responsibilities
Ensure availability, latency, performance, and efficiency of global eCommerce sites.
Design and develop E2E observability dashboards and tooling.
Maintain error budgets, meet SLOs, and drive incident response automation.
Collaborate with engineering teams to build highly reliable systems.
Drive proactive monitoring, root cause analysis (RCA), and system optimization.
Build tools to improve incident management and software delivery processes.
Optimize cloud infrastructure for performance and cost, primarily in Azure.
Promote observability best practices and help define instrumentation standards.

Required Skills
7–14 years in Site Reliability Engineering or DevOps.
Experience supporting cloud production environments (Azure preferred).
Expertise with monitoring tools: Splunk, Dynatrace, Datadog, Grafana, New Relic.
Strong scripting skills – Python preferred (Shell acceptable).
Hands-on with CI/CD tools – GitLab, Jenkins, Azure DevOps, etc.
Proficiency in Kubernetes, Docker, Terraform, and Ansible.
Knowledge of configuration management – Ansible, Chef, or AWS CodeDeploy.
Proven troubleshooting skills with a strong ownership mindset.
Passionate about automation, observability, and platform reliability.
Posted 6 days ago
3.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Responsibilities
Write reusable, testable, and efficient code.
Design and implement low-latency, high-availability, and performant applications.
Design and create RESTful APIs for internal and partner consumption.
Implement security and data protection.
Debug code on the platform (written by self or others) to find the root cause of any ongoing issues and rectify them.
Optimize database queries, and design and implement scalable database schemas that represent and support business processes.
Implement web applications in Python, SQL, JavaScript, HTML, and CSS.
Provide technical leadership to teammates through coaching and mentorship.
Delegate tasks and set deadlines.
Monitor team performance and report on it.
Collaborate with other software developers and business analysts to plan, design, and develop applications.
Maintain client relationships and ensure Company deliverables meet the highest expectations of the client.

Qualification & Skills
Mandatory
3+ years' experience in Django/Flask.
Solid database skills in relational databases.
Knowledge of how to build and use RESTful APIs.
Strong knowledge of version control.
Hands-on experience working on Linux systems.
Familiarity with ORM (Object Relational Mapper) libraries; experience with SQLAlchemy is a plus.
Knowledge of Redis.
Strong understanding of peer-review best practices.
Hands-on experience in deployment processes.

Good to Have
Proficiency in AWS, Azure, or GCP (any one).
Experience with Docker.
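As a small illustration of the database-query-optimization and schema-design responsibilities above, here is a sketch using the stdlib sqlite3 module; the orders table and index are invented for the example. The point is that an index on the filtered column turns a full table scan into an index lookup, which EXPLAIN QUERY PLAN makes visible.

```python
# Schema design + query optimization sketch with stdlib sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL,
        total_cents INTEGER NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO orders (customer_id, total_cents) VALUES (?, ?)",
    [(i % 100, i * 10) for i in range(1000)],
)
# Without this index, the WHERE clause below would scan all 1000 rows.
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = ?", (7,)
).fetchone()
print(plan[-1])  # plan detail names idx_orders_customer, i.e. an index search

total = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE customer_id = ?", (7,)
).fetchone()[0]
print(total)  # 10
```

The same habit (inspect the plan before and after adding an index) carries over directly to PostgreSQL and MySQL via their EXPLAIN statements.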
Posted 6 days ago
15.0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
Drive the Future of Data-Driven Entertainment Are you passionate about working with big data? Do you want to shape the direction of products that impact millions of users daily? If so, we want to connect with you. We’re seeking a leader for our Data Engineering team who will collaborate with Product Managers, Data Scientists, Software Engineers, and ML Engineers to support our AI infrastructure roadmap. In this role, you’ll design and implement the data architecture that guides decision-making and drives insights, directly impacting our platform’s growth and enriching user experiences. As a part of SonyLIV, you’ll work with some of the brightest minds in the industry, access one of the most comprehensive data sets in the world and leverage cutting-edge technology. Your contributions will have a tangible effect on the products we deliver and the viewers we engage. The ideal candidate will bring a strong foundation in data infrastructure and data architecture, a proven record of leading and scaling data teams, operational excellence to enhance efficiency and speed, and a visionary approach to how Data Engineering can drive company success. If you’re ready to make a significant impact in the world of OTT and entertainment, let’s talk. AVP, Data Engineering – SonyLIV Location: Bangalore Responsibilities: Define the Technical Vision for Scalable Data Infrastructure: Establish a robust technical strategy for SonyLIV’s data and analytics platform, architecting a scalable, high-performance data ecosystem using modern technologies like Spark, Kafka, Snowflake, and cloud services (AWS/GCP). Lead Innovation in Data Processing and Architecture: Advance SonyLIV’s data engineering practices by implementing real-time data processing, optimized ETL pipelines, and streaming analytics through tools like Apache Airflow, Spark, and Kubernetes. Enable high-speed data processing to support real-time insights for content and user engagement. 
Ensure Operational Excellence in Data Systems: Set and enforce standards for data reliability, privacy, and performance. Define SLAs for production data processes, using monitoring tools (Grafana, Prometheus) to maintain system health and quickly resolve issues. Build and Mentor a High-Caliber Data Engineering Team: Recruit and lead a skilled team with strengths in distributed computing, cloud infrastructure, and data security. Foster a collaborative and innovative culture, focused on technical excellence and efficiency. Collaborate with Cross-Functional Teams: Partner closely with Data Scientists, Software Engineers, and Product Managers to deliver scalable data solutions for personalization algorithms, recommendation engines, and content analytics. Architect and Manage Production Data Models and Pipelines: Design and launch production-ready data models and pipelines capable of supporting millions of users. Utilize advanced storage and retrieval solutions like Hive, Presto, and BigQuery to ensure efficient data access. Drive Data Quality and Business Insights: Implement automated quality frameworks to maintain data accuracy and reliability. Oversee the creation of BI dashboards and data visualizations using tools like Tableau and Looker, providing actionable insights into user engagement and content performance. This role offers the opportunity to lead SonyLIV’s data engineering strategy, driving technological innovation and operational excellence while enabling data-driven decisions that shape the future of OTT entertainment. Minimum Qualifications: 15+ years of progressive experience in data engineering, business intelligence, and data warehousing, including significant expertise in high-volume, real-time data environments. Proven track record in building, scaling, and managing large data engineering teams (10+ members), including experience managing managers and guiding teams through complex data challenges. 
Demonstrated success in designing and implementing scalable data architectures, with hands-on experience using modern data technologies (e.g., Spark, Kafka, Redshift, Snowflake, BigQuery) for data ingestion, transformation, and storage. Advanced proficiency in SQL and experience with at least one object-oriented programming language (Python, Java, or similar) for custom data solutions and pipeline optimization. Strong experience in establishing and enforcing SLAs for data availability, accuracy, and latency, with a focus on data reliability and operational excellence. Extensive knowledge of A/B testing methodologies and statistical analysis, including a solid understanding of the application of these techniques for user engagement and content analytics in OTT environments. Skilled in data governance, data privacy, and compliance, with hands-on experience implementing security protocols and controls within large data ecosystems. Preferred Qualifications: Bachelor's or Master’s degree in Computer Science, Mathematics, Physics, or a related technical field. Experience managing the end-to-end data engineering lifecycle, from model design and data ingestion through to visualization and reporting. Experience working with large-scale infrastructure, including cloud data warehousing, distributed computing, and advanced storage solutions. Familiarity with automated data lineage and data auditing tools to streamline data governance and improve transparency. Expertise with BI and visualization tools (e.g., Tableau, Looker) and advanced processing frameworks (e.g., Hive, Presto) for managing high-volume data sets and delivering insights across the organization. Why join us? CulverMax Entertainment Pvt Ltd (Formerly known as Sony Pictures Networks India) is home to some of India’s leading entertainment channels such as SET, SAB, MAX, PAL, PIX, Sony BBC Earth, Yay!, Sony Marathi, Sony SIX, Sony TEN, SONY TEN1, SONY Ten2, SONY TEN3, SONY TEN4, to name a few! 
Our foray into the OTT space with one of the most promising streaming platforms, Sony LIV, brings us one step closer to being a progressive, digitally led content powerhouse. Our independent production venture, Studio Next, has already made its mark with original content and IPs for TV and Digital Media. But our quest to Go Beyond doesn’t end there. Neither does our search to find people who can take us there. We focus on creating an inclusive and equitable workplace where we celebrate diversity with our Bring Your Own Self philosophy. We strive to remain an ‘Employer of Choice’ and have been recognized as:
- India’s Best Companies to Work For 2021 by the Great Place to Work® Institute.
- 100 Best Companies for Women in India by AVTAR & Seramount, for 6 years in a row.
- UN Women Empowerment Principles Award 2022 for Gender Responsive Marketplace and Community Engagement & Partnership.
- ET Human Capital Awards 2023 for Excellence in HR Business Partnership & Team Building Engagement.
- ET Future Skills Awards 2022 for Best Learning Culture in an Organization and Best D&I Learning Initiative.
The biggest award, of course, is the thrill our employees feel when they can Tell Stories Beyond the Ordinary!
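The A/B testing methodologies and statistical analysis named in the qualifications above reduce to comparisons like the following two-proportion z-test, stdlib only; the conversion counts are made up for illustration.

```python
# Two-proportion z-test: did variant B convert better than variant A?
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for H0: the two conversion rates are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 2.0% vs 2.6% conversion on 10,000 users each.
z = two_proportion_z(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
print(round(z, 2))  # 2.83; beyond 1.96, so significant at the 5% level
```

In an OTT context the "conversion" might be play-starts per impression for two recommendation variants; the test itself is unchanged.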
Posted 6 days ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join us at Seismic, a cutting-edge technology company leading the way in the SaaS industry. We specialize in delivering modern, scalable, and multi-cloud solutions that empower businesses to succeed in today's digital era. Leveraging the latest advancements in technology, including Generative AI, we are committed to driving innovation and transforming the way businesses operate. As we embark on an exciting journey of growth and expansion, we are seeking top engineering talent to join our AI team in Hyderabad, India. As an Engineer II, you will play a crucial role in developing and optimizing backend systems that power our web application, including content discovery, knowledge management, learning and coaching, meeting intelligence and various AI capabilities. You will collaborate with cross-functional teams to design, build, and maintain scalable, high-performance systems that deliver exceptional value to our customers. This position offers a unique opportunity to make a significant impact on our company's growth and success by contributing to the technical excellence and innovation of our software solutions. If you are a passionate technologist with a strong track record of building AI products, and you thrive in a fast-paced, innovative environment, we want to hear from you! Seismic AI AI is one of the fastest growing product areas in Seismic. We believe that AI, particularly Generative AI, will empower and transform how Enterprise sales and marketing organizations operate and interact with customers. Seismic Aura, our leading AI engine, is powering this change in the sales enablement space and is being infused across the Seismic enablement cloud. Our focus is to leverage AI across the Seismic platform to make our customers more productive and efficient in their day-to-day tasks, and to drive more successful sales outcomes. Why Join Us Opportunity to be a key technical leader in a rapidly growing company and drive innovation in the SaaS industry. 
Work with cutting-edge technologies and be at the forefront of AI advancements.
Competitive compensation package, including salary, bonus, and equity options.
A supportive, inclusive work culture.
Professional development opportunities and career growth potential in a dynamic and collaborative environment.

At Seismic, we’re committed to providing benefits and perks for the whole self. To explore our benefits available in each country, please visit the Global Benefits page. Please be aware we have noticed an increase in hiring scams potentially targeting Seismic candidates. Read our full statement on our Careers page.

Seismic is the global leader in AI-powered enablement, empowering go-to-market leaders to drive strategic growth and deliver exceptional customer experiences at scale. The Seismic Enablement Cloud™ is the only unified AI-powered platform that prepares customer-facing teams with the skills, content, tools, and insights needed to maximize every buyer interaction and strengthen client relationships. Trusted by more than 2,000 organizations worldwide, Seismic helps businesses achieve measurable outcomes and accelerate revenue growth. Seismic is headquartered in San Diego with offices across North America, Europe, Asia and Australia. Learn more at seismic.com. Seismic is committed to building an inclusive workplace that ignites growth for our employees and creates a culture of belonging that allows all employees to be seen and valued for who they are. Learn more about DEI at Seismic here.

Distributed Systems Development: Design, develop, and maintain backend systems and services for AI, information extraction or information retrieval functionality, ensuring high performance, scalability, and reliability.
Integration: Collaborate with data scientists, AI engineers, and product teams to integrate AI-driven capabilities across the Seismic platform.
Performance Tuning: Monitor and optimize service performance, addressing bottlenecks and ensuring low-latency query responses.
Technical Leadership: Provide technical guidance and mentorship to junior engineers, promoting best practices in backend software development.
Collaboration: Work closely with cross-functional and geographically distributed teams, including product managers, frontend engineers, and UX designers, to deliver seamless and intuitive experiences.
Continuous Improvement: Stay updated with the latest trends and advancements in software and technologies, conducting research and experimentation to drive innovation.
Experience: 2+ years of experience in software engineering and a proven track record of building and scaling microservices and working with data retrieval systems.
Technical Expertise: Experience with C# and .NET, unit testing, object-oriented programming, and relational databases. Experience with Infrastructure as Code (Terraform, Pulumi, etc.), event-driven architectures with tools like Kafka, and feature management (LaunchDarkly) is good to have. Front-end/full-stack experience is a plus.
Cloud Expertise: Experience with cloud platforms like AWS, Google Cloud Platform (GCP), or Microsoft Azure. Knowledge of cloud-native services for AI/ML, data storage, and processing. Experience deploying containerized applications into Kubernetes is a plus.
AI: Proficiency in building and deploying Generative AI use cases is a plus. Experience with Natural Language Processing (NLP). Semantic search with platforms like ElasticSearch is a plus.
SaaS Knowledge: Extensive experience in SaaS application development and cloud technologies, with a deep understanding of modern distributed systems and cloud operational infrastructure.
Product Development: Experience in collaborating with product management and design, with the ability to translate business requirements into technical solutions that drive successful delivery.
Proven record of driving feature development from concept to launch.
Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Fast-paced Environment: Experience working in a fast-paced, dynamic environment, preferably in a SaaS or technology-driven company.

If you are an individual with a disability and would like to request a reasonable accommodation as part of the application or recruiting process, please click here.

Headquartered in San Diego and with employees across the globe, Seismic is the global leader in sales enablement, backed by firms such as Permira, Ameriprise Financial, EDBI, Lightspeed Venture Partners, and T. Rowe Price. Seismic also expanded its team and product portfolio with the strategic acquisitions of SAVO, Percolate, Grapevine6, and Lessonly. Our board of directors is composed of several industry luminaries, including John Thompson, former Chairman of the Board for Microsoft.

Seismic is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to gender, age, race, religion, or any other classification which is protected by applicable law.

Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice.
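The semantic search mentioned in the qualifications above reduces to ranking documents by vector similarity between embeddings. A minimal, dependency-free sketch (the two-dimensional "embeddings" and document names are purely illustrative; a production system would use an NLP model and a platform such as ElasticSearch):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def semantic_search(query_vec, docs):
    """Rank documents by similarity of their embedding to the query's."""
    return sorted(docs, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)

# Illustrative 2-D vectors; real embeddings have hundreds of dimensions.
docs = [
    {"id": "pricing-faq", "vec": [1.0, 0.0]},
    {"id": "sales-deck",  "vec": [0.7, 0.7]},
    {"id": "hr-policy",   "vec": [0.0, 1.0]},
]
ranked = semantic_search([1.0, 0.1], docs)
```

The same ranking idea underlies vector databases: only the nearest-neighbour search is replaced by an approximate index for scale.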
Posted 6 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Company Description

About Sutherland
Artificial Intelligence. Automation. Cloud engineering. Advanced analytics. For business leaders, these are key factors of success. For us, they’re our core expertise.

We work with iconic brands worldwide. We bring them a unique value proposition through market-leading technology and business process excellence. We’ve created over 200 unique inventions under several patents across AI and other critical technologies. Leveraging our advanced products and platforms, we drive digital transformation, optimize critical business operations, reinvent experiences, and pioneer new solutions, all provided through a seamless “as a service” model.

For each company, we provide new keys for their businesses, the people they work with, and the customers they serve. We tailor proven and rapid formulas to fit their unique DNA. We bring together human expertise and artificial intelligence to develop digital chemistry. This unlocks new possibilities, transformative outcomes and enduring relationships.

Sutherland: Unlocking digital performance. Delivering measurable results.

Job Description
We are looking for a proactive and detail-oriented AI Ops Engineer to support the deployment, monitoring, and maintenance of AI/ML models in production. Reporting to the AI Developer, this role will focus on MLOps practices including model versioning, CI/CD, observability, and performance optimization in cloud and hybrid environments.

Key Responsibilities:
Build and manage CI/CD pipelines for ML models using platforms like MLflow, Kubeflow, or SageMaker.
Monitor model performance and health using observability tools and dashboards.
Ensure automated retraining, version control, rollback strategies, and audit logging for production models.
Support deployment of LLMs, RAG pipelines, and agentic AI systems in scalable, containerized environments.
Collaborate with AI Developers and Architects to ensure reliable and secure integration of models into enterprise systems.
Troubleshoot runtime issues, latency, and accuracy drift in model predictions and APIs.
Contribute to infrastructure automation using Terraform, Docker, Kubernetes, or similar technologies.

Qualifications

Required Qualifications:
3–5 years of experience in DevOps, MLOps, or platform engineering roles with exposure to AI/ML workflows.
Hands-on experience with deployment tools like Jenkins, Argo, GitHub Actions, or Azure DevOps.
Strong scripting skills (Python, Bash) and familiarity with cloud environments (AWS, Azure, GCP).
Understanding of containerization, service orchestration, and monitoring tools (Prometheus, Grafana, ELK).
Bachelor’s degree in computer science, IT, or a related field.

Preferred Skills:
Experience supporting GenAI or LLM applications in production.
Familiarity with vector databases, model registries, and feature stores.
Exposure to security and compliance standards in model lifecycle management.
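The drift monitoring named in the responsibilities above is often bootstrapped with a simple distribution-shift statistic before a full observability stack is in place. A hedged sketch using the Population Stability Index (the 0.2 alert threshold is a common rule of thumb, not a standard, and all sample values here are made up):

```python
import math

def psi(expected, actual, buckets=10):
    """Population Stability Index between a baseline sample and a live one.
    PSI > 0.2 is a widely used heuristic signal of distribution drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / buckets or 1.0  # degenerate case: all values equal

    def frac(sample, i):
        left, right = lo + i * width, lo + (i + 1) * width
        n = sum(left <= x < right or (i == buckets - 1 and x == hi) for x in sample)
        return max(n / len(sample), 1e-6)  # floor avoids log(0)

    return sum(
        (frac(actual, i) - frac(expected, i)) * math.log(frac(actual, i) / frac(expected, i))
        for i in range(buckets)
    )

baseline = [0.1, 0.2, 0.25, 0.3, 0.35, 0.4, 0.5, 0.6, 0.7, 0.8]
shifted = [x + 0.2 for x in baseline]  # simulated drift in model scores
```

In practice the PSI would be computed per feature and per prediction window, with the result exported to dashboards such as Grafana.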
Posted 6 days ago
8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Position: Staff Engineer - Data, Digital Business

Role Overview - Role involves leading SonyLIV's data engineering strategy, architecting scalable data infrastructure, driving innovation in data processing, ensuring operational excellence, and fostering a high-performance team to enable data-driven insights for OTT content and user engagement.

Location - Mumbai
Experience - 8+ years

Responsibilities:
Define the Technical Vision for Scalable Data Infrastructure: Establish a robust technical strategy for SonyLIV’s data and analytics platform, architecting a scalable, high-performance data ecosystem using modern technologies like Spark, Kafka, Snowflake, and cloud services (AWS/GCP).
Lead Innovation in Data Processing and Architecture: Advance SonyLIV’s data engineering practices by implementing real-time data processing, optimized ETL pipelines, and streaming analytics through tools like Apache Airflow, Spark, and Kubernetes. Enable high-speed data processing to support real-time insights for content and user engagement.
Ensure Operational Excellence in Data Systems: Set and enforce standards for data reliability, privacy, and performance. Define SLAs for production data processes, using monitoring tools (Grafana, Prometheus) to maintain system health and quickly resolve issues.
Build and Mentor a High-Caliber Data Engineering Team: Recruit and lead a skilled team with strengths in distributed computing, cloud infrastructure, and data security. Foster a collaborative and innovative culture, focused on technical excellence and efficiency.
Collaborate with Cross-Functional Teams: Partner closely with Data Scientists, Software Engineers, and Product Managers to deliver scalable data solutions for personalization algorithms, recommendation engines, and content analytics.
Architect and Manage Production Data Models and Pipelines: Design and launch production-ready data models and pipelines capable of supporting millions of users.
Utilize advanced storage and retrieval solutions like Hive, Presto, and BigQuery to ensure efficient data access.
Drive Data Quality and Business Insights: Implement automated quality frameworks to maintain data accuracy and reliability. Oversee the creation of BI dashboards and data visualizations using tools like Tableau and Looker, providing actionable insights into user engagement and content performance.

This role offers the opportunity to lead SonyLIV’s data engineering strategy, driving technological innovation and operational excellence while enabling data-driven decisions that shape the future of OTT entertainment.

Minimum Qualifications:
8+ years of progressive experience in data engineering, business intelligence, and data warehousing, including significant expertise in high-volume, real-time data environments.
Proven track record in building, scaling, and managing large data engineering teams (10+ members), including experience managing managers and guiding teams through complex data challenges.
Demonstrated success in designing and implementing scalable data architectures, with hands-on experience using modern data technologies (e.g., Spark, Kafka, Redshift, Snowflake, BigQuery) for data ingestion, transformation, and storage.
Advanced proficiency in SQL and experience with at least one object-oriented programming language (Python, Java, or similar) for custom data solutions and pipeline optimization.
Strong experience in establishing and enforcing SLAs for data availability, accuracy, and latency, with a focus on data reliability and operational excellence.
Extensive knowledge of A/B testing methodologies and statistical analysis, including a solid understanding of the application of these techniques for user engagement and content analytics in OTT environments.
Skilled in data governance, data privacy, and compliance, with hands-on experience implementing security protocols and controls within large data ecosystems.
Preferred Qualifications:
Bachelor's or Master's degree in Computer Science, Mathematics, Physics, or a related technical field.
Experience managing the end-to-end data engineering lifecycle, from model design and data ingestion through to visualization and reporting.
Experience working with large-scale infrastructure, including cloud data warehousing, distributed computing, and advanced storage solutions.
Familiarity with automated data lineage and data auditing tools to streamline data governance and improve transparency.
Expertise with BI and visualization tools (e.g., Tableau, Looker) and advanced processing frameworks (e.g., Hive, Presto) for managing high-volume data sets and delivering insights across the organization.

Why SPNI?
Join Our Team at SonyLIV: Drive the Future of Data-Driven Entertainment
Are you passionate about working with big data? Do you want to shape the direction of products that impact millions of users daily? If so, we want to connect with you. We’re seeking a leader for our Data Engineering team who will collaborate with Product Managers, Data Scientists, Software Engineers, and ML Engineers to support our AI infrastructure roadmap. In this role, you’ll design and implement the data architecture that guides decision-making and drives insights, directly impacting our platform’s growth and enriching user experiences. As a part of SonyLIV, you’ll work with some of the brightest minds in the industry, access one of the most comprehensive data sets in the world and leverage cutting-edge technology. Your contributions will have a tangible effect on the products we deliver and the viewers we engage. The ideal candidate will bring a strong foundation in data infrastructure and data architecture, a proven record of leading and scaling data teams, operational excellence to enhance efficiency an
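The A/B testing expertise asked for above usually starts with a two-proportion z-test comparing conversion rates between control and treatment groups. A minimal sketch (the visitor counts and conversions below are invented numbers; 1.96 is the usual two-sided 5% critical value):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for comparing two conversion rates, pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: 4.8% vs 5.6% conversion on 10k users per arm.
z = two_proportion_z(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
# |z| > 1.96 corresponds to p < 0.05 for a two-sided test.
```

Real OTT experimentation adds guardrail metrics, sequential-testing corrections, and per-segment breakdowns on top of this basic comparison.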
Posted 6 days ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
WHAT MAKES US A GREAT PLACE TO WORK
We are proud to be consistently recognized as one of the world’s best places to work. We are currently the #1 ranked consulting firm on Glassdoor’s Best Places to Work list and have maintained a spot in the top four on Glassdoor’s list since its founding in 2009. Extraordinary teams are at the heart of our business strategy, but these don’t happen by chance. They require intentional focus on bringing together a broad set of backgrounds, cultures, experiences, perspectives, and skills in a supportive and inclusive work environment. We hire people with exceptional talent and create an environment in which every individual can thrive professionally and personally.

WHO YOU’LL WORK WITH
You’ll join our Application Engineering experts within the AI, Insights & Solutions team. This team is part of Bain’s digital capabilities practice, which includes experts in analytics, engineering, product management, and design. In this multidisciplinary environment, you'll leverage deep technical expertise with business acumen to help clients tackle their most transformative challenges. You’ll work on integrated teams alongside our general consultants and clients to develop data-driven strategies and innovative solutions. Together, we create human-centric solutions that harness the power of data and artificial intelligence to drive competitive advantage for our clients. Our collaborative and supportive work environment fosters creativity and continuous learning, enabling us to consistently deliver exceptional results.

WHAT YOU’LL DO
Design, develop, and maintain cloud-based AI applications, leveraging a full-stack technology stack to deliver high-quality, scalable, and secure solutions.
Collaborate with cross-functional teams, including product managers, data scientists, and other engineers, to define and implement analytics features and functionality that meet business requirements and user needs.
Utilize Kubernetes and containerization technologies to deploy, manage, and scale analytics applications in cloud environments, ensuring optimal performance and availability.
Develop and maintain APIs and microservices to expose analytics functionality to internal and external consumers, adhering to best practices for API design and documentation.
Implement robust security measures to protect sensitive data and ensure compliance with data privacy regulations and organizational policies.
Continuously monitor and troubleshoot application performance, identifying and resolving issues that impact system reliability, latency, and user experience.
Participate in code reviews and contribute to the establishment and enforcement of coding standards and best practices to ensure high-quality, maintainable code.
Stay current with emerging trends and technologies in cloud computing, data analytics, and software engineering, and proactively identify opportunities to enhance the capabilities of the analytics platform.
Collaborate with DevOps and infrastructure teams to automate deployment and release processes, implement CI/CD pipelines, and optimize the development workflow for the analytics engineering team.
Collaborate closely with and influence business consulting staff and leaders as part of multi-disciplinary teams to assess opportunities and develop analytics solutions for Bain clients across a variety of sectors.
Influence, educate and directly support the analytics application engineering capabilities of our clients.
Travel is required (30%).

ABOUT YOU

Required
Master’s degree in Computer Science, Engineering, or a related technical field.
6+ years at Senior or Staff level, or equivalent
Experience with client-side technologies such as React, Angular, Vue.js, HTML and CSS
Experience with server-side technologies such as Django, Flask, FastAPI
Experience with cloud platforms and services (AWS, Azure, GCP) via Terraform automation (good to have)
3+ years of Python expertise
Use Git as your main tool for versioning and collaborating
Experience with DevOps, CI/CD, GitHub Actions
Demonstrated interest in LLMs, prompt engineering, LangChain
Experience with workflow orchestration - it doesn’t matter if it’s dbt, Beam, Airflow, Luigi, Metaflow, Kubeflow, or any other
Experience implementing large-scale structured or unstructured databases, orchestration, and container technologies such as Docker or Kubernetes
Strong interpersonal and communication skills, including the ability to explain and discuss complex engineering technicalities with colleagues and clients from other disciplines at their level of understanding
Curiosity, proactivity and critical thinking
Strong computer science fundamentals in data structures, algorithms, automated testing, object-oriented programming, performance complexity, and implications of computer architecture on software performance
Strong knowledge of designing API interfaces
Knowledge of data architecture, database schema design and database scalability
Agile development methodologies
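The workflow orchestrators listed above (Airflow, Kubeflow, Metaflow, and friends) all reduce to scheduling a DAG of tasks in dependency order. A framework-free sketch using Python's standard-library `graphlib`, with task names invented purely for illustration:

```python
from graphlib import TopologicalSorter

# A hypothetical analytics pipeline: task -> set of upstream dependencies,
# in the spirit of an Airflow DAG definition.
pipeline = {
    "extract": set(),
    "clean": {"extract"},
    "features": {"clean"},
    "train": {"features"},
    "report": {"clean"},
}

def run_order(dag):
    """Return one valid execution order for the DAG (raises on cycles)."""
    return list(TopologicalSorter(dag).static_order())

order = run_order(pipeline)
```

Production orchestrators add retries, scheduling, and distributed execution, but the dependency-resolution core is exactly this topological sort.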
Posted 6 days ago
1.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description
The Seller Flex team, located in Bangalore, is looking for an SDE to deliver strategic goals for Amazon eCommerce systems. This is an opportunity to join our mission to build tech solutions that empower sellers to delight the next billion customers. You will be responsible for building new system capabilities from the ground up for strategic business initiatives. If you feel excited by the challenge of setting the course for large company-wide initiatives and building and launching customer-facing products in international locales, this may be the next big career move for you. We are building systems which can scale across multiple marketplaces and automate a large-scale eCommerce business. We are looking for an SDE1 to design and build our tech stack as a coherent architecture and deliver capabilities across marketplaces. We operate in a high-performance, co-located, agile ecosystem where SDEs, Product Managers and Principals frequently connect with end customers of our products. Our SDEs stay connected with customers through seller/FC/Delivery Station visits and customer anecdotes. This allows our engineers to significantly influence the product roadmap, contribute to PRFAQs and create disproportionate impact through the tech they deliver. We offer technology leaders a once-in-a-lifetime opportunity to transform billions of lives across the planet through their tech innovations. In this role, you will have front-row seats to how we are disrupting eCommerce fulfilment and supply chain by offering creative solutions to yet-unsolved problems. We operate like a start-up within the Amazon ecosystem and have a proven track record of delivering inventions that work globally. You will be challenged to look at the world through the eyes of our seller customers and think outside the box to build new tech solutions to make our Sellers successful.
You will often find yourself building products and services that are new to Amazon and will have an opportunity to pioneer not just the technology components but the idea itself across other Amazon teams and markets.

See below for a couple of anecdotes, should you want to hear an SDE’s perspective on what it is like to work in this team:

“I have worked on other global tech platforms at Amazon prior to SellerFlex and what I find extremely different and satisfying here is that, in addition to the scale and complexity of work that I do and the customer impact it has, I am part of a team that makes SDEs owners of critical aspects of team functioning - whether it be designing and running engineering excellence programs for design reviews, COE, CR, MCM and Service launch bar raisers, or the operational programs for the team. This has allowed me to develop myself not just on the tech or domain as an SDE but also as a wholesome Amazon tech leader for future challenges.”

“It is extremely empowering to be a part of this team where I am challenged to learn and innovate in every project that I work on. I get to work across the tech stack and have end-to-end ownership of solution and tech choices. I hadn’t worked on as many services in my previous team at Amazon as I have built from scratch, launched and scaled in this team.
The team is in a great place where it is connected to customers closely, is building new stuff from scratch and has to deal with a very light Ops burden due to the great architecture and design choices that are being made by SDEs.”

KEY RESPONSIBILITIES
Work closely with senior and principal engineers to architect and deliver high-quality technology solutions
Own development in multiple layers of the stack, including distributed workflows hosted in native AWS architecture
Operational rigor for a rapidly growing tech stack
Contribute to patents, tech talks and innovation drives
Assist in the continual hiring and development of technical talent
Measure success metrics and influence evolution of the tech product

Basic Qualifications
Bachelor’s degree or higher in Computer Science and 1+ years of Software Development experience
Proven track record of building large-scale, highly available, low-latency, high-quality distributed systems and software products
Extremely sound understanding of basic areas of Computer Science such as Algorithms, Data Structures, Object-Oriented Design, and Databases
Good understanding of AWS services such as EC2, S3, DynamoDB, Elasticsearch, Lambda, API Gateway, ECR, ECS, Lex etc.
Excellent coding skills in an object-oriented language such as Java or Scala
Great problem-solving skills and a propensity to learn and develop tech talent

Basic Qualifications
3+ years of non-internship professional software development experience
2+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience
3+ years of computer science fundamentals (object-oriented design, data structures, algorithm design, problem solving and complexity analysis) experience
Experience programming with at least one software programming language

Preferred Qualifications
3+ years of full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations experience
Bachelor's degree in computer science or equivalent

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI - Karnataka
Job ID: A2980587
Posted 6 days ago
12.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Designation: Associate Vice President - Android, Digital Business (12+ years)
Location: Gurugram / Bangalore

About the Role
Sony LIV is on a mission to deliver a world-class streaming experience to millions of users across devices. We’re looking for an experienced Android engineer with deep expertise in ExoPlayer/Media3 and strong command over the Android framework, who thrives in a hybrid role — leading cross-functional initiatives while also diving deep into code to deliver high-performance, scalable solutions. This is a unique opportunity to own critical areas of media playback and performance, contribute to architecture decisions, and drive collaboration across engineering, product, and QA.

What You’ll Do
Lead the design and implementation of advanced media playback workflows using ExoPlayer (Media3), ensuring low latency, seamless buffering, adaptive streaming, and DRM integrations.
Drive performance improvements across app launch, playback, memory, and battery.
Collaborate closely with product managers, iOS/web counterparts, backend, and QA to build delightful and robust video experiences.
Mentor and guide a team of Android engineers — promote clean architecture, code quality, and modern development practices.
Contribute individually to high-priority feature development and performance debugging.
Stay ahead of Android platform updates and integrate Jetpack libraries, modern UI frameworks, and best practices (e.g., Kotlin Coroutines, Hilt, Jetpack Compose, Paging, etc.)
Own cross-functional technical discussions for media strategy, caching, telemetry, offline, or A/V compliance.

What We’re Looking For
10–15 years of Android development experience with strong fundamentals in Kotlin, ExoPlayer/Media3, and the Android media framework.
Deep understanding of streaming protocols (HLS/DASH), adaptive bitrate streaming, DRM (Widevine), and analytics tagging.
Experience in performance optimization – memory, power, cold start, and playback smoothness.
Hands-on with modern Android stack: Jetpack Compose, Kotlin Flows, WorkManager, ViewModel, Room, Hilt/Dagger, etc.
Familiarity with CI/CD, app modularization, crash analytics, and A/B experimentation frameworks (e.g., Firebase, AppCenter, etc.).
Comfortable navigating ambiguity — can switch gears between IC and leadership responsibilities based on the team’s needs.
Strong communication skills and ability to collaborate across teams and functions.

Nice to Have
Experience with Android TV / Fire TV or other large-screen form factors.
Prior work on live streaming, low-latency playback, or sports content.
Familiarity with AV1, Dolby Vision/Atmos, or advanced video/audio codecs.
Contributions to open-source media libraries or ExoPlayer itself.

Why Sony?
Sony Pictures Networks is home to some of India’s leading entertainment channels such as SET, SAB, MAX, PAL, PIX, Sony BBC Earth, Yay!, Sony Marathi, Sony SIX, Sony TEN, Sony TEN1, SONY Ten2, SONY TEN3, SONY TEN4, to name a few! Our foray into the OTT space with one of the most promising streaming platforms, Sony LIV, brings us one step closer to being a progressive, digitally-led content powerhouse. Our independent production venture, Studio Next, has already made its mark with original content and IPs for TV and Digital Media. But our quest to Go Beyond doesn’t end there. Neither does our search to find people who can take us there. We focus on creating an inclusive and equitable workplace where we celebrate diversity with our Bring Your Own Self philosophy and are recognised as a Great Place to Work.
- Great Place to Work Institute - Ranked as one of the Great Places to Work for the past 5 years
- Included in the Hall of Fame as a part of the Working Mother & Avtar Best Companies for Women in India study - Ranked amongst 100 Best Companies for Women in India
- ET Human Capital Awards 2021 - Winner across multiple categories
- Brandon Hall Group HCM Excellence Award - Outstanding Learning Practices.
The biggest award, of course, is the thrill our employees feel when they can Tell Stories Beyond the Ordinary!
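Returning to the adaptive bitrate streaming expertise this role calls for: the core rate-based ABR decision can be shown in a few lines. This is a hedged, language-agnostic illustration written in Python (the ladder values and 0.8 safety factor are invented for the example, not ExoPlayer's actual defaults):

```python
def select_rendition(ladder_kbps, measured_kbps, safety=0.8):
    """Pick the highest rendition whose bitrate fits within a safety
    fraction of measured throughput (rate-based ABR heuristic)."""
    budget = measured_kbps * safety
    eligible = [r for r in sorted(ladder_kbps) if r <= budget]
    # Fall back to the lowest rendition when nothing fits the budget.
    return eligible[-1] if eligible else min(ladder_kbps)

ladder = [300, 750, 1500, 3000, 6000]  # illustrative HLS/DASH ladder
```

Real players such as ExoPlayer combine this throughput rule with buffer occupancy and switch hysteresis to avoid oscillating between renditions.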
Posted 6 days ago
15.0 years
0 Lacs
India
Remote
About Us
MyRemoteTeam, Inc is a fast-growing distributed workforce enabler, helping companies scale with top global talent. We empower businesses by providing world-class software engineers, operations support, and infrastructure to help them grow faster and better.

Job Title: AWS Cloud Architect
Experience: 15+ Years

Mandatory Skills
✔ 15+ years in Java Full Stack (Spring Boot, Microservices, ReactJS)
✔ Cloud Architecture: AWS EKS, Kubernetes, API Gateway (APIGEE/Tyk)
✔ Event Streaming: Kafka, RabbitMQ
✔ Database Mastery: PostgreSQL (performance tuning, scaling)
✔ DevOps: GitLab CI/CD, Terraform, Grafana/Prometheus
✔ Leadership: Technical mentoring, decision-making

About the Role
We are seeking a highly experienced AWS Cloud Architect with 15+ years of expertise in full-stack Java development, cloud-native architecture, and large-scale distributed systems. The ideal candidate will be a technical leader capable of designing, implementing, and optimizing high-performance cloud applications across on-premise and multi-cloud environments (AWS). This role requires deep hands-on skills in Java, Microservices, Kubernetes, Kafka, and observability tools, along with a strong architectural mindset to drive innovation and mentor engineering teams.

Key Responsibilities
✅ Cloud-Native Architecture & Leadership:
Lead the design, development, and deployment of scalable, fault-tolerant cloud applications (AWS EKS, Kubernetes, Serverless).
Define best practices for microservices, event-driven architecture (Kafka), and API management (APIGEE/Tyk).
Architect hybrid cloud solutions (on-premise + AWS/GCP) with security, cost optimization, and high availability.
✅ Full-Stack Development:
Develop backend services using Java, Spring Boot, and PostgreSQL (performance tuning, indexing, replication).
Build modern frontends with ReactJS (state management, performance optimization).
Design REST/gRPC APIs and event-driven systems (Kafka, SQS).
✅ DevOps & Observability:
Manage Kubernetes (EKS) clusters, Helm charts, and GitLab CI/CD pipelines.
Implement Infrastructure as Code (IaC) using Terraform/CloudFormation.
Set up monitoring (Grafana, Prometheus), logging (ELK), and alerting for production systems.
✅ Database & Performance Engineering:
Optimize PostgreSQL for high throughput, replication, and low-latency queries.
Troubleshoot database bottlenecks, caching (Redis), and connection pooling.
Design data migration strategies (on-premise → cloud).
✅ Mentorship & Innovation:
Mentor junior engineers and conduct architecture reviews.
Drive POCs on emerging tech (Service Mesh, Serverless, AI/ML integrations).
Collaborate with CTO/Architects on long-term technical roadmaps.
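The event-driven architecture this role emphasizes decouples producers from consumers via topics. The shape of that pattern can be sketched with an in-memory stand-in; this illustration mimics only the publish/subscribe interface, not Kafka's persistence, partitioning, or delivery guarantees, and every name in it is hypothetical:

```python
from collections import defaultdict

class EventBus:
    """In-memory stand-in for a Kafka-style topic/subscriber model."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a callback to receive every event on a topic."""
        self._subs[topic].append(handler)

    def publish(self, topic, event):
        """Deliver an event to all handlers subscribed to the topic."""
        for handler in self._subs[topic]:
            handler(event)

bus = EventBus()
seen = []
bus.subscribe("orders", seen.append)
bus.publish("orders", {"id": 1, "total": 42})
```

The value of the pattern is that new consumers (billing, analytics, notifications) can subscribe to the same topic without any change to the producer.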
Posted 6 days ago
5.0 years
5 - 7 Lacs
Thiruvananthapuram
On-site
5 - 7 Years | 1 Opening | Trivandrum

Role description

Role Proficiency: Act creatively to develop applications and select appropriate technical options, optimizing application development, maintenance and performance by employing design patterns and reusing proven solutions; account for others' developmental activities.

Outcomes:
Interpret the application/feature/component design to develop the same in accordance with specifications.
Code, debug, test, document and communicate product/component/feature development stages.
Validate results with user representatives; integrate and commission the overall solution.
Select appropriate technical options for development, such as reusing, improving or reconfiguring existing components or creating own solutions.
Optimize efficiency, cost and quality.
Influence and improve customer satisfaction.
Set FAST goals for self/team; provide feedback on FAST goals of team members.

Measures of Outcomes:
Adherence to engineering process and standards (coding standards)
Adherence to project schedule/timelines
Number of technical issues uncovered during the execution of the project
Number of defects in the code
Number of defects post delivery
Number of non-compliance issues
On-time completion of mandatory compliance trainings

Outputs Expected:
Code: Code as per design. Follow coding standards, templates and checklists. Review code for team and peers.
Documentation: Create/review templates, checklists, guidelines and standards for design/process/development. Create/review deliverable documents.
Design documentation and requirements, test cases/results.
Configure: Define and govern the configuration management plan; ensure compliance from the team.
Test: Review and create unit test cases, scenarios and execution. Review the test plan created by the testing team. Provide clarifications to the testing team.
Domain relevance: Advise software developers on the design and development of features and components with a deep understanding of the business problem being addressed for the client. Learn more about the customer domain, identifying opportunities to provide valuable additions to customers. Complete relevant domain certifications.
Manage Project: Manage delivery of modules and/or manage user stories.
Manage Defects: Perform defect RCA and mitigation. Identify defect trends and take proactive measures to improve quality.
Estimate: Create and provide input for effort estimation for projects.
Manage Knowledge: Consume and contribute to project-related documents, SharePoint libraries and client universities. Review the reusable documents created by the team.
Release: Execute and monitor the release process.
Design: Contribute to the creation of design (HLD, LLD, SAD)/architecture for applications/features/business components/data models.
Interface with Customer: Clarify requirements and provide guidance to the development team. Present design options to customers. Conduct product demos.
Manage Team: Set FAST goals and provide feedback. Understand the aspirations of team members and provide guidance, opportunities, etc. Ensure the team is engaged in the project.
Certifications: Take relevant domain/technology certifications.
Skill Examples: Explain and communicate the design/development to the customer. Perform and evaluate test results against product specifications. Break down complex problems into logical components. Develop user interfaces and business software components. Use data models. Estimate time and effort required for developing/debugging features/components. Perform and evaluate tests in the customer or target
environment. Make quick decisions on technical/project-related challenges. Manage a team: mentor and handle people-related issues, and maintain high motivation levels and positive dynamics in the team. Interface with other teams, designers and other parallel practices. Set goals for self and team; provide feedback to team members. Create and articulate impactful technical presentations. Follow a high level of business etiquette in emails and other business communication. Drive conference calls with customers, addressing customer questions. Proactively ask for and offer help. Ability to work under pressure, determine dependencies and risks, facilitate planning, and handle multiple tasks. Build confidence with customers by meeting deliverables on time with quality. Estimate the time, effort and resources required for developing/debugging features/components. Make appropriate utilization of software/hardware. Strong analytical and problem-solving abilities.
Knowledge Examples: Appropriate software programs/modules. Functional and technical design. Programming languages: proficient in multiple skill clusters. DBMS. Operating systems and software platforms. Software Development Life Cycle. Agile (Scrum or Kanban) methods. Integrated development environments (IDE). Rapid application development (RAD). Modelling technology and languages. Interface definition languages (IDL). Knowledge of the customer domain and deep understanding of the sub-domain where the problem is solved.
Additional Comments: Design, build, and maintain robust, reactive REST APIs using Spring WebFlux and Spring Boot. Develop and optimize microservices that handle high throughput and low latency. Write clean, testable, maintainable code in Java. Integrate with MongoDB for CRUD operations, aggregation pipelines, and indexing strategies. Apply best practices in API security, versioning, error handling, and documentation. Collaborate with front-end developers, DevOps, QA, and product teams. Troubleshoot and debug production issues,
identify root causes, and deploy fixes quickly.
Required Skills & Experience: Strong programming experience in Java 17+. Proficiency in Spring Boot, Spring WebFlux, and Spring MVC. Solid understanding of reactive programming principles. Proven experience designing and implementing microservices architecture. Hands-on expertise with MongoDB, including schema design and performance tuning. Experience with RESTful API design and HTTP fundamentals. Working knowledge of build tools like Maven or Gradle. Good grasp of CI/CD pipelines and deployment strategies.
Skills: Spring WebFlux, Spring Boot, Kafka
About UST: UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with its clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into its clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.
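The reactive, non-blocking style this role calls for (Spring WebFlux in Java) is language-agnostic in principle: many concurrent requests are served by a small number of threads that never block on I/O. A minimal sketch using Python's asyncio, purely to illustrate the idea (the handler and its simulated database call are hypothetical, not part of the posting):

```python
import asyncio

async def handle_request(request_id: int) -> str:
    # Simulate a non-blocking I/O call (e.g., a database query);
    # the event loop is free to serve other requests while we await.
    await asyncio.sleep(0.01)
    return f"response-{request_id}"

async def main():
    # Serve 100 "requests" concurrently on a single thread.
    return await asyncio.gather(*(handle_request(i) for i in range(100)))

results = asyncio.run(main())
print(len(results), results[0])  # 100 response-0
```

All 100 handlers overlap during their waits, so the whole batch completes in roughly one sleep interval rather than one hundred, which is the throughput argument behind reactive stacks like WebFlux.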
Posted 6 days ago
3.0 years
0 Lacs
Hyderābād
On-site
Who We Are
At Goldman Sachs, we connect people, capital and ideas to help solve problems for our clients. We are a leading global financial services firm providing investment banking, securities and investment management services to a substantial and diversified client base that includes corporations, financial institutions, governments and individuals.
How will you fulfill your potential?
Work with a global team of highly motivated platform engineers and software developers building integrated architectures for secure, scalable infrastructure services serving a diverse set of use cases. Partner with colleagues from across technology and risk to ensure an outstanding platform is delivered. Help to provide frictionless integration with the firm’s runtime, deployment and SDLC technologies. Collaborate on feature design and problem solving. Help to ensure reliability; define, measure, and meet service level objectives. Quality coding & integration, testing, release, and demise of software products supporting AWM functions. Engage in quality assurance and production troubleshooting. Help to communicate and promote best practices for software engineering across the Asset Management tech stack.
Basic Qualifications
A strong grounding in software engineering concepts and implementation of architecture design patterns. A good understanding of multiple aspects of software development in microservices architecture, full stack development experience, identity/access management and technology risk. Sound SDLC practices and tooling experience - version control, CI/CD and configuration management tools. Ability to communicate technical concepts effectively, both written and orally, as well as the interpersonal skills required to collaborate effectively with colleagues across diverse technology teams. Experience meeting demands for high availability and scalable system requirements.
Ability to reason about performance, security, and process interactions in complex distributed systems. Ability to understand and effectively debug both new and existing software. Experience with metrics and monitoring tooling, including the ability to use metrics to rationally derive system health and availability information. Experience in auditing and supporting software based on sound SRE principles. Preferred Qualifications 3+ Years of Experience using and/or supporting Java based frameworks & SQL / NOSQL data stores. Experience with deploying software to containerized environments - Kubernetes/Docker. Scripting skills using Python, Shell or bash. Experience with Terraform or similar infrastructure-as-code platforms. Experience building services using public cloud providers such as AWS, Azure or GCP. Goldman Sachs Engineering Culture At Goldman Sachs, our Engineers don’t just make things – we make things possible. Change the world by connecting people and capital with ideas. Solve the most challenging and pressing engineering problems for our clients. Join our engineering teams that build massively scalable software and systems, architect low latency infrastructure solutions, proactively guard against cyber threats, and leverage machine learning alongside financial engineering to continuously turn data into action. Create new businesses, transform finance, and explore a world of opportunity at the speed of markets. Engineering is at the critical center of our business, and our dynamic environment requires innovative strategic thinking and immediate, real solutions. Want to push the limit of digital possibilities? Start here! © The Goldman Sachs Group, Inc., 2025. All rights reserved. Goldman Sachs is an equal employment/affirmative action employer Female/Minority/Disability/Veteran/Sexual Orientation/Gender Identity.
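The SRE practice mentioned above of using metrics to rationally derive system health often reduces to simple error-budget arithmetic against an SLO. A hedged sketch (the 99.9% target and the request counts are illustrative examples, not figures from the posting):

```python
def error_budget_remaining(total_requests: int, failed_requests: int,
                           slo_target: float = 0.999) -> float:
    """Fraction of the error budget still unspent for a request-based SLO."""
    allowed_failures = total_requests * (1 - slo_target)
    if allowed_failures == 0:
        return 0.0
    return 1 - (failed_requests / allowed_failures)

# 1,000,000 requests against a 99.9% SLO allow ~1,000 failures;
# 250 observed failures spend about 25% of the budget.
remaining = error_budget_remaining(1_000_000, 250)
print(round(remaining, 2))  # 0.75
```

Tracking this number over a rolling window is what turns raw monitoring data into an availability decision, e.g. whether to freeze risky releases when the remaining budget approaches zero.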
Posted 6 days ago
2.0 years
1 - 6 Lacs
Hyderābād
Remote
Software Engineer | Hyderabad, Telangana, India
Date posted: Jul 17, 2025 | Job number: 1832398 | Work site: Up to 50% work from home | Travel: 0-25% | Role type: Individual Contributor | Profession: Software Engineering | Discipline: Software Engineering | Employment type: Full-Time
Overview
Microsoft is a company where passionate innovators come to collaborate, envision what can be and take their careers further. This is a world of more possibilities, more innovation, more openness, and sky-is-the-limit thinking in a cloud-enabled world. Microsoft’s Azure Data engineering team is leading the transformation of analytics in the world of data with products like databases, data integration, big data analytics, messaging & real-time analytics, and business intelligence. The products in our portfolio include Microsoft Fabric, Azure SQL DB, Azure Cosmos DB, Azure PostgreSQL, Azure Data Factory, Azure Synapse Analytics, Azure Service Bus, Azure Event Grid, and Power BI. Our mission is to build the data platform for the age of AI, powering a new class of data-first applications and driving a data culture. Within Azure Data, the data integration team builds data gravity on the Microsoft Cloud. Massive volumes of data are generated – not just from transactional systems of record, but also from the world around us. Our data integration products – Azure Data Factory and Power Query – make it easy for customers to bring in, clean, shape, and join data, to extract intelligence. The Fabric Data Integration team is currently seeking a Software Engineer to join their team. This team is in charge of designing, building, and operating a next-generation service that transfers large volumes of data from various source systems to target systems with minimal latency while providing a data-centric orchestration platform. The team focuses on advanced data movement/replication scenarios while maintaining user-friendly interfaces.
Working collaboratively, the team utilizes a range of technologies to deliver high-quality products at a fast pace. We do not just value differences or different perspectives. We seek them out and invite them in so we can tap into the collective power of everyone in the company. As a result, our customers are better served.
Qualifications
Required/Minimum Qualifications: Bachelor's degree in computer science or a related technical discipline AND 2+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.
Other Requirements: Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings: Microsoft Cloud Background Check: this position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.
Preferred/Additional Qualifications: Bachelor's degree in Computer Science or a related technical field AND 1+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java; OR Master's degree in Computer Science or a related technical field AND 1+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java; OR equivalent experience. 1+ years of experience in developing and shipping system-level features in an enterprise production backend server system. Experience building distributed systems with reliable guarantees. Understanding of data structures, algorithms, and distributed systems. Solve problems by always leading with passion and empathy for customers. Have a desire to work collaboratively, solve problems with groups, find win/win solutions and celebrate successes. Enthusiasm, integrity, self-discipline, and results-orientation in a fast-paced environment.
1+ years of experience building and supporting distributed cloud services at production grade. #azdat #azuredata #azdataintegration
Responsibilities
Build cloud-scale products with a focus on efficiency, reliability and security. Build and maintain end-to-end build, test and deployment pipelines. Deploy and manage massive Hadoop, Spark and other clusters. Contribute to the architecture & design of the products. Triage issues and implement solutions to restore service with minimal disruption to the customer and business. Perform root cause analysis, trend analysis and post-mortems. Own components and drive them end to end, all the way from gathering requirements through development, testing and deployment to ensuring high quality and availability post deployment. Embody our culture and values.
Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work: industry-leading healthcare, educational resources, discounts on products and services, savings and investments, maternity and paternity leave, generous time away, giving programs, and opportunities to network and connect.
Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances.
If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
Posted 6 days ago
0 years
0 Lacs
India
Remote
Healthcare AI Fellowship (Engineering / Data Science) Full-time contract, 6 months (Remote) About iksa.ai Iksa is the AI operating layer for innovative Life Sciences organizations and Specialty Healthcare Providers to customize privacy-first agentic systems. Our leadership and core engineers have shipped together for years and keep the team deliberately small. What you’ll do Design, train and iterate on agentic pipelines Build evaluation frameworks and tooling adopted across the engineering group Collaborate daily with physicians, ML engineers and data architects Document and share technical insights internally (and publicly when appropriate) Must-have Strong Python plus mastery of at least one ML framework (PyTorch, TensorFlow, JAX, etc.) Demonstrated experience shipping applied AI or data-intensive products Proficiency in architecting, evaluating and deploying AI systems end-to-end Ownership mindset and comfort with rapid iteration Nice-to-have Prior exposure to healthcare data, standards or compliance constraints Familiarity with retrieval-augmented generation, property-graph modeling, vector databases, or agent orchestration Success in six months A production-grade multi-agent pipeline integrated into our core platform Reusable tooling or documentation adopted by the wider team Measurable gains in system performance, latency, or reliability Why iksa.ai Direct mentorship from seasoned healthcare and AI leaders Accelerated growth in a high-engagement, low-bureaucracy environment Influence over product direction and visibility across international roll-outs Apply now with your CV and a short note on a recent AI project you’re proud of. We review applications on a rolling basis and will reach out if there’s a strong fit.
Posted 6 days ago
3.0 years
6 - 9 Lacs
Gurgaon
On-site
Required Experience: 3 - 7 Years. Skills: Linux, kernel, device drivers, and more. Seeking an experienced Embedded Linux Test Engineer to validate and quality-assure a Yocto-based Linux BSP across diverse SoCs (e.g., QCS6490, QRB5165, QCS8550). The ideal candidate will design and execute comprehensive test plans, drive development of test infrastructure, and collaborate with firmware/kernel teams to ensure robust, reliable SoC platform support.
Key Responsibilities
Develop test plans and test cases for system, integration, and regression testing on mobile and IoT-class SoCs (e.g., camera, multimedia, networking, connectivity). Flash and boot Yocto-generated images (e.g., qcom-multimedia-image, real-time variants) on hardware evaluation kits. Validate key subsystems: bootloader, kernel, drivers (Wi-Fi, Bluetooth, camera, display), power management, real-time functionality. Build and maintain automation frameworks: kernel image deployment, logging, instrumentation, hardware reset, network interfaces. Track and report software/hardware defects; work with cross-functional engineering teams to triage and resolve issues. Analyze system logs and trace output; measure boot time, latency, resource utilization and performance metrics. Maintain test infrastructure and CI pipelines, ensuring reproducibility and efficiency.
Contribute to documentation: test reports, acceptance criteria, qualification artifacts, and release summaries.
Mandatory Skills
Strong C/C++ and scripting (Python, Bash). Yocto & BitBake workflows; experience building BSPs and flashing images on development boards. Linux kernel internals, drivers, real-time patches. Experience with Qualcomm SoCs or similar ARM platforms; hands-on knowledge of QCS/QRB platforms and multimedia pipelines. Hardware bring-up, serial consoles, bootloader debugging (U-Boot). GitLab/Jenkins/Buildbot, hardware-triggered automation. Performance analysis and profiling tools; ability to measure boot time, trace latency, and optimize kernel subsystems.
Nice-to-Have Skills
Experience debugging multimedia subsystems (camera, display, audio, video pipelines). Familiarity with Debian/Ubuntu-based host build environments. Knowledge of Qualcomm-specific test tools and manifest workflows (e.g., meta-qcom-realtime, qcom-manifest). Prior work in IoT/robotics, real-time or safety-critical embedded platforms. Exposure to certification/regulatory testing (e.g., FCC, Bluetooth SIG, Wi-Fi Alliance).
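Boot-time measurement, one of the mandatory skills above, often starts from parsing kernel timestamps out of dmesg-style logs. A minimal illustrative sketch (the log lines and milestone pattern below are made up for the example):

```python
import re

# Fabricated dmesg-style excerpt for demonstration purposes only
DMESG = """\
[    0.000000] Booting Linux on physical CPU 0x0
[    1.234567] usbcore: registered new interface driver usbfs
[    4.521000] systemd[1]: Detected architecture arm64.
"""

def boot_milestone_seconds(log: str, pattern: str) -> float:
    """Return the kernel timestamp of the first log line matching pattern."""
    for line in log.splitlines():
        m = re.match(r"\[\s*([\d.]+)\]\s+(.*)", line)
        if m and re.search(pattern, m.group(2)):
            return float(m.group(1))
    raise ValueError(f"no line matching {pattern!r}")

# Time from power-on to userspace init starting, per the kernel clock
print(boot_milestone_seconds(DMESG, r"systemd\[1\]"))  # 4.521
```

In a real automation framework the log would be captured over a serial console after a hardware-triggered reset, and milestones like "first driver probe" or "init started" would be tracked per build to catch boot-time regressions in CI.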
Posted 6 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description: About Us At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us! Global Business Services Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence, and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services. Job Description Development Build FinTech solutions for banking, trading, and finance across all segments of the global market. 
These include award-winning web & mobile applications, data science and analytics, complex event processing, cloud solutions, low-latency applications, and responsive experiences. Work with global development teams and business partners across the USA, UK, Europe and Asia Pacific. Capture and translate business/functional requirements for banking, trading and markets. Good problem-solving and quantitative skills. Design and architect solutions based on requirements or on your innovative ideas. Develop software in agile and iterative cycles using continuous-improvement tools and techniques. Test software using test-driven development and embedded QA teams. Identify, escalate, and resolve incidents and issues. Participate in innovation programs, developer forums, Hackathons. Good written and verbal communication skills with a positive attitude. We work on cutting-edge technologies like AI, Machine Learning, Hadoop, Python, Scala, Pega, .NET, Java, Angular, React, Cassandra, memSQL, Tableau, ETL, among several others.
Business Analysis
Change enabler in an organizational context, defining needs and recommending solutions that deliver value to clients.
Good problem-solving and quantitative skills. Work closely with the business to capture requirements. Analyze business and functional requirements provided by the business. Document functional and operational impacts to associates and customers. Assist in the completion and documentation of designs (functional and technical). Provide expert knowledge on assigned application(s), functionality and associate/customer processes. Develop expert knowledge of business processes, rules, and regulations. Document the interaction of data, functions and business processes for selected functionality. Prepare the analysis schedule. Conduct feasibility studies of the current system. Track issues and reporting. Good written and verbal communication skills with a positive attitude. Opportunity to utilize tools like Microsoft Visio (diagramming) and cutting-edge change management / wireframing tools (mockups).
Testing
Functional & technical specialist in discovering the unexpected and bringing confidence in software. Good problem-solving and quantitative skills. Verify that the application meets all functional business requirements. Ensure that all component changes are tested against the areas impacted and that solutions work from an integration/operations perspective. Include the scope, test cycles, risks, regression testing approach, environment requirements, data requirements, metrics, and work plan. Develop test conditions and build test scripts based on functional design specifications and the test approach. Confirm the architectural stability of the system with a focus on functional, load testing, fail-over/recoverability and operational testing.
In some systems, also monitor, measure, and optimize individual and combined hardware and/or software components for optimal performance. Perform unit testing and component integration testing. Design and develop the technical test approach, load tests, fail-over and recoverability tests, and operational tests. Document and execute test scripts and report execution progress. Identify and escalate blockers/concerns/issues to the project management team early. Ability to work as a team player in an agile way of working. Serve as a quality gatekeeper for application releases. Opportunity to validate applications using the latest tools & technologies like Selenium, Appium, SpecFlow, Lettuce, Cucumber, UFT, qTest, LoadRunner, SOA Tester, TOSCA, Test Complete, Java, Python, VBScript & JIRA.
Infrastructure Operations
Infrastructure & environment control specialists supporting all streams. Support the efforts of development teams through development and testing environment creation, hardware and software configuration, build and migration coordination, and technical support. Handle escalated production support issues. Configure software for supporting specific developer applications. Coordinate the migration of configuration changes across environments. Migrate code from component integration test to systems integration test. Install and configure server applications. Track issues. Good written and verbal communication skills with a positive attitude. Opportunity to handle SVN, Citrix, Informatica, Autosys, SQL servers, Coral 8, TeamCity, Jenkins, AS 400, Unix, Oracle.
Production Support
Front face of the IT department and an all-rounder in support. Provide application support to the production environment. Maintain detailed support processes and an operations framework to ensure application availability 24/7. Production control to ensure applications are available and running at peak efficiency. All aspects required to process batch production within application services.
Proactively monitor application availability, performance, response time, exceptions, faults and failures using a range of proprietary as well as third-party monitoring tools. Provide usage trend analysis and status reports. Be part of incident triages; provide relevant information and proper communication to stakeholders. Good written and verbal communication skills with a positive attitude. Opportunity to monitor & control using Geneos, Citrix, Sybase Central, SQL Server, Coral 8, Tibco, Quartz, BOB job monitor, Appwatch, PEGA.
Cyber Security Defense and Assessment
Front face for Cyber Security events and incidents and an all-rounder in technical & operational support. Regular analysis of Cyber Security information. Replying to general Cyber Security queries. Assisting in Cyber Security investigations. Supporting Identity and Access Management. Identifying vulnerabilities in Cyber Security which require remediation. Recording and responding to Cyber Security events and incidents in a timely fashion. Reviewing, monitoring and maintaining Cyber Security controls and their implementation. Auditing of systems, services and processes against policy, best practice and standards in a methodical and clearly documented fashion. Opportunity to work on different Cyber Security tools, like DLP products, data classification tools, Splunk, and SIEM tools, e.g.
ArcSight, etc.
Cyber Security Technology
Responsible for defining, documenting, and publicizing the strategic roadmap for various cyber security technology stacks at Bank of America. Contributing to the development of innovative software capabilities to secure Bank products using DevSecOps pipelines and automation. Participating in rapid prototyping and product security software research and development projects. Innovating new software-based capabilities to secure software containers from internal and external cyber-attacks, able to detect, respond, and recover without human intervention or mission degradation. Participating in the development of algorithms, interfaces and designs for cyber-secure and resilient software systems. Performing collaborative design & development with other engineers and suppliers. Joining a team performing cyber risk assessments and developing risk mitigation plans. Performing analysis of systems and components for risks, vulnerabilities, and threats. Supporting incident response and mitigation. Monitoring networks for security breaches and investigating violations when they occur. Developing security standards and best practices. Assisting with maintaining a strong cybersecurity posture. Assisting in developing new policies, design processes, and procedures, and developing technical designs to secure the development environment and trainer systems. Assessing system vulnerabilities, implementing risk mitigation strategies, validating secure systems, and testing security products and systems to detect security weaknesses. We work on cutting-edge technologies like Machine Learning, Hadoop, Python, Scala, Pega, .NET, Java, Angular, React, Cassandra, Tableau, ETL, among several others, with exposure to web application security and secure platform development.
Job Locations: Mumbai, Chennai, Gurugram, Gandhinagar (GIFT), Hyderabad.
Campus Hiring Eligibility for students is as listed below:
✓ Final year graduates from the Class of 2025 ONLY
✓ Must have a major specialization in Computer Science & Information Technology ONLY
✓ Must have scored 60% in the last semester OR a CGPA of 6 on a scale of 10 in the last semester
✓ No active backlogs in any of the current or prior semesters
Campus Job Description - Tech
✓ Students should be willing to join any of the roles/skills/segments as per company requirement
✓ Students should be willing to work in any shifts/night shifts as per company requirement
✓ Students should be willing to work in any locations, namely Mumbai, Chennai, Gurugram, Gandhinagar (GIFT), Hyderabad, as per company requirement
Posted 6 days ago
5.0 years
0 Lacs
Noida
On-site
Engineering at Innovaccer With every line of code, we accelerate our customers' success, turning complex challenges into innovative solutions. Collaboratively, we transform each data point into valuable insights. Join us and be part of a team that’s turning the vision of better healthcare into reality—one line of code at a time. Together, we’re shaping the future and making a meaningful impact on the world. About the Role Technology that once promised to simplify patient care has, in many cases, created more complexity. At Innovaccer, we tackle this challenge by leveraging the vast amount of healthcare data available and replacing long-standing issues with intelligent, data-driven solutions. Data is the foundation of our innovation. We are looking for a Senior AI Engineer who understands healthcare data and can build algorithms that personalize treatments based on a patient’s clinical and behavioral history. This role will help define and build the next generation of predictive analytics tools in healthcare. A Day in the Life Design and build scalable AI platform architecture to support ML development, agentic frameworks, and robust self-serve AI pipelines. Develop agentic frameworks and a catalog of AI agents tailored for healthcare use cases. Design and deploy high-performance, low-latency AI applications. Build and optimize ML/DL models, including generative models like Transformers and GANs. Construct and manage data ingestion and transformation pipelines for scalable AI solutions. Conduct experiments, statistical analysis, and derive insights to guide development. Collaborate with data scientists, engineers, product managers, and business stakeholders to translate AI innovations into real-world applications. Partner with business leaders and clients to understand pain points and co-create scalable AI-driven solutions. Experience with Docker, Kubernetes, AWS/Azure, Snowflake, and healthcare data systems. 
Preferred Skills
- Proficient in Python for building scalable, high-performance AI applications
- Experience with reinforcement learning and multi-agent systems
- LLM optimization and deployment at scale
- Familiarity with healthcare data and real-world AI use cases
Requirements (What You Need)
- Master’s in Computer Science, Engineering, or a related field
- 5+ years of software engineering experience with strong API development skills
- 3+ years of experience in data science, including at least 1 year building generative AI pipelines, agents, and RAG systems
- Strong Python programming skills, particularly in enterprise application development and optimization
- Experience with: LLMs, prompt engineering, and fine-tuning SLMs; frameworks like LangChain, CrewAI, or Autogen (experience with at least one is required); vector databases (e.g., FAISS, ChromaDB); embedding models and Retrieval-Augmented Generation (RAG) design
- Familiarity with at least one ML platform (Databricks, Azure ML, SageMaker)
Benefits (Here’s What We Offer)
- Generous Leave Policy: up to 40 days of leave annually
- Parental Leave: one of the industry’s best parental leave policies
- Sabbatical Leave: take time off for upskilling, research, or personal pursuits
- Health Insurance: comprehensive coverage for you and your family
- Pet-Friendly Office*: bring your furry friends to our Noida office
- Creche Facility for Children*: on-site care at our India offices
*Pet-friendly and creche facilities are available at select locations only (e.g., Noida for pets)
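The RAG design listed in the requirements boils down to "retrieve the most relevant context, then generate with it". The retrieval half can be sketched with a toy bag-of-words scorer; this is a deliberately simplified stand-in, since production systems use embedding models and a vector database such as FAISS or ChromaDB, and the documents below are invented for the example:

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query (toy retriever)."""
    q = Counter(query.lower().split())
    scored = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return scored[:k]

docs = [
    "patient glucose readings and insulin dosage history",
    "quarterly revenue report for the sales team",
]
print(retrieve("insulin dosage for patient", docs))  # clinical doc ranks first
```

In a full RAG pipeline, the retrieved passages would then be placed into the prompt of an LLM so its answer is grounded in the patient's actual records rather than model memory.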
Posted 6 days ago
0 years
4 - 15 Lacs
Ahmedabad
Remote
Key Responsibilities
- Design and build microservices using Java (Spring Boot), following well-established patterns and practices.
- Design and implement real-time features using WebSocket for low-latency, bidirectional communication.
- Implement background and scheduled tasks using ScheduledExecutorService for precise, programmatic control.
- Apply at least one microservice design pattern (e.g., Circuit Breaker, CQRS, Saga, API Gateway) effectively as part of the system architecture.
- Implement clean service boundaries, well-defined APIs, and asynchronous communication (REST, Kafka, etc.).
- Contribute to decisions around service granularity, data consistency, and fault tolerance.
- Write maintainable, testable, and secure code that meets business and technical requirements.
- Participate in code reviews, design discussions, and production troubleshooting.
- Collaborate with DevOps to ensure smooth deployment, monitoring, and observability of services.
- Mentor junior engineers and share technical insights within the team.

Skills & Experience Required
- Strong programming skills in Java (11 or higher) and experience with Spring Boot and Spring Cloud Gateway for building RESTful microservices.
- Practical knowledge and application of at least one microservices design pattern (e.g., Circuit Breaker, API Gateway, Saga, CQRS, Service Mesh).
- Hands-on experience with WebSocket in Java (preferably using Spring WebSocket or Netty).
- Proficiency with ScheduledExecutorService for scheduling and managing background jobs.
- Experience with event-driven systems using Kafka or similar messaging platforms.
- Proficiency in working with an RDBMS (PostgreSQL/MS SQL) and, optionally, NoSQL stores (MongoDB/Redis).
- Familiarity with containerized environments using Docker, with working knowledge of Kubernetes.
- Understanding of authentication and authorization principles (OAuth2, JWT).
- Hands-on experience with CI/CD pipelines and monitoring/logging tools such as Prometheus, Grafana, ELK, etc.
- Strong problem-solving mindset and experience in troubleshooting distributed systems.

Job Type: Full-time
Pay: ₹406,693.54 - ₹1,558,951.04 per year
Benefits:
- Health insurance
- Provident Fund
- Work from home
Work Location: In person
Speak with the employer: +91 7990654574
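The Circuit Breaker pattern named in this listing is normally supplied by a library (in the Spring stack, typically Resilience4j); what follows is only a language-agnostic sketch of the state machine, written in Python for brevity. The threshold and timeout values are illustrative, not taken from the posting.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: CLOSED -> OPEN after repeated failures,
    then HALF-OPEN after a recovery timeout to probe the dependency."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures  # failures tolerated before opening
        self.reset_after = reset_after    # seconds before a half-open probe
        self.failures = 0
        self.opened_at = None             # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                # OPEN: fail fast instead of hammering a broken dependency.
                raise RuntimeError("circuit open: failing fast")
            # HALF-OPEN: the timeout elapsed, so let one probe call through.
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip (or re-trip) the breaker
            raise
        # Success closes the circuit and clears the failure count.
        self.failures = 0
        self.opened_at = None
        return result
```

In a real Spring Boot service the same behavior usually comes from a Resilience4j annotation or decorator on the remote call rather than hand-rolled code; the point of the sketch is the CLOSED/OPEN/HALF-OPEN transitions.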
Posted 6 days ago
0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
About Team
The Myntra Data Science team is at the forefront of innovation, delivering cutting-edge solutions that drive significant revenue and enhance customer experiences across various touchpoints. Every quarter, our models impact millions of customers, leveraging real-time, near-real-time, and offline solutions with diverse latency requirements. These models are built on massive datasets, allowing for deep learning and growth opportunities within a rapidly expanding organization. By joining our team, you'll gain hands-on experience with an extensive e-commerce platform, learning to develop models that handle millions of requests per second with sub-second latency. We take pride in deploying solutions that not only utilize state-of-the-art machine learning techniques (graph neural networks, diffusion models, transformers, representation learning, optimization methods, and Bayesian modeling) but also contribute to the research community with multiple peer-reviewed publications.

Roles and Responsibilities
- Design, develop, and deploy advanced machine learning models and algorithms for forecasting, operations research, and time-series applications.
- Build and implement scalable solutions for supply chain optimization, demand forecasting, pricing, and trend prediction.
- Develop efficient forecasting models leveraging traditional and deep-learning-based time-series analysis techniques.
- Utilize optimization techniques for large-scale nonlinear and integer programming problems, with hands-on use of solvers such as CPLEX, Gurobi, COIN-OR, or similar tools.
- Collaborate with Product, Engineering, and Business teams to understand challenges and integrate ML solutions effectively.
- Maintain and optimize machine learning pipelines, including data cleaning, feature extraction, and model training.
- Implement CI/CD pipelines for automated testing, deployment, and integration of machine learning models.
- Work closely with the Data Platforms team to collect, process, and analyze data crucial for model development.
- Stay up to date with the latest advancements in machine learning, forecasting, and optimization techniques, sharing insights with the team.

Qualifications & Experience
- Bachelor's degree in Statistics, Operations Research, Mathematics, Computer Science, or a related field.
- Strong foundation in data structures, algorithms, and efficient processing of large datasets.
- Proficiency in Python for data science and machine learning applications.
- Experience in developing and deploying forecasting and time-series models.
- Knowledge of ML frameworks such as TensorFlow, PyTorch, and scikit-learn.
- Hands-on experience with optimization solvers and algorithms for supply chain and logistics problems.
- Strong problem-solving skills with a focus on applying OR techniques to real-world business challenges.
- Research publications in machine learning, forecasting, or operations research are a plus.
- Familiarity with cloud computing services (AWS, Google Cloud) and distributed systems.
- Strong communication skills with the ability to work independently and collaboratively in a team environment.

Nice to Have
- Experience with Generative AI and Large Language Models (LLMs).
- Knowledge of ML orchestration tools such as Airflow, Kubeflow, and MLflow.
- Exposure to NLP and Computer Vision applications in an e-commerce setting.
- Understanding of ethical considerations in AI, including bias, fairness, and privacy.

Exceptional candidates are encouraged to apply even if they don't meet every listed qualification. We value potential and a strong willingness to learn.
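Production demand forecasting at this scale would use the deep-learning and solver-based tools the listing names; as a point of reference, the classic simple-exponential-smoothing baseline for time series follows the recursion `level_t = alpha * y_t + (1 - alpha) * level_{t-1}`. A minimal, hypothetical sketch with made-up demand numbers:

```python
def ses_forecast(series: list[float], alpha: float = 0.3) -> float:
    """Simple exponential smoothing: fold the series into a running level,
    then use the final level as the one-step-ahead forecast."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

# Toy weekly demand; with alpha=0.5 the forecast works out to 135.25.
demand = [120, 130, 125, 140, 138]
print(round(ses_forecast(demand, alpha=0.5), 2))  # → 135.25
```

Higher `alpha` weights recent observations more heavily; tuning it (or moving to Holt-Winters, ARIMA, or a learned model) is where the real forecasting work in this role begins.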
Posted 6 days ago
0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description
Role Proficiency: Act creatively to develop applications and select appropriate technical options, optimizing application development, maintenance, and performance by employing design patterns and reusing proven solutions; account for others' developmental activities.

Outcomes
- Interpret the application/feature/component design and develop it in accordance with the specifications.
- Code, debug, test, document, and communicate product/component/feature development stages.
- Validate results with user representatives; integrate and commission the overall solution.
- Select appropriate technical options for development, such as reusing, improving, or reconfiguring existing components, or creating own solutions.
- Optimize efficiency, cost, and quality.
- Influence and improve customer satisfaction.
- Set FAST goals for self/team; provide feedback on team members' FAST goals.

Measures of Outcomes
- Adherence to engineering processes and standards (coding standards)
- Adherence to project schedule/timelines
- Number of technical issues uncovered during project execution
- Number of defects in the code
- Number of defects post delivery
- Number of non-compliance issues
- On-time completion of mandatory compliance trainings

Outputs Expected
- Code: Write code as per design; follow coding standards, templates, and checklists; review code for team and peers.
- Documentation: Create/review templates, checklists, guidelines, and standards for design/process/development; create/review deliverable documents, including design documentation, requirements, and test cases/results.
- Configure: Define and govern the configuration management plan; ensure compliance from the team.
- Test: Review and create unit test cases, scenarios, and execution; review the test plan created by the testing team and provide clarifications to the testing team.
- Domain Relevance: Advise software developers on the design and development of features and components with a deep understanding of the business problem being addressed for the client; learn more about the customer domain, identifying opportunities to add value for customers; complete relevant domain certifications.
- Manage Project: Manage delivery of modules and/or manage user stories.
- Manage Defects: Perform defect RCA and mitigation; identify defect trends and take proactive measures to improve quality.
- Estimate: Create and provide input for effort estimation for projects.
- Manage Knowledge: Consume and contribute to project-related documents, SharePoint libraries, and client universities; review the reusable documents created by the team.
- Release: Execute and monitor the release process.
- Design: Contribute to the creation of design (HLD, LLD, SAD)/architecture for applications/features/business components/data models.
- Interface with Customer: Clarify requirements and provide guidance to the development team; present design options to customers; conduct product demos.
- Manage Team: Set FAST goals and provide feedback; understand the aspirations of team members and provide guidance, opportunities, etc.; ensure the team is engaged in the project.
- Certifications: Obtain relevant domain/technology certifications.

Skill Examples
- Explain and communicate the design/development to the customer.
- Perform and evaluate test results against product specifications.
- Break down complex problems into logical components.
- Develop user interfaces and business software components; use data models.
- Estimate the time, effort, and resources required for developing/debugging features/components.
- Perform and evaluate tests in the customer or target environment.
- Make quick decisions on technical/project-related challenges.
- Manage and mentor a team, and handle people-related issues within it.
- Maintain high motivation levels and positive dynamics in the team.
- Interface with other teams, designers, and other parallel practices.
- Set goals for self and team; provide feedback to team members.
- Create and articulate impactful technical presentations.
- Follow a high level of business etiquette in emails and other business communication.
- Drive conference calls with customers, addressing customer questions.
- Proactively ask for and offer help.
- Work under pressure; determine dependencies and risks; facilitate planning; handle multiple tasks.
- Build confidence with customers by meeting deliverables on time with quality.
- Make appropriate utilization of software and hardware.
- Strong analytical and problem-solving abilities.

Knowledge Examples
- Appropriate software programs/modules
- Functional and technical design
- Programming languages: proficient in multiple skill clusters
- DBMS, operating systems, and software platforms
- Software Development Life Cycle
- Agile methods: Scrum or Kanban
- Integrated development environments (IDEs)
- Rapid application development (RAD)
- Modelling technologies and languages
- Interface definition languages (IDL)
- Knowledge of the customer domain and deep understanding of the sub-domain where the problem is solved

Additional Comments
- Design, build, and maintain robust, reactive REST APIs using Spring WebFlux and Spring Boot.
- Develop and optimize microservices that handle high throughput and low latency.
- Write clean, testable, maintainable code in Java.
- Integrate with MongoDB for CRUD operations, aggregation pipelines, and indexing strategies.
- Apply best practices in API security, versioning, error handling, and documentation.
- Collaborate with front-end developers, DevOps, QA, and product teams.
- Troubleshoot and debug production issues, identify root causes, and deploy fixes quickly.

Required Skills & Experience
- Strong programming experience in Java 17+.
- Proficiency in Spring Boot, Spring WebFlux, and Spring MVC.
- Solid understanding of reactive programming principles.
- Proven experience designing and implementing microservices architecture.
- Hands-on expertise with MongoDB, including schema design and performance tuning.
- Experience with RESTful API design and HTTP fundamentals.
- Working knowledge of build tools such as Maven or Gradle.
- Good grasp of CI/CD pipelines and deployment strategies.

Skills: Spring WebFlux, Spring Boot, Kafka
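The MongoDB aggregation pipelines this role works with chain stages such as `$match` (filter) and `$group` (accumulate per key); the real work happens server-side in MongoDB. As a rough conceptual analogue only, here is a hypothetical in-memory version of a two-stage match-then-group-sum pipeline, in Python for brevity, with made-up order data:

```python
from itertools import groupby
from operator import itemgetter

def aggregate(docs, match, group_key, sum_field):
    # $match stage: keep only documents satisfying the predicate.
    filtered = [d for d in docs if match(d)]
    # $group stage: sum one field per group key.
    # itertools.groupby only merges adjacent runs, so sort by the key first.
    filtered.sort(key=itemgetter(group_key))
    return {
        key: sum(d[sum_field] for d in grp)
        for key, grp in groupby(filtered, key=itemgetter(group_key))
    }

orders = [
    {"status": "paid", "region": "south", "amount": 10},
    {"status": "paid", "region": "north", "amount": 25},
    {"status": "open", "region": "north", "amount": 99},
    {"status": "paid", "region": "south", "amount": 5},
]
totals = aggregate(orders, lambda d: d["status"] == "paid", "region", "amount")
# totals == {"north": 25, "south": 15}
```

The equivalent MongoDB pipeline would be `[{"$match": {"status": "paid"}}, {"$group": {"_id": "$region", "total": {"$sum": "$amount"}}}]`, executed by the database with index support rather than in application memory.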
Posted 6 days ago