5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Work Schedule Standard (Mon-Fri) Environmental Conditions Office Thermo Fisher Scientific Inc. is the world leader in serving science, with an annual revenue of approximately $40 billion. Our Mission is to enable our customers to make the world healthier, cleaner and safer. Whether our customers are accelerating life sciences research, solving sophisticated analytical challenges, growing productivity in their laboratories, improving patient health through diagnostics or the development and manufacture of life-changing therapies, we are here to support them. Our distributed team of more than 130,000 colleagues delivers an unrivalled combination of innovative technologies, purchasing convenience and pharmaceutical services through our industry-leading brands, including Thermo Scientific, Applied Biosystems, Invitrogen, Fisher Scientific, Unity Lab Services, Patheon and PPD. For more information, please visit www.thermofisher.com. About The Role We are seeking a versatile and highly skilled Software Development Test Engineer with 2–5 years of hands-on experience across DevOps, GitHub, Docker, Helm, Kubernetes (K8s) and scripting. This is a great opportunity to work on innovative systems involving AI/ML model serving (e.g., NVIDIA Triton), high-performance computing, and distributed services. You will be responsible for testing and maintaining robust software solutions that work with hardware, cloud infrastructure, and services running at scale. Key Responsibilities Design, develop and maintain test environments for AI deployment solutions. Integrate and optimize model inference using the NVIDIA Triton Inference Server. Work closely with hardware systems such as NVIDIA Orin/Jetson running Linux to diagnose, fix, and optimize hardware-software interactions. Collaborate with multi-functional teams to define system architecture and deliver end-to-end solutions. Write clean, maintainable, and well-tested code following standard methodologies. Continuously monitor and improve system performance, reliability, and scalability. Required Skills And Qualifications 2–5 years of experience in DevOps, GitHub, Docker, Helm, Kubernetes (K8s) and scripting. Experience in test automation, TDD/BDD and manual testing. ISTQB certification will be a plus. Experience developing and deploying microservices, particularly with gRPC. Solid understanding of CI/CD pipelines and DevOps standard methodologies. Comfortable working in Linux-based environments and interfacing with hardware components. Strong debugging, performance tuning, and hardware troubleshooting skills.
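Purely as an illustration of the Triton-focused test work described in this posting, here is a minimal pytest smoke test against a deployed NVIDIA Triton Inference Server. It assumes Triton's standard KServe v2 HTTP health endpoints; the server URL and model name are placeholders.

```python
# Minimal smoke test for a Triton Inference Server deployment (pytest).
# Assumes the server is reachable at TRITON_URL and exposes the standard
# KServe v2 HTTP endpoints; the model name "resnet50" is only an example.
import os

import requests

TRITON_URL = os.environ.get("TRITON_URL", "http://localhost:8000")
MODEL_NAME = os.environ.get("TRITON_MODEL", "resnet50")  # placeholder model


def test_server_is_live_and_ready():
    # /v2/health/live -> process is up; /v2/health/ready -> able to serve.
    for path in ("/v2/health/live", "/v2/health/ready"):
        resp = requests.get(f"{TRITON_URL}{path}", timeout=5)
        assert resp.status_code == 200, f"{path} returned {resp.status_code}"


def test_model_is_loaded():
    # A model must report ready before inference traffic is routed to it.
    resp = requests.get(f"{TRITON_URL}/v2/models/{MODEL_NAME}/ready", timeout=5)
    assert resp.status_code == 200, f"{MODEL_NAME} is not ready"
```

Checks like these typically run as a post-deployment gate in the CI/CD pipeline before traffic is routed to a new release.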
Posted 6 days ago
5.0 years
0 Lacs
India
On-site
Responsibilities: Build multi-agent systems capable of reasoning, tool use, and autonomous action Implement Model Context Protocol (MCP) strategies to manage complex, multi-source context Integrate third-party APIs (e.g., Crunchbase, PitchBook, CB Insights), scraping APIs, and data aggregators Develop browser-based agents enhanced with computer vision for dynamic research, scraping, and web interaction Optimize inference pipelines, task planning, and system performance Collaborate on architecture, prototyping, and iterative development Experiment with prompt chaining, tool calling, embeddings, and vector search Requirements 5+ years of experience in software engineering or AI/ML development Strong Python skills and experience with LangChain, LlamaIndex, or agentic frameworks Proven experience with multi-agent systems, tool calling, or task planning agents Familiarity with Model Context Protocol (MCP), Retrieval-Augmented Generation (RAG), and multi-modal context handling Experience with browser automation frameworks (e.g., Playwright, Puppeteer, Selenium) Cloud deployment and systems engineering experience (GCP, AWS, etc.) Self-starter attitude with strong product sense and iteration speed Bonus Points Experience with AutoGen, CrewAI, OpenAgents, or ReAct-style frameworks Background in building AI systems that blend structured and unstructured data Experience working in a fast-paced startup environment Previous startup or technical founding team experience
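As a rough sketch of the tool-calling agent loop this role revolves around (not tied to LangChain, LlamaIndex, or any specific framework), the snippet below wires a stubbed model call to a single placeholder research tool; llm() and search_company() are stand-ins for a real LLM API and a real data-provider integration.

```python
# Illustrative sketch of a minimal tool-calling agent loop. The llm() function
# is a stand-in for a real model call; tools are plain Python callables.
import json
from typing import Callable, Dict


def search_company(name: str) -> str:
    """Placeholder for a data-provider lookup (e.g., a Crunchbase-style API)."""
    return json.dumps({"name": name, "funding_rounds": 3, "last_round": "Series B"})


TOOLS: Dict[str, Callable[[str], str]] = {"search_company": search_company}


def llm(prompt: str) -> dict:
    """Stub model call. A real agent would send `prompt` to an LLM and parse a
    structured tool call or final answer out of its response."""
    return {"tool": "search_company", "argument": "Acme AI", "final": None}


def run_agent(task: str, max_steps: int = 5) -> str:
    scratchpad = [f"Task: {task}"]
    last_observation = ""
    for _ in range(max_steps):
        decision = llm("\n".join(scratchpad))
        if decision.get("final"):
            return decision["final"]
        tool = TOOLS[decision["tool"]]          # dispatch the requested tool
        last_observation = tool(decision["argument"])
        scratchpad.append(f"Observation: {last_observation}")
    return last_observation or "No answer within step budget."


if __name__ == "__main__":
    print(run_agent("Summarise recent funding for Acme AI"))
```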
Posted 6 days ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Summary At Novartis, we are reimagining medicine by harnessing the power of data and AI. As a Senior Architect – AI Products supporting our Commercial function, you will drive the architectural strategy that enables seamless integration of data and AI products across omnichannel engagement, customer analytics, field operations, and real-world insights. You will work across commercial business domains, data platforms, and AI product teams to design scalable, interoperable, and compliant solutions that maximize the impact of data and advanced analytics on how we engage with healthcare professionals and patients. About The Role Position Title: Assoc. Dir. DDIT US&I AI Architect (Commercial) Location – Hyd-India Hybrid Your Responsibilities Include But Are Not Limited To: Commercial Architecture Strategy: Define and drive the reference architecture for commercial data and AI products, ensuring alignment with enterprise standards and business priorities. Cross-Product Integration: Architect how data products (e.g., HCP 360, engagement data platforms, real-world data assets) connect with AI products (e.g., field force recommendations, predictive models, generative AI copilots) and downstream tools. Modular, Scalable Design: Ensure architecture promotes reuse, scalability, and interoperability across multiple markets, brands, and data domains within the commercial landscape. Stakeholder Alignment: Partner with commercial product managers, data science teams, platform engineering, and global/local stakeholders to guide solution design, delivery, and lifecycle evolution. Data & AI Lifecycle Enablement: Support the full lifecycle of data and AI—from ingestion and transformation to model training, inference, and monitoring—within compliant and secure environments. Governance & Compliance: Ensure architecture aligns with GxP, data privacy, and commercial compliance requirements (e.g., consent management, data traceability). Innovation & Optimization: Recommend architectural improvements, modern technologies, and integration patterns to support personalization, omnichannel engagement, segmentation, targeting, and performance analytics. What You’ll Bring To The Role: Proven ability to lead cross-functional architecture efforts across business, data, and technology teams. Good understanding of security, compliance, and privacy regulations in a commercial pharma setting. Experience with pharmaceutical commercial ecosystems and data (e.g., IQVIA, Veeva, Symphony). Familiarity with customer data platforms (CDPs), identity resolution, and marketing automation tools. Desirable Requirements: Bachelor's or master’s degree in Computer Science, Engineering, Data Science, or a related field. 10+ years of experience in enterprise or solution architecture, with significant experience in commercial functions (preferably in pharma or life sciences).
Strong background in data platforms, pipelines, and governance (e.g., Snowflake, Databricks, CDP, Salesforce integration). Hands-on experience integrating solutions across Martech, CRM, and omnichannel systems. Strong knowledge of AI/ML architectures, particularly those supporting commercial use cases (recommendation engines, predictive analytics, NLP, LLMs). Exposure to GenAI applications in commercial (e.g., content generation, intelligent assistants). Understanding of global-to-local deployment patterns and data sharing requirements Commitment To Diversity & Inclusion: Novartis embraces diversity, equal opportunity, and inclusion. We are committed to building diverse teams, representative of the patients and communities we serve, and we strive to create an inclusive workplace that cultivates bold innovation through collaboration and empowers our people to unleash their full potential. Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients’ lives. Ready to create a brighter future together? https://www.novartis.com/about/strategy/people-and-culture Join our Novartis Network: Not the right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up: https://talentnetwork.novartis.com/network Benefits and Rewards: Read our handbook to learn about all the ways we’ll help you thrive personally and professionally: https://www.novartis.com/careers/benefits-rewards
Posted 6 days ago
14.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Hello! You've landed on this page, which means you're interested in working with us. Let's take a sneak peek at what it's like to work at Innovaccer. Product at Innovaccer Our product team is a collaborative group of talented professionals who transform ideas into real-life solutions. They guide the creation, development, and launch of new products, ensuring alignment with our overall business strategy. Additionally, we are leveraging AI across all our solutions, revolutionizing healthcare and shaping the future to make a meaningful impact on the world. You'll have the opportunity to build the foundation for intelligence in healthcare, shaping how LLMs and AI agents power clinical decision-making, operations, and patient engagement — safely, responsibly, and at scale. Join a team that values speed, openness, and ownership — and be part of the company that's changing healthcare through data and AI. About The Role We are looking for a visionary Director of Product Management – AI Platform to define, shape, and drive the strategy for Innovaccer's AI Platform, a foundational layer that will power intelligent applications, LLM-based agents, and precision workflows across the healthcare ecosystem. You will lead the development of modular infrastructure and frameworks that enable safe, scalable, and context-aware AI, including SLMs, LLMs, and autonomous agents. Your leadership will guide how we abstract complexity, orchestrate models, and ensure responsible deployment across real-world clinical and operational settings. You'll sit at the intersection of data, ML, engineering, and healthcare delivery, translating cutting-edge AI into actionable, productized value. A Day in the Life Own the Vision Define and communicate the long-term product vision and roadmap for the AI Platform Align product strategy with Innovaccer's AI-first approach and broader business goals Drive clarity around the role of LLMs, agent frameworks, and contextual AI within Innovaccer's platform ecosystem Build the Foundation Architect and deliver core platform capabilities such as: Model Context Protocol layers for injecting structured healthcare data into AI workflows Agent frameworks with planning, memory, and tool integration Evaluation pipelines for safety, fairness, and domain alignment Model orchestration, fine-tuning, and inference APIs Lead cross-functional delivery with engineering, MLOps, and design teams to bring products from concept to scale Lead in the AI Community Represent Innovaccer in industry conversations on LLMs, RAG, agent safety, and applied healthcare AI Stay ahead of advancements in GenAI, healthcare-specific LLMs, multimodal models, and regulatory AI frameworks (e.g., HIPAA, FDA, EU AI Act) Champion responsible AI and standardization efforts through internal policy and external partnership Enable Internal & External Builders Deliver reusable toolkits, SDKs, and APIs to enable internal teams and customers to build healthcare-native AI products Partner with clinical informatics, data science, and platform engineering to ensure contextual accuracy and safety, and collaborate with customer-facing teams to drive adoption and value realization What You Need 14+ years of product management experience, with 5+ years in AI/ML platform or data infrastructure products Proven experience building and scaling platforms for LLMs, SLMs, AI agents, or multi-modal AI applications Deep understanding of agent architectures and orchestration, Retrieval-Augmented Generation (RAG), vector stores, embeddings, prompt engineering, and model
governance and evaluation Hands-on familiarity with platforms like LangChain, Hugging Face, Pinecone, Weaviate, MLflow, and cloud AI stacks (AWS, GCP, Azure) Ability to communicate complex concepts to both technical and business audiences Experience working with healthcare data (FHIR, HL7, CCD, EHRs, claims, etc.) Knowledge of AI safety, explainability, and compliance in regulated environments Published thought leadership or contributions to AI communities, open-source, or technical working groups We offer competitive benefits to set you up for success in and outside of work. Here's What We Offer Generous Leave Benefits: Enjoy generous leave benefits of up to 40 days Parental Leave: Experience one of the industry's best parental leave policies to spend time with your new addition Sabbatical Leave Policy: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered Health Insurance: We offer health benefits and insurance to you and your family for medically related expenses related to illness, disease, or injury Pet-Friendly Office*: Spend more time with your treasured friends, even when you're away from home. Bring your furry friends with you to the office and let your colleagues become their friends, too. *Noida office only Creche Facility for children*: Say goodbye to worries and hello to a convenient and reliable creche facility that puts your child's well-being first. *India offices Where And How We Work Our Noida office is situated in a posh techspace, equipped with various amenities to support our work environment. Here, we follow a five-day work schedule, allowing us to efficiently carry out our tasks and collaborate effectively within our team.Innovaccer is an equal opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered. Disclaimer: Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our HR department at px@innovaccer.com. Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details. About Innovaccer Innovaccer activates the flow of healthcare data, empowering providers, payers, and government organizations to deliver intelligent and connected experiences that advance health outcomes. The Healthcare Intelligence Cloud equips every stakeholder in the patient journey to turn fragmented data into proactive, coordinated actions that elevate the quality of care and drive operational performance. Leading healthcare organizations like CommonSpirit Health, Atlantic Health, and Banner Health trust Innovaccer to integrate a system of intelligence into their existing infrastructure— extending the human touch in healthcare. For more information, visit www.innovaccer.com. Check us out on YouTube , Glassdoor , LinkedIn , Instagram , and the Web .
Posted 1 week ago
0 years
5 - 8 Lacs
Noida
On-site
Senior Gen AI Engineer Job Description Brightly Software is seeking an experienced candidate to join our Product team in the role of Gen AI engineer to drive best-in-class client-facing AI features by creating and delivering insights that advise client decisions tomorrow. As a Gen AI Engineer, you will play a critical role in building AI offerings for Brightly. You will partner with our various software Product teams to drive client-facing insights to inform smarter decisions faster. This will include the following: Lead the evaluation and selection of foundation models and vector databases based on performance and business needs Design and implement applications powered by generative AI (e.g., LLMs, diffusion models), delivering contextual and actionable insights for clients. Establish best practices and documentation for prompt engineering, model fine-tuning, and evaluation to support cross-domain generative AI use cases. Build, test, and deploy generative AI applications using standard tools and frameworks for model inference, embeddings, vector stores, and orchestration pipelines. Key Responsibilities: Guide the design of multi-step RAG, agentic, or tool-augmented workflows Implement governance, safety layers, and responsible AI practices (e.g., guardrails, moderation, auditability) Mentor junior engineers and review GenAI design and implementation plans Drive experimentation, benchmarking, and continuous improvement of GenAI capabilities Collaborate with leadership to align GenAI initiatives with product and business strategy Build and optimize Retrieval-Augmented Generation (RAG) pipelines using vector stores like Pinecone, FAISS, or AWS OpenSearch Perform exploratory data analysis (EDA), data cleaning, and feature engineering to prepare data for model building. Design, develop, train, and evaluate machine learning models (e.g., classification, regression, clustering, natural language processing) with strong experience in predictive and statistical modelling. Implement and deploy machine learning models into production using AWS services, with a strong focus on Amazon SageMaker (e.g., SageMaker Studio, training jobs, inference endpoints, SageMaker Pipelines). Understanding and development of state management workflows using LangGraph. Develop GenAI applications using Hugging Face Transformers, LangChain, and Llama related frameworks Engineer and evaluate prompts, including prompt chaining and output quality assessment Apply NLP and transformer model expertise to solve language tasks Deploy GenAI models to cloud platforms (preferably AWS) using Docker and Kubernetes Monitor and optimize model and pipeline performance for scalability and efficiency Communicate technical concepts clearly to cross-functional and non-technical stakeholders
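To make the RAG responsibilities above concrete, here is a minimal sketch of the retrieval step using FAISS; it assumes the faiss-cpu and sentence-transformers packages, the embedding model name and documents are only examples, and the LLM generation step is omitted.

```python
# Minimal sketch of the retrieval half of a RAG pipeline using FAISS.
# Assumes the `faiss-cpu` and `sentence-transformers` packages; the embedding
# model and documents are placeholders, and the generation step is omitted.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Work orders older than 30 days are escalated to facilities managers.",
    "Preventive maintenance is scheduled quarterly for HVAC assets.",
    "Energy usage reports are generated monthly per building.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # example model
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vectors.shape[1])  # inner product on unit vectors ~ cosine
index.add(np.asarray(doc_vectors, dtype="float32"))

query = "How often is HVAC maintenance performed?"
query_vec = np.asarray(embedder.encode([query], normalize_embeddings=True), dtype="float32")
scores, ids = index.search(query_vec, 2)

# The retrieved chunks would then be stuffed into the prompt of an LLM call.
for rank, (score, doc_id) in enumerate(zip(scores[0], ids[0]), start=1):
    print(f"{rank}. score={score:.3f} -> {documents[doc_id]}")
```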
Posted 1 week ago
2.0 years
5 - 8 Lacs
Noida
On-site
Gen AI Engineer Job Description Brightly Software is seeking a high performer to join our Product team in the role of Gen AI engineer to drive best-in-class client-facing AI features by creating and delivering insights that advise client decisions tomorrow. As a Gen AI Engineer, you will play a critical role in building AI offerings for Brightly. You will partner with our various software Product teams to drive client-facing insights to inform smarter decisions faster. This will include the following: Design and implement applications powered by generative AI (e.g., LLMs, diffusion models), delivering contextual and actionable insights for clients. Establish best practices and documentation for prompt engineering, model fine-tuning, and evaluation to support cross-domain generative AI use cases. Build, test, and deploy generative AI applications using standard tools and frameworks for model inference, embeddings, vector stores, and orchestration pipelines. Key Responsibilities: Build and optimize Retrieval-Augmented Generation (RAG) pipelines using vector stores like Pinecone, FAISS, or AWS OpenSearch Develop GenAI applications using Hugging Face Transformers, LangChain, and Llama related frameworks Perform exploratory data analysis (EDA), data cleaning, and feature engineering to prepare data for model building. Design, develop, train, and evaluate machine learning models (e.g., classification, regression, clustering, natural language processing) with strong experience in predictive and statistical modelling. Implement and deploy machine learning models into production using AWS services, with a strong focus on Amazon SageMaker (e.g., SageMaker Studio, training jobs, inference endpoints, SageMaker Pipelines). Understanding and development of state management workflows using LangGraph. Engineer and evaluate prompts, including prompt chaining and output quality assessment Apply NLP and transformer model expertise to solve language tasks Deploy GenAI models to cloud platforms (preferably AWS) using Docker and Kubernetes Monitor and optimize model and pipeline performance for scalability and efficiency Communicate technical concepts clearly to cross-functional and non-technical stakeholders Thrive in a fast-paced, lean environment and contribute to scalable GenAI system design Qualifications Bachelor’s degree is required 2-4 years of total experience with a strong focus on AI and ML, and 1+ years in core GenAI engineering Demonstrated expertise in working with large language models (LLMs) and generative AI systems, including both text-based and multimodal models. Strong programming skills in Python, including proficiency with data science libraries such as NumPy, Pandas, Scikit-learn, TensorFlow, and/or PyTorch. Familiarity with MLOps principles and tools for automating and streamlining the ML lifecycle. Experience working with agentic AI. Capable of building Retrieval-Augmented Generation (RAG) pipelines leveraging vector stores like Pinecone, Chroma, or FAISS. Strong programming skills in Python, with experience using leading AI/ML libraries such as Hugging Face Transformers and LangChain. Practical experience in working with vector databases and embedding methodologies for efficient information retrieval. Possess experience in developing and exposing API endpoints for accessing AI model capabilities using frameworks like FastAPI. Knowledgeable in prompt engineering techniques, including prompt chaining and performance evaluation strategies. Solid grasp of natural language processing (NLP) fundamentals and transformer-based model architectures. Experience in deploying machine learning models to cloud platforms (preferably AWS) and containerized environments using Docker or Kubernetes. Skilled in fine-tuning and assessing open-source models using methods such as LoRA, PEFT, and supervised training. Strong communication skills with the ability to convey complex technical concepts to non-technical stakeholders. Able to operate successfully in a lean, fast-paced organization, and to create a vision and organization that can scale quickly.
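As an illustration of "developing and exposing API endpoints for accessing AI model capabilities using frameworks like FastAPI" mentioned above, here is a minimal sketch; the summarise() function is a placeholder for a real LLM or SageMaker inference call, and the route name is an example.

```python
# Minimal sketch of exposing an AI capability behind a FastAPI endpoint.
# summarise() is a stand-in for a real LLM or SageMaker inference call.
# Run with: uvicorn app:app --reload
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="GenAI demo service")


class SummariseRequest(BaseModel):
    text: str
    max_words: int = 50


class SummariseResponse(BaseModel):
    summary: str


def summarise(text: str, max_words: int) -> str:
    # Placeholder: a production service would call an LLM endpoint here.
    words = text.split()
    return " ".join(words[:max_words])


@app.post("/v1/summarise", response_model=SummariseResponse)
def summarise_endpoint(req: SummariseRequest) -> SummariseResponse:
    return SummariseResponse(summary=summarise(req.text, req.max_words))
```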
Posted 1 week ago
2.0 years
30 Lacs
India
Remote
Experience : 2.00 + years Salary : INR 3000000.00 / year (based on experience) Expected Notice Period : 30 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Yugen AI) (*Note: This is a requirement for one of Uplers' client - Yugen AI) What do you need for this opportunity? Must have skills required: GRPO, high-availability, Trl, Generative AI, LLM, Kubernetes, Machine Learning, Python Yugen AI is Looking for: We are looking for a talented LLMOps Engineer to design, deploy, and operationalise agentic solutions for fraud investigations. This is critical to reducing fraud investigations TAT (turn-around time) by more than 70%. In this role, you will work directly with our CTO, Soumanta Das, as well as a team of 5 engineers (Backend Engineers, Data Engineers, Platform Engineers). Responsibilities Deploy and scale LLM inference workloads on Kubernetes (K8s) with 99.9% uptime. Build agentic tools and services for fraud investigations with complex reasoning capabilities. Work with Platform Engineers to set up monitoring and observability (e.g., Prometheus, Grafana) to track model performance and system health. Fine-tune open-source LLMs using TRL or similar libraries. Use Terraform for infrastructure-as-code to support scalable ML deployments. Contribute to tech blogs, especially technical deep dives of the latest research in the field of reasoning. Requirements Strong programming skills (Python, etc.) and problem-solving abilities. Hands-on experience with open-source LLM inference and serving frameworks such as vLLM. Deep expertise in Kubernetes (K8s) for orchestrating LLM workloads. Some familiarity with fine-tuning and deploying open-source LLMs using GRPO, TRL, or similar frameworks. Familiarity with high-availability systems. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
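A minimal sketch of serving an open-source LLM with vLLM, as referenced in the requirements above; it assumes the vllm package and a GPU, the model checkpoint is only an example, and a production setup would typically run vLLM's OpenAI-compatible server behind a Kubernetes Service instead.

```python
# Minimal sketch of running an open-source LLM with vLLM's offline API.
# Assumes the `vllm` package and a GPU; the checkpoint name is an example.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")  # example checkpoint
params = SamplingParams(temperature=0.2, max_tokens=256)

prompts = [
    "Summarise the red flags in this transaction history: ...",
    "List the entities linked to wallet 0xABC... in two sentences.",
]

for output in llm.generate(prompts, params):
    print(output.prompt)
    print(output.outputs[0].text)
```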
Posted 1 week ago
3.0 years
0 Lacs
India
On-site
What You’ll Do ● Build and own AI-backed features end to end, from ideation to production — including layout logic, smart cropping, visual enhancement, out-painting and GenAI workflows for background fills ● Design scalable APIs that wrap vision models like BiRefNet, YOLOv8, Grounding DINO, SAM, CLIP, ControlNet, etc., into batch and real-time pipelines. ● Write production-grade Python code to manipulate and transform image data using NumPy, OpenCV (cv2), PIL, and PyTorch. ● Handle pixel-level transformations — from custom masks and color space conversions to geometric warps and contour ops — with speed and precision. ● Integrate your models into our production web app (AWS-based Python/Java backend) and optimize them for latency, memory, and throughput ● Frame problems when specs are vague — you’ll help define what “good” looks like, and then build it ● Collaborate with product, UX, and other engineers without relying on formal handoffs — you own your domain What You’ll Need ● 2–3 years of hands-on experience with vision and image generation models such as YOLO, Grounding DINO, SAM, CLIP, Stable Diffusion, VITON, or TryOnGAN — including experience with inpainting and outpainting workflows using Stable Diffusion pipelines (e.g., Diffusers, InvokeAI, or custom-built solutions) ● Strong hands-on knowledge of NumPy, OpenCV, PIL, PyTorch, and image visualization/debugging techniques. ● 1–2 years of experience working with popular LLM APIs such as OpenAI, Anthropic, Gemini and how to compose multi-modal pipelines ● Solid grasp of production model integration — model loading, GPU/CPU optimization, async inference, caching, and batch processing. ● Experience solving real-world visual problems like object detection, segmentation, composition, or enhancement. ● Ability to debug and diagnose visual output errors — e.g., weird segmentation artifacts, off-center crops, broken masks. ● Deep understanding of image processing in Python: array slicing, color formats, augmentation, geometric transforms, contour detection, etc. ● Experience building and deploying FastAPI services and containerizing them with Docker for AWS-based infra (ECS, EC2/GPU, Lambda). ● A customer-centric approach — you think about how your work affects end users and product experience, not just model performance ● A quest for high-quality deliverables — you write clean, tested code and debug edge cases until they’re truly fixed ● The ability to frame problems from scratch and work without strict handoffs — you build from a goal, not a ticket
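To illustrate the pixel-level mask and contour work described above, here is a small self-contained sketch (synthetic image, no model weights); in a real pipeline the mask would come from a segmentation model such as SAM or BiRefNet rather than a simple threshold.

```python
# Minimal sketch of a mask/contour operation: find the dominant foreground
# region and produce a padded tight crop. Uses a synthetic image so the
# snippet is self-contained.
import cv2
import numpy as np

# Synthetic 400x400 image with a bright "product" rectangle on a dark background.
image = np.zeros((400, 400, 3), dtype=np.uint8)
cv2.rectangle(image, (120, 90), (300, 260), (200, 220, 240), thickness=-1)

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 10, 255, cv2.THRESH_BINARY)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)

x, y, w, h = cv2.boundingRect(largest)
pad = 16  # padding so the crop is not flush against the subject
x0, y0 = max(x - pad, 0), max(y - pad, 0)
x1, y1 = min(x + w + pad, image.shape[1]), min(y + h + pad, image.shape[0])
crop = image[y0:y1, x0:x1]

print("mask area:", int(cv2.contourArea(largest)), "crop shape:", crop.shape)
```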
Posted 1 week ago
2.0 years
30 Lacs
Pune/Pimpri-Chinchwad Area
Remote
Experience : 2.00 + years Salary : INR 3000000.00 / year (based on experience) Expected Notice Period : 30 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Yugen AI) (*Note: This is a requirement for one of Uplers' client - Yugen AI) What do you need for this opportunity? Must have skills required: GRPO, high-availability, Trl, Generative AI, LLM, Kubernetes, Machine Learning, Python Yugen AI is Looking for: We are looking for a talented LLMOps Engineer to design, deploy, and operationalise agentic solutions for fraud investigations. This is critical to reducing fraud investigations TAT (turn-around time) by more than 70%. In this role, you will work directly with our CTO, Soumanta Das, as well as a team of 5 engineers (Backend Engineers, Data Engineers, Platform Engineers). Responsibilities Deploy and scale LLM inference workloads on Kubernetes (K8s) with 99.9% uptime. Build agentic tools and services for fraud investigations with complex reasoning capabilities. Work with Platform Engineers to set up monitoring and observability (e.g., Prometheus, Grafana) to track model performance and system health. Fine-tune open-source LLMs using TRL or similar libraries. Use Terraform for infrastructure-as-code to support scalable ML deployments. Contribute to tech blogs, especially technical deep dives of the latest research in the field of reasoning. Requirements Strong programming skills (Python, etc.) and problem-solving abilities. Hands-on experience with open-source LLM inference and serving frameworks such as vLLM. Deep expertise in Kubernetes (K8s) for orchestrating LLM workloads. Some familiarity with fine-tuning and deploying open-source LLMs using GRPO, TRL, or similar frameworks. Familiarity with high-availability systems. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 week ago
0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
Role Overview: As a Python Developer Intern at Arcitech AI, you will play a crucial role in our advancements in software development, AI, and integrative solutions. This entry-level position offers the opportunity to work on cutting-edge projects and contribute to the growth of the company. You will be challenged to develop Python applications, collaborate with a dynamic team, and optimize code performance, all while gaining valuable experience in the industry. Responsibilities Assist in designing, developing, and maintaining Python applications focused on backend and AI/ML components under senior engineer guidance. Help build and consume RESTful or GraphQL APIs integrating AI models and backend services, following established best practices. Containerize microservices (including AI workloads) using Docker and support Kubernetes deployment and management tasks. Implement and monitor background jobs with Celery (e.g., data processing, model training/inference), including retries and basic alerting. Integrate third-party services and AI tools via webhooks and APIs (e.g., Stripe, Razorpay, external AI providers) in collaboration with the team. Set up simple WebSocket consumers using Django Channels for real-time AI-driven and backend features. Aid in configuring AWS cloud infrastructure (EC2, S3, RDS) as code, assist with backups, monitoring via CloudWatch, and support AI workload deployments. Write unit and integration tests using pytest or unittest to maintain ≥ 80% coverage across backend and AI codebases. Follow Git branching strategies and contribute to CI/CD pipeline maintenance and automation for backend and AI services. Participate actively in daily tech talks, knowledge-sharing sessions, code reviews, and team collaboration focused on backend and AI development. Assist with implementing AI agent workflows and document retrieval pipelines using LangChain and LlamaIndex (GPT Index) frameworks. Maintain clear and up-to-date documentation of code, experiments, and processes. Participate in Agile practices including sprint planning, stand-ups, and retrospectives. Demonstrate basic debugging and troubleshooting skills using Python tools and log analysis. Handle simple data manipulation tasks involving CSV, JSON, or similar formats. Follow secure coding best practices and be mindful of data privacy and compliance. Exhibit strong communication skills, a proactive learning mindset, and openness to feedback. Required Qualifications Currently pursuing a Bachelor’s degree in Computer Science, Engineering, Data Science, or related scientific fields. Solid foundation in Python programming with familiarity in common libraries (NumPy, pandas, etc.). Basic understanding of RESTful/GraphQL API design and consumption. Exposure to Docker and at least one cloud platform (AWS preferred). Experience or willingness to learn test-driven development using pytest or unittest. Comfortable with Git workflows and CI/CD tools. Strong problem-solving aptitude and effective communication skills. Preferred (But Not Required) Hands-on experience or coursework with AI/ML frameworks such as TensorFlow, PyTorch, or Keras. Prior exposure to Django web framework and real-time WebSocket development (Django Channels). Familiarity with LangChain and LlamaIndex (GPT Index) for building AI agents and retrieval-augmented generation workflows. Understanding of machine learning fundamentals (neural networks, computer vision, NLP). Background in data analysis, statistics, or applied mathematics.
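As a concrete example of the "background jobs with Celery, including retries" responsibility mentioned above, here is a minimal sketch; the Redis broker URL and the run_inference() function are placeholders.

```python
# Minimal sketch of a Celery background job with retries.
# Run a worker with: celery -A tasks worker --loglevel=info
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")  # assumed Redis broker


def run_inference(document_id: str) -> float:
    # Placeholder for a real model call or external AI provider request.
    return 0.87


@app.task(bind=True, max_retries=3, default_retry_delay=30)
def score_document(self, document_id: str) -> float:
    try:
        return run_inference(document_id)
    except Exception as exc:  # broad catch only to keep the retry example short
        # Re-queue the job up to max_retries, then let it fail loudly.
        raise self.retry(exc=exc)


# Enqueue from application code:
# score_document.delay("doc-123")
```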
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Data Scientist Associate Senior within the Asset and Wealth Management team at JPMorgan Chase, you will play a key role as an experienced member of our Data Science Team. You are responsible for addressing business problems through data analysis, developing models, and deploying these models to production environments on AWS or Azure. Job Responsibilities Collaborate with all of JPMorgan’s lines of business and functions to deliver software solutions. Develop and experiment with high-quality machine learning models, services and platforms to make a huge technology and business impact. Design and implement highly scalable and reliable data processing pipelines and perform analysis and insights to drive and optimize business results. Required Qualifications, Capabilities, And Skills Formal training or certification on software engineering concepts and 3+ years applied experience BE/B.Tech, ME/MS or PhD degree in Computer Science, Statistics, Mathematics or Machine learning related field. Solid programming skills with Python Deep knowledge in Data structures, Algorithms, Machine Learning, Data Mining, Information Retrieval, Statistics. Expert in at least one of the following areas: Natural Language Processing, Computer Vision, Speech Recognition, Reinforcement Learning, Ranking and Recommendation, or Time Series Analysis. Experience in using GenAI (OpenAI or other models) to solve business problems. Knowledge of machine learning frameworks: TensorFlow and PyTorch Experience in training/inference/MLOps on public cloud (AWS/GCP/Azure) Strong analytical and critical thinking skills. Preferred Qualifications, Capabilities, And Skills Knowledge of the Asset and Wealth management business is an added advantage. ABOUT US JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation. About The Team J.P. Morgan Asset & Wealth Management delivers industry-leading investment management and private banking solutions. Asset Management provides individuals, advisors and institutions with strategies and expertise that span the full spectrum of asset classes through our global network of investment professionals.
Wealth Management helps individuals, families and foundations take a more intentional approach to their wealth or finances to better define, focus and realize their goals.
Posted 1 week ago
0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Company Description Stockwell Solar Services Pvt Ltd (SSSPL), founded by IITians in 2017, specialises in Solar and BESS OPEX/RESCO and CAPEX models. We currently operate or are constructing over 100 MW of solar assets across 300+ sites, with 600 MW of projects under execution, showcasing our commitment to sustainable energy. Join us to lead the transition to clean energy. Role Description This is an on-site, full-time role for a Strategic Role-CEO's Office at Stockwell Solar Services Pvt Ltd, located in Jaipur. The role involves coordinating with various departments, preparing reports, and supporting strategic initiatives led by the CEO. Role : - Assist & coordinate the strategy planning exercise. - To monitor tasks delegated by the CEO to ensure that the task is achieved by the agreed deadlines. - Coordinating cross-functional teams to ensure project deliverables. - External & Internal interface on behalf of the CEO. - Helping in business presentations & tie up with internal & external stakeholders. - Business data analysis. - Assist the CEO with inputs and data required for making strategic decisions. Qualifications -The ideal candidate should have abilities in business acumen, strategy formulation and P&L understanding; data comprehension & inference development; project management & team working. -Experience in the Solar/Power Industry is a Plus.
Posted 1 week ago
6.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
About The Role Gartner is looking for passionate and motivated Lead Data Engineers who are excited to foray into new technologies and help build/maintain data-driven components for realizing business needs. This role is in Gartner Product Delivery Organization (PDO). PDO Data Engineering teams are high velocity agile teams responsible for developing and maintaining components crucial to customer-facing channels, data science teams, and reporting & analysis. These components include but are not limited to Spark jobs, REST APIs, AI/ML model training & inference, MLOps / devops pipelines, data transformation & quality pipelines, data lake & data catalogs, data streams etc. What You Will Do Ability to lead and execute a mix of small/medium-sized projects simultaneously Owns success, takes responsibility for successful delivery of solutions from development to production. Mentor and guide team members Explore and create POC of new technologies / frameworks Should have significant experience working directly with Business users in problem solving Excellent Communication and Prioritization skills. Should be able to interact and coordinate well with other developers / teams to resolve operational issues. Should be self-motivated and a fast learner to ramp up quickly with a fair amount of help from team members. Must be able to estimate development tasks with high accuracy and deliver on time with high quality while following coding guidelines and best practices. Identify systemic operational issues and resolve them. What you will need 6+ years of post-college experience in data engineering, API development or related fields Must have demonstrated experience in data engineering, data science, or machine learning. Experience working with data platforms – building and maintaining ETL flows and data stores for ML and reporting applications. Skills to transform data, prepare it for analysis, and analyze it – including structured and unstructured data Ability to transform business needs into technical solutions Demonstrated experience of cloud platforms (AWS, Azure, GCP, etc.) Experience with languages such as Python, Java, SQL Experience with tools such as Apache Spark, Databricks, AWS EMR Experience with Kanban or Agile Scrum development Experience with REST API development Experience with collaboration tools such as Git, Jenkins, Jira, Confluence Experience with data modeling and database schema/table design Who are we? At Gartner, Inc. (NYSE:IT), we guide the leaders who shape the world. Our mission relies on expert analysis and bold ideas to deliver actionable, objective insight, helping enterprise leaders and their teams succeed with their mission-critical priorities. Since our founding in 1979, we’ve grown to more than 21,000 associates globally who support ~14,000 client enterprises in ~90 countries and territories. We do important, interesting and substantive work that matters. That’s why we hire associates with the intellectual curiosity, energy and drive to want to make a difference. The bar is unapologetically high. So is the impact you can have here. What makes Gartner a great place to work? Our sustained success creates limitless opportunities for you to grow professionally and flourish personally. We have a vast, virtually untapped market potential ahead of us, providing you with an exciting trajectory long into the future. How far you go is driven by your passion and performance. We hire remarkable people who collaborate and win as a team.
Together, our singular, unifying goal is to deliver results for our clients. Our teams are inclusive and composed of individuals from different geographies, cultures, religions, ethnicities, races, genders, sexual orientations, abilities and generations. We invest in great leaders who bring out the best in you and the company, enabling us to multiply our impact and results. This is why, year after year, we are recognized worldwide as a great place to work . What do we offer? Gartner offers world-class benefits, highly competitive compensation and disproportionate rewards for top performers. In our hybrid work environment, we provide the flexibility and support for you to thrive — working virtually when it's productive to do so and getting together with colleagues in a vibrant community that is purposeful, engaging and inspiring. Ready to grow your career with Gartner? Join us. The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status and to seek to advance the principles of equal employment opportunity. Gartner is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company’s career webpage as a result of your disability. You may request reasonable accommodations by calling Human Resources at +1 (203) 964-0096 or by sending an email to ApplicantAccommodations@gartner.com . Job Requisition ID:99715 By submitting your information and application, you confirm that you have read and agree to the country or regional recruitment notice linked below applicable to your place of residence. Gartner Applicant Privacy Link: https://jobs.gartner.com/applicant-privacy-policy For efficient navigation through the application, please only use the back button within the application, not the back arrow within your browser.
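Purely as an illustration of the data transformation and quality pipelines this team maintains, here is a small PySpark sketch; the in-memory rows, column names, and quality rules are all assumptions.

```python
# Illustrative PySpark sketch of a small data-quality / transformation step.
# The rows, columns, and thresholds are placeholders for real pipeline inputs.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-demo").getOrCreate()

rows = [
    ("c1", "client-a", 120.0), ("c2", None, 75.5), ("c3", "client-b", -1.0),
]
df = spark.createDataFrame(rows, ["id", "client", "spend"])

# Quality rules: client must be present and spend must be non-negative.
clean = (
    df.filter(F.col("client").isNotNull() & (F.col("spend") >= 0))
      .withColumn("spend_band", F.when(F.col("spend") >= 100, "high").otherwise("low"))
)
rejected = df.subtract(clean.select(df.columns))

clean.show()
print("rejected rows:", rejected.count())

spark.stop()
```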
Posted 1 week ago
7.5 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Project Role : AI / ML Engineer Project Role Description : Develops applications and systems that utilize AI tools, Cloud AI services, with proper cloud or on-prem application pipeline with production-ready quality. Be able to apply GenAI models as part of the solution. Could also include but not limited to deep learning, neural networks, chatbots, image processing. Must have skills : Machine Learning Operations Good to have skills : NA Minimum 7.5 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Machine Learning Engineer/MLOps Expert, you will engage in the operationalization of Machine Learning Models that leverage artificial intelligence tools and cloud AI services. Your typical day will involve designing and implementing production-ready ML systems, ensuring high-quality standards are met. Roles & Responsibilities: - Continuously evaluate and improve existing processes to enhance efficiency. - Engage with multiple teams and contribute on key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Facilitate knowledge sharing sessions to enhance team skills and capabilities. - Monitor project progress and ensure alignment with strategic goals. Professional & Technical Skills: - ML Pipeline Development: Design, build, and maintain scalable pipelines for model training to support our AI initiatives. - Model Deployment & Serving: Deploy machine learning models as robust, secure services – containerize models with Docker and serve them via FastAPI on AWS – ensuring low-latency predictions for marketing applications. Manage batch inference and real-time inference. - CI/CD Automation: Implement continuous integration and delivery (CI/CD) pipelines for ML projects. Automate testing, model validation, and deployment workflows using tools like GitHub Actions to accelerate delivery. - Model Lifecycle Management: Orchestrate the end-to-end ML lifecycle, including versioning, packaging, and registering models. Maintain a model repository/registry (MLflow or similar) for reproducibility and governance from experimentation through production. Experience with MLflow and Airflow is mandatory. - Monitoring & Optimization: Monitor model performance, data drift, and system health in production. Set up alerts and dashboards and proactively initiate model retraining or tuning to sustain accuracy and efficiency over time. - Must To Have Skills: Proficiency in Machine Learning Operations. - Strong understanding of cloud-based AI services and deployment strategies. - Should have multi-cloud skills - Experience with machine learning frameworks - Ability to implement and optimize machine learning models for production environments. Additional Information: - The candidate should have minimum 7.5 years of experience in Machine Learning Operations. - This position is based at our Bengaluru office. - A 15 years full time education is required.
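To ground the model-lifecycle responsibilities above, here is a minimal sketch of logging a training run with MLflow; the experiment name and toy dataset are placeholders, and registering the model additionally requires a database- or server-backed tracking store.

```python
# Minimal sketch of tracking a model run with MLflow.
# The experiment name and toy dataset are placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=8, random_state=42)

mlflow.set_experiment("demo-churn-model")  # assumed experiment name

with mlflow.start_run():
    model = LogisticRegression(max_iter=500).fit(X, y)
    mlflow.log_param("max_iter", 500)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Pass registered_model_name=... here to register the model, which needs a
    # database- or server-backed MLflow tracking store (not the local file store).
    mlflow.sklearn.log_model(model, "model")
```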
Posted 1 week ago
5.0 years
0 Lacs
India
Remote
Experience: 5+ Years (Experience in Data Science) Title: Martech ML Engineer (THIS IS AN INDIVIDUAL CONTRIBUTOR ROLE) Working hours: Flexible Hours (Presence from 9 PM IST to 12 AM IST is a must) Masscomcorp is seeking an experienced ML Engineer with a strong background in User Acquisition (UA) for Mobile Gaming and AdTech. The ideal candidate will leverage data-driven insights to optimize user acquisition strategies, enhance campaign performance, and maximize return on investment (ROI). You should have hands-on experience in Python, SQL, and statistical modelling, along with a deep understanding of marketing analytics, attribution modelling, and programmatic advertising. Here’s a quick breakdown of the key responsibilities and skills for this role: Key Responsibilities: Analyze large datasets from multiple sources (e.g., mobile games, ad networks, MMPs) to drive UA strategy and improve campaign effectiveness. Develop predictive models and A/B testing frameworks to optimize ad spend, bidding strategies, and targeting. Implement LTV (Lifetime Value) models and cohort analysis to inform marketing decisions. Work closely with marketing, UA, product, and engineering teams to translate business questions into analytical solutions. Use SQL to extract and manipulate data from databases and create dashboards to track key UA performance metrics. Utilize Python for automation, machine learning, and data processing to improve efficiency in UA campaigns. Collaborate with AdTech partners and MMPs (e.g., AppsFlyer, Adjust) to ensure accurate attribution tracking and measurement. Stay updated on industry trends in AdTech, gaming UA, and programmatic advertising to recommend new strategies. Interact and collaborate with data engineers and business stakeholders as and when required. What to Expect: This is an individual contributor role, focused on hands-on work. The role involves close collaboration with data science and ML teams, as well as development of in-house systems. Required Technical Skills: At least 3 years of experience in User Acquisition, AdTech, or Gaming analytics. Hands-on experience with AdTech platforms, MMPs (Mobile Measurement Partners), and UA tools. Demonstrate a relevant project implementation in User Acquisition in the Mobile Marketing/AdTech space. Experience with marketing analytics, campaign performance optimization, and attribution modeling. Familiarity with A/B testing methodologies, statistical significance, and causal inference techniques. Ability to use statistics to understand the behavior of systems and/or players. Ability to communicate complex data insights to non-technical stakeholders. Must be thorough and experienced with programming in Python and SQL. Prior experience in Apache Spark with ML is a plus. Team player with excellent organizational, communication and interpersonal skills. Why Masscom: Generous paid time off (PTO), vacation, and holidays Permanent Work from Home Flexible working hours Group Health Insurance (Family Floater) 5 days a week
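As a small illustration of the A/B testing and statistical-significance work described above, here is a two-proportion z-test sketch using statsmodels; the install and conversion counts are invented and would come from the MMP or ad network in practice.

```python
# Minimal A/B significance check for a UA campaign: did creative B convert
# installs to payers better than creative A? Counts are made-up placeholders.
from statsmodels.stats.proportion import proportions_ztest

conversions = [310, 365]      # installs that reached a payer event, A vs B
installs = [10_000, 10_200]   # installs per variant

stat, p_value = proportions_ztest(count=conversions, nobs=installs)
rate_a, rate_b = (c / n for c, n in zip(conversions, installs))

print(f"payer rate A={rate_a:.3%}, B={rate_b:.3%}, z={stat:.2f}, p={p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No significant difference; keep collecting data or revisit the test design.")
```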
Posted 1 week ago
0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
About The Role As a Machine Learning Operations Engineer, you will work on deploying, scaling, and optimizing backend algorithms, robust and scalable data ingestion pipelines, machine learning services, and data platforms to support analysis on vast amounts of text and analytics data. You will apply your technical knowledge and Big Data analytics on Onclusive's billions of online content data points to solve challenging marketing problems. ML Ops Engineers are integral to the success of Onclusive. Your Responsibilities: Design and build scalable machine learning services and data platforms. Utilize benchmarks, metrics, and monitoring to measure and improve services. Manage systems currently processing data on the order of tens of millions of jobs per day. Research, design, implement and validate cutting-edge algorithms to analyze diverse sources of data to achieve targeted outcomes. Work with data scientists and machine learning engineers to implement ML, AI, and NLP techniques for article analysis and attribution. Deploy, manage, and optimize inference services on autoscaling fleets with GPUs and specialized inference hardware. Who you are: A degree (BS, MS, or Ph.D.) in Computer Science or a related field, accompanied by hands-on experience. Proficiency in Python, showcasing your understanding of Object-Oriented Programming (OOP) principles. Solid knowledge of containerisation (Docker preferable). Experience working with Kubernetes. Experience in Infrastructure as Code (IAC) for AWS, with a preference for Terraform. Knowledge of Version Control Systems (VCS), particularly Git and GitHub, alongside familiarity with CI/CD, preferably GitHub Actions. Understanding of release management, embracing rigorous testing, validation, and quality assurance protocols. Good understanding of ML principles. Data Engineering experience (Airflow, dbt, Meltano) is highly desired. Exposure to deep learning tech stacks like Torch/TensorFlow. What we can offer: We are a global, fast-growing company which offers a variety of opportunities for you to develop your skill set and career. In exchange for your contribution, we can offer you: Competitive salary and benefits. Hybrid working in a team that is passionate about the work we deliver and supporting the development of those that we work with. A company focus on wellbeing and work-life balance including initiatives such as flexible working and mental health support. We want the best talent available, regardless of race, religion, gender, gender reassignment, sexual orientation, marital status, pregnancy, disability or age. (ref:hirist.tech)
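As a rough sketch of one optimization pattern relevant to the inference services described above (micro-batching incoming requests before they hit the model), the snippet below uses plain asyncio; predict_batch() is a stand-in for a real GPU-backed model call, and the batch size and wait time are arbitrary examples.

```python
# Illustrative micro-batching sketch for an inference service.
# predict_batch() stands in for a real model call; limits are examples.
import asyncio
from typing import List


async def predict_batch(texts: List[str]) -> List[int]:
    await asyncio.sleep(0.05)            # stand-in for model latency
    return [len(t) % 3 for t in texts]   # dummy "class" per text


async def batcher(queue: asyncio.Queue, max_batch: int = 8, max_wait: float = 0.02):
    while True:
        item = await queue.get()
        batch = [item]
        deadline = asyncio.get_running_loop().time() + max_wait
        while len(batch) < max_batch:
            timeout = deadline - asyncio.get_running_loop().time()
            if timeout <= 0:
                break
            try:
                batch.append(await asyncio.wait_for(queue.get(), timeout))
            except asyncio.TimeoutError:
                break
        texts, futures = zip(*batch)
        for fut, pred in zip(futures, await predict_batch(list(texts))):
            fut.set_result(pred)


async def main():
    queue: asyncio.Queue = asyncio.Queue()
    worker = asyncio.create_task(batcher(queue))
    loop = asyncio.get_running_loop()
    futures = []
    for text in ["article one", "another article", "breaking news story"]:
        fut = loop.create_future()
        await queue.put((text, fut))
        futures.append(fut)
    print(await asyncio.gather(*futures))
    worker.cancel()


if __name__ == "__main__":
    asyncio.run(main())
```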
Posted 1 week ago
4.0 years
0 Lacs
India
Remote
About Us We’re an early-stage startup building LLM-native products that turn unstructured documents into intelligent, usable insights. We work with RAG pipelines, multi-cloud LLMs, and fast data processing — and we’re looking for someone who can build, deploy, and own these systems end-to-end. Key Responsibilities: RAG Application Development: Design and build end-to-end Retrieval-Augmented Generation (RAG) pipelines using LLMs deployed on Vertex AI and AWS Bedrock, integrated with Qdrant for vector search. OCR & Multimodal Data Extraction: Use OCR tools (e.g., Textract) and vision-language models (VLMs) to extract structured and unstructured data from PDFs, images, and multimodal content. LLM Orchestration & Agent Design: Build and optimize workflows using LangChain, LlamaIndex, and custom agent frameworks. Implement autonomous task execution using agent strategies like ReAct, function calling, and tool-use APIs. API & Streaming Interfaces: Build and expose production-ready APIs (e.g., with FastAPI) for LLM services, and implement streaming outputs for real-time response generation and latency optimization. Data Pipelines & Retrieval: Develop pipelines for ingestion, chunking, embedding, and storage using Qdrant and PostgreSQL, applying hybrid retrieval techniques (dense + keyword search), rerankers, and GraphRAG. Serverless AI Workflows: Deploy serverless ML components (e.g., AWS Lambda, GCP Cloud Functions) for scalable inference and data processing. MLOps & Model Evaluation: Deploy, monitor, and iterate on AI systems with lightweight MLOps workflows (Docker, MLflow, CI/CD). Benchmark and evaluate embeddings, retrieval strategies, and model performance. Qualifications: Strong Python development skills (must-have). LLMs: Claude and Gemini models Experience building AI agents and LLM-powered reasoning pipelines. Deep understanding of embeddings, vector search, and hybrid retrieval techniques. Experience with Qdrant Experience designing multi-step task automation and execution chains. Streaming: Ability to implement and debug LLM streaming and async flows Knowledge of memory and context management strategies for LLM agents (e.g., vector memory, scratchpad memory, episodic memory). Experience with AWS Lambda for serverless AI workflows and API integrations. Bonus: LLM fine-tuning, multimodal data processing, knowledge graph integration, or advanced AI planning techniques. Prior experience at startups only (not IT services or enterprises) and a short notice period Who You Are 2–4 years of real-world AI/ML experience, ideally with production LLM apps Startup-ready: fast, hands-on, comfortable with ambiguity Clear communicator who can take ownership and push features end-to-end Available to join immediately Why Join Us? Founding-level role with high ownership Build systems from scratch using the latest AI stack Fully remote, async-friendly, fast-paced team
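To illustrate the ingestion side of the RAG pipelines described above, here is a minimal chunking-with-overlap sketch; the chunk sizes are arbitrary and embed() is a stand-in for a real embedding model, with the resulting vectors then upserted into the vector store alongside metadata.

```python
# Minimal sketch of the chunking step in a document ingestion pipeline:
# split extracted text into overlapping chunks before embedding and upserting.
from typing import List


def chunk_text(text: str, chunk_size: int = 400, overlap: int = 80) -> List[str]:
    """Greedy word-based chunking with overlap so context is not cut mid-thought."""
    words = text.split()
    step = chunk_size - overlap
    return [" ".join(words[i:i + chunk_size]) for i in range(0, max(len(words), 1), step)]


def embed(chunks: List[str]) -> List[List[float]]:
    # Placeholder: swap in a real embedding model call (API or local model).
    return [[float(len(c))] for c in chunks]


if __name__ == "__main__":
    document = "Tenant shall pay rent on the first of each month. " * 200
    chunks = chunk_text(document)
    vectors = embed(chunks)
    print(f"{len(chunks)} chunks, first vector: {vectors[0]}")
    # Each (chunk, vector) pair would then be upserted into the vector DB with metadata.
```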
Posted 1 week ago
6.0 years
0 Lacs
India
Remote
Join Tether and Shape the Future of Digital Finance
At Tether, we’re not just building products, we’re pioneering a global financial revolution. Our cutting-edge solutions empower businesses—from exchanges and wallets to payment processors and ATMs—to seamlessly integrate reserve-backed tokens across blockchains. By harnessing the power of blockchain technology, Tether enables you to store, send, and receive digital tokens instantly, securely, and globally, all at a fraction of the cost. Transparency is the bedrock of everything we do, ensuring trust in every transaction.
Innovate with Tether
Tether Finance: Our innovative product suite features the world’s most trusted stablecoin, USDT, relied upon by hundreds of millions worldwide, alongside pioneering digital asset tokenization services. But that’s just the beginning:
Tether Power: Driving sustainable growth, our energy solutions optimize excess power for Bitcoin mining using eco-friendly practices in state-of-the-art, geo-diverse facilities.
Tether Data: Fueling breakthroughs in AI and peer-to-peer technology, we reduce infrastructure costs and enhance global communications with cutting-edge solutions like KEET, our flagship app that redefines secure and private data sharing.
Tether Education: Democratizing access to top-tier digital learning, we empower individuals to thrive in the digital and gig economies, driving global growth and opportunity.
Tether Evolution: At the intersection of technology and human potential, we are pushing the boundaries of what is possible, crafting a future where innovation and human capabilities merge in powerful, unprecedented ways.
Why Join Us?
Our team is a global talent powerhouse, working remotely from every corner of the world. If you’re passionate about making a mark in the fintech space, this is your opportunity to collaborate with some of the brightest minds, pushing boundaries and setting new standards. We’ve grown fast, stayed lean, and secured our place as a leader in the industry. If you have excellent English communication skills and are ready to contribute to the most innovative platform on the planet, Tether is the place for you. Are you ready to be part of the future?
About the job:
As a Senior SDK Developer, you will be part of the team that works on the development of the new and cutting-edge Tether AI SDK.
Developer-Facing SDKs & APIs: Tether is committed to delivering world-class developer experiences through robust and intuitive SDKs. You will design, build, and maintain modular, versioned SDKs that abstract complex backend logic into clean, usable interfaces — enabling seamless integration with Tether’s platform across various client environments.
Performance & Reliability at Scale: SDKs must be fast, lightweight, and reliable — even when performing heavy and demanding operations. You’ll design resilient logic (retry policies, offline handling, batching) and contribute to the scalability of platform-facing interfaces and services powering the SDKs (a brief illustration of a retry policy follows the requirements below).
Security-First Engineering: You’ll embed best-in-class security practices directly into the SDK architecture, including secure communication, encrypted storage, and rigorous input validation. Your work will help ensure safe integration pathways for all developers working with the Tether ecosystem.
6+ years of experience working with Node.js/JavaScript in production environments.
Proven track record in designing and maintaining developer-facing SDKs (npm packages, API clients, or instrumentation libraries).
Strong understanding of modular architecture, versioning strategies, and semantic API design.
Have actively participated in the development of a complex platform.
Ability to quickly learn new technologies.
Good understanding of security practices.
Nice to have:
Familiarity with peer-to-peer technologies (Kademlia, BitTorrent, libp2p).
Comfortable with high-availability concepts.
Rust or C++ skills are a plus.
Familiarity with AI domain applications (RAG, agents, inference, AI SDKs).
Familiarity with real-time data delivery (Node.js/other streaming).
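The "resilient logic (retry policies, offline handling, batching)" point in this posting describes a pattern that is easy to show in miniature. The sketch below illustrates a retry policy with exponential backoff and jitter; it is written in Python for consistency with the other examples in this document even though the role itself targets Node.js, and `send_request` is a hypothetical placeholder for a real transport call.

```python
# Minimal sketch of a retry policy with exponential backoff and jitter.
# `send_request` is a hypothetical transport call; a real SDK would also
# distinguish retryable errors (timeouts, 5xx) from permanent ones.
import random
import time


class TransientError(Exception):
    pass


def send_request(payload: dict) -> dict:
    # Placeholder: fails randomly to exercise the retry path.
    if random.random() < 0.5:
        raise TransientError("temporary network failure")
    return {"status": "ok", "echo": payload}


def call_with_retry(payload: dict, max_attempts: int = 5, base_delay: float = 0.2) -> dict:
    for attempt in range(1, max_attempts + 1):
        try:
            return send_request(payload)
        except TransientError:
            if attempt == max_attempts:
                raise
            # Exponential backoff with jitter to avoid synchronized retries.
            delay = base_delay * (2 ** (attempt - 1)) * random.uniform(0.5, 1.5)
            time.sleep(delay)
    raise RuntimeError("unreachable")


if __name__ == "__main__":
    print(call_with_retry({"event": "ping"}))
```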
Posted 1 week ago
3.0 - 4.0 years
10 - 15 Lacs
Hyderabad, Telangana, India
On-site
At Livello we are building machine-learning-based demand forecasting tools as well as computer-vision-based multi-camera product recognition solutions that detect people and products, tracking inserted/removed items on shelves based on the hand movements of users. We are building models to determine real-time inventory levels and user behaviour, as well as to predict how much of each product needs to be reordered, so that the right products are delivered to the right locations at the right time to fulfil customer demand.
Responsibilities
Lead the CV and DS team.
Work in the area of Computer Vision and Machine Learning, with a focus on product (primarily food) and people recognition (position, movement, age, gender, DSGVO compliant). Your work will include the formulation and development of machine learning models to solve the underlying problems.
Help build our smart supply chain system, keep up to date with the latest algorithmic improvements in forecasting and predictive areas, and challenge the status quo.
Statistical data modelling and machine learning research.
Conceptualize, implement and evaluate algorithmic solutions for supply forecasting, inventory optimization, predicting sales, and automating business processes.
Conduct applied research to model complex dependencies, statistical inference and predictive modelling.
Technological conception, design and implementation of new features.
Quality assurance of the software through planning, creation and execution of tests.
Work with a cross-functional team to define, build, test, and deploy applications.
Requirements
Master's/PhD in Mathematics, Statistics, Engineering, Econometrics, Computer Science or any related field.
3-4 years of experience with computer vision and data science.
Relevant data science experience and a deep technical background in applied data science (machine learning algorithms, statistical analysis, predictive modelling, forecasting, Bayesian methods, optimization techniques).
Experience building production-quality and well-engineered Computer Vision and Data Science products.
Experience in image processing, algorithms and neural networks.
Knowledge of the tools, libraries and cloud services for Data Science, ideally Google Cloud Platform.
Solid Python engineering skills and experience with Python, TensorFlow, Docker.
Cooperative and independent work, analytical mindset, and willingness to take responsibility.
Fluency in English, both written and spoken.
Skills: Natural Language Processing (NLP), Computer Vision, TensorFlow, Docker, Forecasting, Predictive Modelling, Image Processing, Algorithms and Machine Learning (ML)
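The forecasting responsibility above, predicting how much of each product to reorder, ultimately feeds into standard inventory arithmetic. Below is a minimal sketch of the classical reorder-point calculation with safety stock; the service-level factor and the demand figures are illustrative assumptions, not values from the posting, and a production system would feed it with model-forecast demand rather than fixed numbers.

```python
# Minimal sketch: reorder point = expected demand over lead time + safety stock.
# Uses the classical formula with a z-score for the desired service level.
import math


def reorder_point(avg_daily_demand: float,
                  demand_std_dev: float,
                  lead_time_days: float,
                  z_service_level: float = 1.65) -> float:
    """z_service_level ~= 1.65 corresponds to roughly a 95% service level."""
    expected_demand = avg_daily_demand * lead_time_days
    safety_stock = z_service_level * demand_std_dev * math.sqrt(lead_time_days)
    return expected_demand + safety_stock


if __name__ == "__main__":
    # Illustrative numbers: 12 units/day on average, std dev 4, 3-day lead time.
    print(round(reorder_point(12, 4, 3), 1))  # expected demand 36 + safety stock ~11.4
```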
Posted 1 week ago
0 years
15 - 25 Lacs
Bengaluru, Karnataka, India
On-site
Manage the total pricing procedure and ensure timely response to market conditions.
Support pricing strategy formulation to remain competitive and enhance profitability.
Analyze competition and industry trends.
Develop pricing strategy across various product lines to position the products based on value and the competitive situation.
Develop a methodology for calculating list price, price floor and price ceiling for various product lines within various market segments in relation to value.
Maintain the corporate price list and update it periodically as appropriate.
Develop tools for estimating cost for quotes for new products.
Transition the organization from a cost-plus pricing model to a value pricing model.
Develop a value pricing model and implement it for all new products. Define approval standards and processes.
Perform financial evaluation to assess pricing action effectiveness.
Lead the price increase and change management process for the organization.
Work with sales, management, and product managers to implement price changes in the market and in products.
Build business cases for new pricing proposals.
Prepare bespoke pricing proposals with an authority matrix and compliance.
Conduct training on pricing for sales teams.
Propose new models and product features to improve gross margin and increase revenue.
Conduct field research including competition analysis, industry analysis and trend tracking, and develop insights based on inference.
Develop a methodology to identify margin leakages and recommend approaches for improvement.
Partner with buyers, product managers and the sales department to ensure an integrated, profit-maximizing approach to market.
Analyse the financial impact of the pricing approach in view of overall history as well as customer profitability.
Performance Indicators
Top-line revenue growth
Improved margins
Average revenue per contract
Customer acquisition cost
Lifetime value
Requirements
8 years overall experience with at least 3 years in a similar role.
Graduation in a relevant stream.
In-depth knowledge of pricing strategies, processes, initiatives and creating pricing process documentation.
Experience in SaaS pricing models and value-based pricing.
Proficiency in data mining.
Good understanding of the business model.
Comfort with numerical data.
Analytical mind with strategic ability.
Strong attention to detail.
Understanding of financial statements.
Excellent communication, negotiation and stakeholder management skills.
Skills: Pricing Strategy, Pricing Management and Revenue Growth
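The posting asks for a methodology for calculating list price, price floor and price ceiling while moving from cost-plus to value-based pricing. As a minimal, hedged sketch of how such a calculation might be wired up, the snippet below derives a cost-based floor and a value-based ceiling and checks a proposed list price against both; the margin and customer-value figures are illustrative assumptions only, not anything stated in the posting.

```python
# Minimal sketch: cost-based price floor, value-based ceiling, and a list-price check.
# The minimum margin and perceived-value inputs are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class PriceGuardrails:
    unit_cost: float
    min_margin: float           # e.g. 0.30 means at least 30% gross margin
    customer_value: float       # estimated economic value delivered to the customer
    value_capture: float = 0.5  # share of that value we aim to price at

    @property
    def price_floor(self) -> float:
        # Lowest price that still preserves the minimum gross margin.
        return self.unit_cost / (1 - self.min_margin)

    @property
    def price_ceiling(self) -> float:
        # Value-based ceiling: the share of customer value we target.
        return self.customer_value * self.value_capture

    def check(self, list_price: float) -> str:
        if list_price < self.price_floor:
            return "below floor: margin leakage"
        if list_price > self.price_ceiling:
            return "above ceiling: likely uncompetitive"
        return "within guardrails"


if __name__ == "__main__":
    g = PriceGuardrails(unit_cost=70.0, min_margin=0.30, customer_value=400.0)
    print(round(g.price_floor, 2), round(g.price_ceiling, 2), g.check(150.0))
```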
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Location: Remote
Job Summary: We are looking for a highly motivated AI/ML Engineer with hands-on experience in building applications using LangChain and large language models (LLMs). The ideal candidate will have a strong foundation in machine learning, natural language processing (NLP), and prompt engineering, along with a passion for solving real-world problems using cutting-edge AI technologies.
Key Responsibilities:
- Design, develop, and deploy AI-powered applications using LangChain and LLM frameworks.
- Build and optimize prompt chains, memory modules, and tools for conversational agents.
- Integrate third-party APIs, vector databases (like Pinecone, FAISS, or Weaviate), and knowledge bases into AI workflows.
- Train, fine-tune, or fine-control LLMs for custom use cases.
- Collaborate with product, backend, and data science teams to deliver AI-driven solutions.
- Implement evaluation metrics and testing frameworks for model performance and response quality.
- Stay current with advancements in generative AI, LLMs, and the LangChain ecosystem.
Requirements:
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or a related field.
- 3+ years of experience in machine learning, NLP, or related AI/ML fields.
- Proficiency in Python and libraries such as Hugging Face Transformers, LangChain, OpenAI, etc.
- Experience with vector stores and retrieval-augmented generation (RAG).
- Strong understanding of LLM architecture, prompt engineering, and inference pipelines.
- Familiarity with cloud platforms (AWS, GCP, Azure) and MLOps workflows.
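Since this role centres on LangChain prompt chains backed by a vector store, here is a minimal, hedged sketch of a retrieval-augmented chain using LangChain's expression language with a FAISS index. It assumes the `langchain-openai`, `langchain-community`, and `faiss-cpu` packages with an `OPENAI_API_KEY` in the environment; the sample documents, the prompt wording, and the model name are illustrative choices, not requirements from the posting.

```python
# Minimal sketch: RAG chain with LangChain (LCEL), FAISS, and an OpenAI chat model.
# Assumes langchain-openai, langchain-community, faiss-cpu and OPENAI_API_KEY are set.
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Tiny illustrative corpus indexed into FAISS.
docs = [
    "LangChain chains compose prompts, models, and parsers.",
    "FAISS provides fast similarity search over dense vectors.",
]
vectorstore = FAISS.from_texts(docs, OpenAIEmbeddings())
retriever = vectorstore.as_retriever(search_kwargs={"k": 2})

prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)


def format_context(question: str) -> dict:
    hits = retriever.invoke(question)
    return {"context": "\n".join(d.page_content for d in hits), "question": question}


chain = prompt | llm | StrOutputParser()

if __name__ == "__main__":
    print(chain.invoke(format_context("What does FAISS do?")))
```

The same pattern extends naturally to the memory modules and evaluation hooks the posting mentions by adding further runnables to the chain.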
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Who We Are
Ema is building the next generation AI technology to empower every employee in the enterprise to be their most creative and productive. Our proprietary tech allows enterprises to delegate most repetitive tasks to Ema, the AI employee. We are founded by ex-Google, Coinbase, Okta executives and serial entrepreneurs. We’ve raised capital from notable investors such as Accel Partners, Naspers, Section32 and a host of prominent Silicon Valley angels including Sheryl Sandberg (Facebook/Google), Divesh Makan (Iconiq Capital), Jerry Yang (Yahoo), Dustin Moskovitz (Facebook/Asana), David Baszucki (Roblox CEO) and Gokul Rajaram (Doordash, Square, Google).
Our team is a powerhouse of talent, comprising engineers from leading tech companies like Google, Microsoft Research, Facebook, Square/Block, and Coinbase. All our team members hail from top-tier educational institutions such as Stanford, MIT, UC Berkeley, CMU and the Indian Institute of Technology. We’re well funded by the top investors and angels in the world. Ema is based in Silicon Valley and Bangalore, India. This will be a hybrid role where we expect employees to work from the office three days a week.
Who You Are
We're looking for an innovative and passionate Machine Learning Engineer to join our team. You are someone who loves solving complex problems, enjoys the challenges of working with huge data sets, and has a knack for turning theoretical concepts into practical, scalable solutions. You are a strong team player but also thrive in autonomous environments where your ideas can make a significant impact. You love utilizing machine learning techniques to push the boundaries of what is possible within the realm of Natural Language Processing, Information Retrieval and related Machine Learning technologies. Most importantly, you are excited to be part of a mission-oriented high-growth startup that can create a lasting impact.
You Will
Conceptualize, develop, and deploy machine learning models that underpin our NLP, retrieval, ranking, reasoning, dialog and code-generation systems.
Implement advanced machine learning algorithms, such as Transformer-based models, reinforcement learning, ensemble learning, and agent-based systems to continually improve the performance of our AI systems.
Lead the processing and analysis of large, complex datasets (structured, semi-structured, and unstructured), and use your findings to inform the development of our models.
Work across the complete lifecycle of ML model development, including problem definition, data exploration, feature engineering, model training, validation, and deployment.
Implement A/B testing and other statistical methods to validate the effectiveness of models (a brief sketch of such a test follows this posting).
Ensure the integrity and robustness of ML solutions by developing automated testing and validation processes.
Clearly communicate the technical workings and benefits of ML models to both technical and non-technical stakeholders, facilitating understanding and adoption.
Ideally, You'd Have
A Master’s degree or Ph.D. in Computer Science, Machine Learning, or a related quantitative field.
Proven industry experience in building and deploying production-level machine learning models.
Deep understanding and practical experience with NLP techniques and frameworks, including training and inference of large language models.
Deep understanding of any of retrieval, ranking, reinforcement learning, and agent-based systems, and experience in how to build them for large systems.
Proficiency in Python and experience with ML libraries such as TensorFlow or PyTorch.
Excellent skills in data processing (SQL, ETL, data warehousing) and experience working with large-scale data systems.
Experience with machine learning model lifecycle management tools, and an understanding of MLOps principles and best practices.
Familiarity with cloud platforms like GCP or Azure.
Familiarity with the latest industry and academic trends in machine learning and AI, and the ability to apply this knowledge to practical projects.
Good understanding of software development principles, data structures, and algorithms.
Excellent problem-solving skills, attention to detail, and a strong capacity for logical thinking.
The ability to work collaboratively in an extremely fast-paced startup environment.
Ema Unlimited is an equal opportunity employer and is committed to providing equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability, sexual orientation, gender identity, or genetics.
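One of the responsibilities in this posting is validating model changes with A/B testing and other statistical methods. As a minimal sketch of that kind of check, the snippet below runs a two-proportion z-test on success counts using the real `statsmodels` function `proportions_ztest`; the counts and the 0.05 threshold are illustrative assumptions, not figures from the posting.

```python
# Minimal sketch: two-proportion z-test comparing a control model vs. a candidate.
# Success counts and sample sizes are illustrative; 0.05 is an assumed threshold.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

successes = np.array([412, 468])   # e.g., "helpful" ratings for control vs. candidate
samples = np.array([5000, 5000])   # impressions served to each arm

z_stat, p_value = proportions_ztest(count=successes, nobs=samples)

print(f"control rate={successes[0] / samples[0]:.3f}, "
      f"candidate rate={successes[1] / samples[1]:.3f}")
print(f"z={z_stat:.2f}, p={p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected; keep collecting data or ship neither.")
```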
Posted 1 week ago
6.0 years
0 Lacs
India
Remote
Join Tether and Shape the Future of Digital Finance
At Tether, we’re not just building products, we’re pioneering a global financial revolution. Our cutting-edge solutions empower businesses—from exchanges and wallets to payment processors and ATMs—to seamlessly integrate reserve-backed tokens across blockchains. By harnessing the power of blockchain technology, Tether enables you to store, send, and receive digital tokens instantly, securely, and globally, all at a fraction of the cost. Transparency is the bedrock of everything we do, ensuring trust in every transaction.
Innovate with Tether
Tether Finance: Our innovative product suite features the world’s most trusted stablecoin, USDT, relied upon by hundreds of millions worldwide, alongside pioneering digital asset tokenization services. But that’s just the beginning:
Tether Power: Driving sustainable growth, our energy solutions optimize excess power for Bitcoin mining using eco-friendly practices in state-of-the-art, geo-diverse facilities.
Tether Data: Fueling breakthroughs in AI and peer-to-peer technology, we reduce infrastructure costs and enhance global communications with cutting-edge solutions like KEET, our flagship app that redefines secure and private data sharing.
Tether Education: Democratizing access to top-tier digital learning, we empower individuals to thrive in the digital and gig economies, driving global growth and opportunity.
Tether Evolution: At the intersection of technology and human potential, we are pushing the boundaries of what is possible, crafting a future where innovation and human capabilities merge in powerful, unprecedented ways.
Why Join Us?
Our team is a global talent powerhouse, working remotely from every corner of the world. If you’re passionate about making a mark in the fintech space, this is your opportunity to collaborate with some of the brightest minds, pushing boundaries and setting new standards. We’ve grown fast, stayed lean, and secured our place as a leader in the industry. If you have excellent English communication skills and are ready to contribute to the most innovative platform on the planet, Tether is the place for you. Are you ready to be part of the future?
About the job:
As a Senior Software Developer, you will be part of the team that is building desktop and mobile AI apps on top of the new and cutting-edge Tether SDK.
Responsibilities:
AI-Driven Desktop Integration: You will develop and maintain backend services and APIs that power AI-enhanced desktop applications. These services support intelligent features like local inference, contextual awareness, and model interaction, tailored specifically for Electron-based or hybrid clients.
Platform-Aware API Design: Collaborating closely with desktop and React Native teams, you will shape API contracts that reflect platform constraints and performance considerations — ensuring native-like responsiveness and cross-platform consistency.
Scalable Model Invocation & Resource Management: You’ll contribute to backend services that handle concurrent model invocations, manage GPU/CPU workloads, and intelligently queue or throttle requests based on system constraints — ensuring smooth AI on-device performance (a brief illustration of request throttling follows the requirements below).
6+ years of experience working with Node.js/JavaScript.
Experience with desktop app development (Electron, Tauri, or other frameworks).
Experience working with React Native or bridging backend systems into mobile/desktop hybrid stacks.
Experience optimizing performance and resource usage on desktop/mobile clients.
Have actively participated in the development of a complex platform.
Ability to quickly learn new technologies.
Good understanding of security practices.
Nice to have:
Familiarity with secure inter-process communication.
Familiarity with peer-to-peer technologies (Kademlia, BitTorrent, libp2p).
C++/Swift/Kotlin skills are a plus.
Familiarity with AI/agentic domain applications (RAG, AI SDKs).
Familiarity with real-time data delivery (Node.js/other streaming).
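The "Scalable Model Invocation & Resource Management" responsibility in this posting describes queueing and throttling concurrent model calls against a limited GPU/CPU budget. The sketch below shows that pattern in miniature with an asyncio semaphore; it is written in Python for consistency with the other examples here rather than the Node.js stack named in the posting, and `invoke_model` is a hypothetical stand-in for a real inference call.

```python
# Minimal sketch: bound concurrent model invocations with a semaphore so a
# limited GPU/CPU budget is never oversubscribed. `invoke_model` is a stub.
import asyncio
import random

MAX_CONCURRENT_INVOCATIONS = 2


async def invoke_model(prompt: str) -> str:
    # Placeholder for an on-device or remote inference call.
    await asyncio.sleep(random.uniform(0.1, 0.3))
    return f"response to: {prompt}"


async def throttled_invoke(gate: asyncio.Semaphore, prompt: str) -> str:
    async with gate:  # requests queue here when the concurrency budget is exhausted
        return await invoke_model(prompt)


async def main() -> None:
    gate = asyncio.Semaphore(MAX_CONCURRENT_INVOCATIONS)
    prompts = [f"request {i}" for i in range(6)]
    results = await asyncio.gather(*(throttled_invoke(gate, p) for p in prompts))
    for r in results:
        print(r)


if __name__ == "__main__":
    asyncio.run(main())
```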
Posted 1 week ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Senior Gen AI Engineer
Job Description
Brightly Software is seeking an experienced candidate to join our Product team in the role of Gen AI Engineer to drive best-in-class client-facing AI features by creating and delivering insights that advise client decisions tomorrow. As a Gen AI Engineer, you will play a critical role in building AI offerings for Brightly. You will partner with our various software Product teams to drive client-facing insights that inform smarter decisions faster. This will include the following:
Lead the evaluation and selection of foundation models and vector databases based on performance and business needs.
Design and implement applications powered by generative AI (e.g., LLMs, diffusion models), delivering contextual and actionable insights for clients.
Establish best practices and documentation for prompt engineering, model fine-tuning, and evaluation to support cross-domain generative AI use cases.
Build, test, and deploy generative AI applications using standard tools and frameworks for model inference, embeddings, vector stores, and orchestration pipelines.
Key Responsibilities:
Guide the design of multi-step RAG, agentic, or tool-augmented workflows.
Implement governance, safety layers, and responsible AI practices (e.g., guardrails, moderation, auditability).
Mentor junior engineers and review GenAI design and implementation plans.
Drive experimentation, benchmarking, and continuous improvement of GenAI capabilities.
Collaborate with leadership to align GenAI initiatives with product and business strategy.
Build and optimize Retrieval-Augmented Generation (RAG) pipelines using vector stores like Pinecone, FAISS, or AWS OpenSearch.
Perform exploratory data analysis (EDA), data cleaning, and feature engineering to prepare data for model building.
Design, develop, train, and evaluate machine learning models (e.g., classification, regression, clustering, natural language processing) with strong experience in predictive and statistical modelling.
Implement and deploy machine learning models into production using AWS services, with a strong focus on Amazon SageMaker (e.g., SageMaker Studio, training jobs, inference endpoints, SageMaker Pipelines).
Understanding and development of state management workflows using LangGraph.
Develop GenAI applications using Hugging Face Transformers, LangChain, and Llama-related frameworks.
Engineer and evaluate prompts, including prompt chaining and output quality assessment.
Apply NLP and transformer model expertise to solve language tasks.
Deploy GenAI models to cloud platforms (preferably AWS) using Docker and Kubernetes.
Monitor and optimize model and pipeline performance for scalability and efficiency.
Communicate technical concepts clearly to cross-functional and non-technical stakeholders.
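The responsibility above around "state management workflows using LangGraph" can be shown with a tiny graph. The sketch below wires two nodes into a LangGraph `StateGraph` over a typed-dict state; it assumes the `langgraph` package, and the node bodies are deliberately trivial stand-ins for real retrieval and generation steps rather than anything from the posting.

```python
# Minimal sketch: a two-node LangGraph workflow over a typed state.
# Node bodies are trivial placeholders for real retrieval/generation logic.
from typing import TypedDict

from langgraph.graph import END, StateGraph


class QAState(TypedDict):
    question: str
    context: str
    answer: str


def retrieve(state: QAState) -> dict:
    # Placeholder retrieval step; a real node would query a vector store.
    return {"context": f"facts related to '{state['question']}'"}


def generate(state: QAState) -> dict:
    # Placeholder generation step; a real node would call an LLM.
    return {"answer": f"Using {state['context']}, the answer is..."}


builder = StateGraph(QAState)
builder.add_node("retrieve", retrieve)
builder.add_node("generate", generate)
builder.set_entry_point("retrieve")
builder.add_edge("retrieve", "generate")
builder.add_edge("generate", END)
graph = builder.compile()

if __name__ == "__main__":
    result = graph.invoke({"question": "What does Brightly build?", "context": "", "answer": ""})
    print(result["answer"])
```

Conditional edges and checkpointing can be layered onto the same structure, which is where LangGraph's state management becomes useful for multi-step RAG or agentic workflows.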
Posted 1 week ago
2.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Gen AI Engineer
Job Description
Brightly Software is seeking a high performer to join our Product team in the role of Gen AI Engineer to drive best-in-class client-facing AI features by creating and delivering insights that advise client decisions tomorrow. As a Gen AI Engineer, you will play a critical role in building AI offerings for Brightly. You will partner with our various software Product teams to drive client-facing insights that inform smarter decisions faster. This will include the following:
Design and implement applications powered by generative AI (e.g., LLMs, diffusion models), delivering contextual and actionable insights for clients.
Establish best practices and documentation for prompt engineering, model fine-tuning, and evaluation to support cross-domain generative AI use cases.
Build, test, and deploy generative AI applications using standard tools and frameworks for model inference, embeddings, vector stores, and orchestration pipelines.
Key Responsibilities:
Build and optimize Retrieval-Augmented Generation (RAG) pipelines using vector stores like Pinecone, FAISS, or AWS OpenSearch.
Develop GenAI applications using Hugging Face Transformers, LangChain, and Llama-related frameworks.
Perform exploratory data analysis (EDA), data cleaning, and feature engineering to prepare data for model building.
Design, develop, train, and evaluate machine learning models (e.g., classification, regression, clustering, natural language processing) with strong experience in predictive and statistical modelling.
Implement and deploy machine learning models into production using AWS services, with a strong focus on Amazon SageMaker (e.g., SageMaker Studio, training jobs, inference endpoints, SageMaker Pipelines).
Understanding and development of state management workflows using LangGraph.
Engineer and evaluate prompts, including prompt chaining and output quality assessment.
Apply NLP and transformer model expertise to solve language tasks.
Deploy GenAI models to cloud platforms (preferably AWS) using Docker and Kubernetes.
Monitor and optimize model and pipeline performance for scalability and efficiency.
Communicate technical concepts clearly to cross-functional and non-technical stakeholders.
Thrive in a fast-paced, lean environment and contribute to scalable GenAI system design.
Qualifications
Bachelor’s degree is required.
2-4 years of total experience with a strong focus on AI and ML, and 1+ years in core GenAI engineering.
Demonstrated expertise in working with large language models (LLMs) and generative AI systems, including both text-based and multimodal models.
Strong programming skills in Python, including proficiency with data science libraries such as NumPy, Pandas, Scikit-learn, TensorFlow, and/or PyTorch.
Familiarity with MLOps principles and tools for automating and streamlining the ML lifecycle.
Experience working with agentic AI.
Capable of building Retrieval-Augmented Generation (RAG) pipelines leveraging vector stores like Pinecone, Chroma, or FAISS.
Strong programming skills in Python, with experience using leading AI/ML libraries such as Hugging Face Transformers and LangChain.
Practical experience in working with vector databases and embedding methodologies for efficient information retrieval.
Experience in developing and exposing API endpoints for accessing AI model capabilities using frameworks like FastAPI.
Knowledgeable in prompt engineering techniques, including prompt chaining and performance evaluation strategies.
Solid grasp of natural language processing (NLP) fundamentals and transformer-based model architectures.
Experience in deploying machine learning models to cloud platforms (preferably AWS) and containerized environments using Docker or Kubernetes.
Skilled in fine-tuning and assessing open-source models using methods such as LoRA, PEFT, and supervised training.
Strong communication skills with the ability to convey complex technical concepts to non-technical stakeholders.
Able to operate successfully in a lean, fast-paced organization, and to create a vision and organization that can scale quickly.
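The qualification above about fine-tuning open-source models with LoRA/PEFT can be illustrated briefly. A minimal sketch, assuming the `peft` and `transformers` packages; the tiny base model and the LoRA hyperparameters are illustrative choices, not values from the posting.

```python
# Minimal sketch: attaching a LoRA adapter to a small causal LM with PEFT.
# The base model name and LoRA hyperparameters are illustrative assumptions.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model_name = "sshleifer/tiny-gpt2"  # tiny model so the sketch runs quickly
model = AutoModelForCausalLM.from_pretrained(base_model_name)

lora_config = LoraConfig(
    r=8,                        # low-rank dimension
    lora_alpha=16,              # scaling factor
    target_modules=["c_attn"],  # GPT-2-style fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()  # only the adapter weights are trainable
# From here, peft_model can be passed to a standard Trainer for supervised fine-tuning.
```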
Posted 1 week ago