2.0 years
1 - 3 Lacs
India
On-site
Join Our Team – We're Hiring!

We are building and scaling a next-generation cross-platform application for Android, iOS, and Web in the LegalTech space. Our tech stack includes Ionic + Vue 3 on the frontend and Node.js + MySQL on the backend, all securely deployed on optimized Ubuntu servers. We're seeking skilled, motivated developers to join us in shaping the future of legal services.

Project Overview
We are developing Lawfy, a full-scale, cross-platform legal services app. The platform aims to redefine how legal solutions are delivered digitally. You will be part of a fast-paced, innovation-driven team focused on performance, scalability, and usability.

Open Positions

1. Frontend Developer – Ionic + Vue 3 (Mobile & Web)

Key Responsibilities
- Develop and maintain mobile and web applications using the Ionic Framework with Vue 3.
- Deliver responsive, high-performance experiences for Android, iOS, and Web platforms.
- Integrate frontend interfaces with backend REST APIs.
- Implement platform-specific functionality using Capacitor/Cordova plugins (e.g., push notifications, file handling).
- Ensure optimal UI/UX design and consistency across devices.
- Handle app store submissions (Google Play, Apple App Store), including code signing and certificate management.

Required Skills
- Proficiency in Vue 3, the Ionic Framework, TypeScript, and JavaScript.
- Experience developing and publishing mobile apps using Ionic + Capacitor.
- Demonstrated ability to manage the app lifecycle on Google Play and the Apple App Store.
- Strong grasp of responsive design, mobile best practices, and native integrations.
- Familiarity with Git, version control, and CI/CD pipelines.

2. Backend Developer – Node.js + MySQL (API & Server)

Key Responsibilities
- Design, build, and maintain secure RESTful APIs using Node.js and Express.
- Develop and optimize MySQL database schemas and queries.
- Deploy and manage services on Ubuntu/Linux servers (including monitoring and patching).
- Implement robust authentication, input validation, and error handling.
- Collaborate with frontend developers to define and refine API contracts.
- Maintain server logs, backups, SSL configurations, and firewall settings.

Required Skills
- Strong experience with Node.js, Express, and MySQL.
- Deep understanding of RESTful API design principles and security practices.
- Hands-on deployment and server management experience on Ubuntu.
- Familiarity with PM2, nginx, UFW, SSL, and cron jobs.
- Bonus: experience with Redis, WebSockets, or real-time communication services.

General Requirements (For Both Roles)
- Minimum 2 years of hands-on experience in the relevant tech stack.
- Strong debugging, analytical, and problem-solving skills.
- Ability to thrive in a collaborative, agile environment.
- Solid understanding of Git and code versioning workflows.
- High attention to detail and a passion for clean code, performance, and quality.

Job Details
- Location: Kolkata
- Type: Full-time / Contract
- Industry: LegalTech / SaaS / Mobile & Web App Development
- Platforms: Android, iOS, Web

Why Work With Us?
- Be a core part of a high-impact product shaping the future of legal services.
- Work with modern technologies in a performance-focused environment.
- Flexible, transparent, and developer-friendly team culture.

Interested candidates can send their resumes and project portfolios to hr@andwill.co.in (Ph: 9831622059). Let's build something extraordinary together!

Job Types: Full-time, Permanent, Fresher, Internship
Pay: ₹15,000.00 - ₹25,000.00 per month
Benefits: Cell phone reimbursement, flexible schedule, health insurance, Provident Fund
Work Location: In person
Expected Start Date: 21/07/2025
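The backend role above emphasizes input validation and error handling in a REST API. A minimal sketch of the validate-then-handle pattern follows; the posting's stack is Node.js/Express, but the pattern is shown here in Python for brevity, and the endpoint, field names, and error shapes are hypothetical:

```python
import json

class ValidationError(Exception):
    """Raised when a request payload fails validation."""

def validate_signup(payload: dict) -> dict:
    """Check required fields and types; return the cleaned payload."""
    errors = {}
    email = payload.get("email")
    if not isinstance(email, str) or "@" not in email:
        errors["email"] = "a valid email address is required"
    password = payload.get("password")
    if not isinstance(password, str) or len(password) < 8:
        errors["password"] = "password must be at least 8 characters"
    if errors:
        raise ValidationError(errors)
    return {"email": email.strip().lower(), "password": password}

def handle_signup(raw_body: str):
    """Map each failure mode to an HTTP-style (status, body) pair."""
    try:
        data = validate_signup(json.loads(raw_body))
    except json.JSONDecodeError:
        return 400, {"error": "request body is not valid JSON"}
    except ValidationError as exc:
        return 422, {"errors": exc.args[0]}
    return 201, {"email": data["email"]}
```

The key idea, in any stack, is that malformed syntax, invalid content, and success each get a distinct, predictable status and body, so API consumers never have to parse error prose.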
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description

Experience: 5 to 9 years
Location: Any UST location
Job Type: Full-Time

Mandatory Skills
- RESTful APIs: Strong experience designing, developing, and consuming RESTful web services. Proficiency in API security, versioning, and documentation (e.g., Swagger/OpenAPI).
- Data Privacy & Compliance Tools: Hands-on knowledge of tools and practices for ensuring GDPR, CCPA, HIPAA, or other relevant compliance. Familiarity with data classification, encryption standards, and audit frameworks.
- Cloud Platforms:
  - Microsoft Azure: Solid understanding of Azure services, including compute, storage, networking, and security.
  - Alibaba Cloud: Experience with deployment and management of services on Alibaba Cloud.
- Robotic Process Automation (RPA) Tools: Proficiency with RPA tools such as UiPath, Automation Anywhere, or Blue Prism. Experience in designing, developing, and deploying bots for automating business processes.
- SQL & Data Handling: Strong command of SQL (queries, joins, stored procedures). Familiarity with data warehousing and ETL tools is a plus.

Key Responsibilities
- Design and implement scalable backend services and APIs.
- Ensure application architecture aligns with data privacy and compliance standards.
- Work on cross-cloud deployments and integrations (Azure & Alibaba Cloud).
- Develop and manage automation workflows using RPA tools.
- Collaborate with cross-functional teams to ensure quality and timely delivery.

Skills: RESTful APIs, Data Privacy & Compliance Tools, Azure & Alibaba Cloud, Robotic Process Automation Tools
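The SQL requirements above call out queries and joins specifically. A minimal sketch using Python's built-in sqlite3 module illustrates a JOIN with aggregation; the table and column names are invented for the example:

```python
import sqlite3

# In-memory database with two illustrative tables (schema is hypothetical).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, region TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Asha', 'South'), (2, 'Ravi', 'North');
    INSERT INTO orders VALUES (10, 1, 250.0), (11, 1, 100.0), (12, 2, 80.0);
""")

# JOIN + GROUP BY: total order value per customer, highest first.
rows = conn.execute("""
    SELECT c.name, SUM(o.amount) AS total
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.id
    ORDER BY total DESC
""").fetchall()
```

The same query shape (inner join on a foreign key, aggregate per group) carries over directly to MySQL, Redshift, or any warehouse mentioned in the posting.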
Posted 1 week ago
0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description

Role Proficiency: Act creatively to develop applications and select appropriate technical options, optimizing application development, maintenance, and performance by employing design patterns and reusing proven solutions; account for others' developmental activities.

Outcomes
- Interpret the application/feature/component design and develop it in accordance with specifications.
- Code, debug, test, document, and communicate product/component/feature development stages.
- Validate results with user representatives; integrate and commission the overall solution.
- Select appropriate technical options for development, such as reusing, improving, or reconfiguring existing components, or creating own solutions.
- Optimize efficiency, cost, and quality.
- Influence and improve customer satisfaction.
- Set FAST goals for self/team; provide feedback on the FAST goals of team members.

Measures of Outcomes
- Adherence to engineering processes and standards (coding standards)
- Adherence to project schedule/timelines
- Number of technical issues uncovered during project execution
- Number of defects in the code
- Number of defects post-delivery
- Number of non-compliance issues
- On-time completion of mandatory compliance trainings

Outputs Expected
- Code: Code as per design; follow coding standards, templates, and checklists; review code for team and peers.
- Documentation: Create/review templates, checklists, guidelines, and standards for design/process/development; create/review deliverable documents, design documentation, requirements, and test cases/results.
- Configure: Define and govern the configuration management plan; ensure compliance from the team.
- Test: Review and create unit test cases, scenarios, and execution; review the test plan created by the testing team; provide clarifications to the testing team.
- Domain Relevance: Advise software developers on the design and development of features and components with a deep understanding of the business problem being addressed for the client; learn more about the customer domain, identifying opportunities to provide valuable additions to customers; complete relevant domain certifications.
- Manage Project: Manage delivery of modules and/or manage user stories.
- Manage Defects: Perform defect RCA and mitigation; identify defect trends and take proactive measures to improve quality.
- Estimate: Create and provide input for effort estimation for projects.
- Manage Knowledge: Consume and contribute to project-related documents, SharePoint libraries, and client universities; review the reusable documents created by the team.
- Release: Execute and monitor the release process.
- Design: Contribute to the creation of design (HLD, LLD, SAD)/architecture for applications/features/business components/data models.
- Interface with Customer: Clarify requirements and provide guidance to the development team; present design options to customers; conduct product demos.
- Manage Team: Set FAST goals and provide feedback; understand the aspirations of team members and provide guidance and opportunities; ensure the team is engaged in the project.
- Certifications: Take relevant domain/technology certifications.

Skill Examples
- Explain and communicate the design/development to the customer.
- Perform and evaluate test results against product specifications.
- Break down complex problems into logical components.
- Develop user interfaces and business software components.
- Use data models.
- Estimate the time, effort, and resources required for developing/debugging features/components.
- Perform and evaluate tests in the customer or target environment.
- Make quick decisions on technical/project-related challenges.
- Manage a team; mentor and handle people-related issues; maintain high motivation levels and positive dynamics in the team.
- Interface with other teams, designers, and other parallel practices.
- Set goals for self and team; provide feedback to team members.
- Create and articulate impactful technical presentations.
- Follow a high level of business etiquette in emails and other business communication.
- Drive conference calls with customers, addressing customer questions.
- Proactively ask for and offer help.
- Work under pressure; determine dependencies and risks; facilitate planning; handle multiple tasks.
- Build confidence with customers by meeting deliverables on time and with quality.
- Make appropriate utilization of software and hardware.
- Strong analytical and problem-solving abilities.

Knowledge Examples
- Appropriate software programs/modules
- Functional and technical design
- Programming languages (proficient in multiple skill clusters)
- DBMS
- Operating systems and software platforms
- Software Development Life Cycle
- Agile methods (Scrum or Kanban)
- Integrated development environments (IDE)
- Rapid application development (RAD)
- Modelling technology and languages
- Interface definition languages (IDL)
- Knowledge of the customer domain and deep understanding of the sub-domain where the problem is solved

Additional Comments
- Design, build, and maintain robust, reactive REST APIs using Spring WebFlux and Spring Boot.
- Develop and optimize microservices that handle high throughput and low latency.
- Write clean, testable, maintainable code in Java.
- Integrate with MongoDB for CRUD operations, aggregation pipelines, and indexing strategies.
- Apply best practices in API security, versioning, error handling, and documentation.
- Collaborate with front-end developers, DevOps, QA, and product teams.
- Troubleshoot and debug production issues, identify root causes, and deploy fixes quickly.

Required Skills & Experience
- Strong programming experience in Java 17+.
- Proficiency in Spring Boot, Spring WebFlux, and Spring MVC.
- Solid understanding of reactive programming principles.
- Proven experience designing and implementing microservices architecture.
- Hands-on expertise with MongoDB, including schema design and performance tuning.
- Experience with RESTful API design and HTTP fundamentals.
- Working knowledge of build tools like Maven or Gradle.
- Good grasp of CI/CD pipelines and deployment strategies.

Skills: Spring WebFlux, Spring Boot, Kafka
Posted 1 week ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
AI Engineer

Role Summary:
We are seeking an AI Engineer with a strong background in Generative AI, Natural Language Processing (NLP), Large Language Models (LLMs), and agentic AI to join our growing team. You will focus on building robust, scalable AI solutions that harness Azure AI capabilities, with a strong emphasis on MLOps and LLM frameworks. You should have a passion for crafting high-quality, production-ready AI systems and a proven track record of delivering impactful solutions.

Key Responsibilities:
- Design & Develop: Architect and implement Generative AI applications using Azure OpenAI Services, Azure Machine Learning, and other Azure AI tools. Develop and fine-tune LLMs for domain-specific tasks using popular frameworks such as LangChain, LlamaIndex, Hugging Face Transformers, or similar. Build agentic AI systems capable of autonomous reasoning and task execution.
- NLP and Advanced Language Models: Work with Natural Language Processing pipelines, including text classification, summarization, entity extraction, and question answering.
- MLOps & Azure Integration: Build CI/CD pipelines, model versioning, deployment, and monitoring for AI models using Azure Machine Learning MLOps capabilities. Collaborate with the DevOps team to ensure seamless integration of models into production systems.
- Innovation & Scalability: Stay at the forefront of AI advancements, particularly in LLMs and Generative AI, to continuously enhance our AI solutions. Drive the adoption of best practices for scalable and secure AI system development on Azure.
- Mentorship & Collaboration: Provide technical mentorship to junior engineers and contribute to knowledge-sharing within the team. Collaborate closely with cross-functional teams (Product, Engineering) to translate business requirements into technical solutions.

Qualifications & Skills:
- Experience: Minimum 4 years of experience in AI engineering, with a strong focus on Generative AI, LLMs, and NLP. Proven experience building and deploying AI models using Azure AI capabilities, including Azure OpenAI, Azure ML, and Azure Cognitive Services. Solid experience with LLM frameworks such as LangChain, LlamaIndex, Hugging Face, etc. Hands-on experience with MLOps pipelines and CI/CD practices.
- Technical Skills: Proficiency in Python (including libraries like PyTorch, TensorFlow, and popular LLM frameworks). Strong understanding of LLM fine-tuning, prompt engineering, embedding models, and vector databases (e.g., Azure AI Search, Pinecone, Weaviate). Familiarity with containerization (Docker, Kubernetes) and cloud infrastructure, with a preference for Azure.
- Soft Skills: Excellent problem-solving skills, with a focus on delivering production-ready solutions. Strong communication and collaboration abilities, with a knack for explaining complex AI concepts to non-technical stakeholders.

PS: This opportunity is for the Gurugram location only.
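The retrieval step behind the RAG and vector-database skills listed above reduces to nearest-neighbour search over embeddings. A minimal sketch using cosine similarity follows; the 3-dimensional "embeddings" and document ids are toy values standing in for a real embedding model and vector store such as Azure AI Search or Pinecone:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, index, k=2):
    """Return the k document ids most similar to the query vector."""
    scored = sorted(index.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Toy 3-d "embeddings" standing in for real model output.
index = {
    "refund-policy": [0.9, 0.1, 0.0],
    "api-auth":      [0.1, 0.9, 0.1],
    "pricing":       [0.8, 0.2, 0.1],
}
top = retrieve([1.0, 0.0, 0.0], index, k=2)
```

In a production RAG pipeline the retrieved passages are then stuffed into the LLM prompt as grounding context; a vector database replaces the linear scan with an approximate-nearest-neighbour index.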
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
- 7-9 yrs experience in React JS
- Strong proficiency in JavaScript, including DOM manipulation and the JavaScript object model
- Thorough understanding of React.js and its core principles
- Experience with popular React.js workflows (such as Flux or Redux)
- Familiarity with RESTful APIs
- Knowledge of modern authorization mechanisms, such as JSON Web Token, SCS
- Familiarity with modern front-end build pipelines and tools
- Experience with common front-end development tools such as Babel, Webpack, NPM, etc.
- Ability to understand business requirements and translate them into technical requirements
- Familiarity with code versioning tools such as GitHub
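The JSON Web Token mechanism mentioned above is just three base64url segments (header, claims, signature) joined by dots. A minimal sketch follows, in Python for brevity since the posting's stack is JavaScript; the claims are made up for illustration, and real code must verify the signature before trusting any claim:

```python
import base64
import json

def b64url_segment(obj: dict) -> str:
    """Serialize a dict as an unpadded base64url segment, JWT-style."""
    raw = base64.urlsafe_b64encode(json.dumps(obj).encode()).decode()
    return raw.rstrip("=")

def decode_jwt_payload(token: str) -> dict:
    """Decode the claims (middle) segment of a JWT.

    NOTE: this does NOT verify the signature; production code must
    verify it (e.g., with a JWT library) before trusting any claim.
    """
    _header, payload, _signature = token.split(".")
    padded = payload + "=" * (-len(payload) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(padded))

# Build a toy token; a real one carries an HMAC or RSA signature.
token = ".".join([
    b64url_segment({"alg": "HS256", "typ": "JWT"}),
    b64url_segment({"sub": "user-42", "role": "admin"}),
    "fake-signature",
])
claims = decode_jwt_payload(token)
```

On the frontend this is why a JWT can drive UI decisions (show/hide admin routes) without a server round trip, while authorization itself still happens server-side against the verified token.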
Posted 1 week ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
POSITION NAME: AI and Machine Learning Engineer
Location: Noida, NCR
Years of Experience: 4-8 Yrs

About Dailoqa
Dailoqa's mission is to bridge human expertise and artificial intelligence to solve the challenges facing financial services. Our founding team of 20+ international leaders, including former CIOs and senior industry experts, combines extensive technical expertise with decades of real-world experience to create tailored solutions that harness the power of combined intelligence. With a focus on Financial Services clients, we have deep expertise across Risk & Regulations, Retail & Institutional Banking, Capital Markets, and Wealth & Asset Management. Dailoqa has global reach in the UK, Europe, Africa, India, ASEAN, and Australia. We integrate AI into business strategies to deliver tangible outcomes and set new standards for the financial services industry.

Working at Dailoqa will be hard work; our environment is fluid and fast-moving, and you'll be part of a community that values innovation, collaboration, and relentless curiosity.

We're looking for people who:
- Are proactive, curious, adaptable, and patient
- Will shape the company's vision and have a direct impact on its success
- Want the opportunity for fast career growth
- Want the opportunity to participate in the upside of an ultra-growth venture
- Have fun 🙂

Don't apply if:
- You want to work on a single layer of the application.
- You prefer to work on well-defined problems.
- You need clear, pre-defined processes.
- You prefer a relaxed and slow-paced environment.

Our Philosophy
- Small team: Small, talented teams outperform large, slow-moving companies. We avoid bureaucracy, keep meetings to a minimum, and focus on creating value.
- Simple where possible: We are passionate about new technology (in particular Machine Learning and AI), but we are more passionate about solving problems for our customers. We strive to find the best solution, be it cutting-edge or old-school.
- Customer obsessed: We take every opportunity to talk to our customers. We obsess over their problems and work every day to make them happy.

About the Role
We are looking for AI and Machine Learning engineers who want to help shape the future of Financial Services clients and our company. As part of the team, you will get to:
- Work directly with our founding team and be a core member.
- Apply the latest AI techniques to solve real problems faced by Financial Services clients.
- Design, build, and refine datasets to evaluate and continuously improve our solutions.
- Participate in strategy and product ideation sessions, influencing our product and solution roadmap.

Key Responsibilities
- Agentic AI Development: Work on building scalable multi-modal Large Language Model (LLM) based AI agents, leveraging frameworks such as LangGraph, Microsoft AutoGen, or CrewAI.
- AI Research and Innovation: Research and build innovative solutions to relevant AI problems, including Retrieval-Augmented Generation (RAG), semantic search, knowledge representation, tool usage, fine-tuning, and reasoning in LLMs.
- Technical Expertise: Proficiency in a technology stack that includes Python, LlamaIndex/LangChain, PyTorch, HuggingFace, FastAPI, Postgres, SQLAlchemy, Alembic, OpenAI, Docker, Azure, TypeScript, and React.
- LLM and NLP Experience: Hands-on experience working with LLMs, RAG architectures, Natural Language Processing (NLP), or applying Machine Learning to solve real-world problems.
- Dataset Development: Strong track record of building datasets for training and/or evaluating machine learning models.
- Customer Focus: Enjoy diving deep into the domain, understanding the problem, and focusing on delivering value to the customer.
- Adaptability: Thrive in a fast-paced environment and be excited about joining an early-stage venture.
- Model Deployment and Management: Automate model deployment, monitoring, and retraining processes.
- Collaboration and Optimization: Collaborate with data scientists to review, refactor, and optimize machine learning code.
- Version Control and Governance: Implement version control and governance for models and data.

Required Qualifications
- Bachelor's degree in Computer Science, Software Engineering, or a related field
- 4-8 years of experience in MLOps, DevOps, or related roles
- Strong programming experience and familiarity with Python-based deep learning frameworks like PyTorch, JAX, and TensorFlow
- Strong familiarity with and knowledge of machine learning concepts
- Proficiency in cloud platforms (AWS, Azure, or GCP) and infrastructure-as-code tools like Terraform

Desired Skills
- Experience with experiment tracking and model versioning tools
- Experience with our technology stack: Python, LlamaIndex/LangChain, PyTorch, HuggingFace, FastAPI, Postgres, SQLAlchemy, Alembic, OpenAI, Docker, Azure, TypeScript, React
- Knowledge of data pipeline orchestration tools like Apache Airflow or Prefect
- Familiarity with software testing and test automation practices
- Understanding of ethical considerations in machine learning deployments
- Strong problem-solving skills and ability to work in a fast-paced environment
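The "building datasets for evaluating machine learning models" duty in this posting boils down to running a model over labelled pairs and tallying failures. A minimal sketch follows; the keyword classifier and the three labelled examples are invented purely to make the harness runnable:

```python
def evaluate(model, dataset):
    """Score a model against labelled (input, expected) pairs.

    Returns overall accuracy plus the failing cases, so dataset
    authors can inspect exactly where the model breaks.
    """
    results = [(x, expected, model(x)) for x, expected in dataset]
    correct = sum(1 for _, expected, got in results if got == expected)
    return {
        "accuracy": correct / len(results),
        "failures": [(x, expected, got)
                     for x, expected, got in results if got != expected],
    }

# Hypothetical intent classifier and a tiny labelled evaluation set.
dataset = [
    ("the transfer failed again", "complaint"),
    ("how do I reset my PIN?", "question"),
    ("thanks, all sorted now", "praise"),
]
keyword_model = lambda text: ("question" if "?" in text
                              else "complaint" if "failed" in text
                              else "praise")
report = evaluate(keyword_model, dataset)
```

Keeping the failures list (not just the score) is what lets the evaluation dataset drive continuous improvement: each failing pair either exposes a model gap or a mislabelled example.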
Posted 1 week ago
130.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description

Manager, Scientific Data Engineering

The Opportunity
Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Join a team that is passionate about using data, analytics, and insights to drive decision-making and create custom software, allowing us to tackle some of the world's greatest health threats.

Our Technology Centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company's IT operating model, Tech Centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Center helps ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. Together, we must leverage the strength of our team to collaborate globally to optimize connections and share best practices across the Tech Centers.

Role Overview
- Design, develop, and maintain data pipelines to extract data from various sources and populate a data lake and data warehouse.
- Work closely with data scientists, analysts, and business teams to understand data requirements and deliver solutions aligned with business goals.
- Build and maintain platforms that support data ingestion, transformation, and orchestration across various data sources, both internal and external.
- Use data orchestration, logging, and monitoring tools to build resilient pipelines.
- Automate data flows and pipeline monitoring to ensure scalability, performance, and resilience of the platform.
- Monitor, troubleshoot, and resolve issues related to the data integration platform, ensuring uptime and reliability.
- Maintain thorough documentation for integration processes, configurations, and code to ensure easy onboarding for new team members and future scalability.
- Develop pipelines to ingest data into cloud data warehouses.
- Establish, modify, and maintain data structures and associated components.
- Create and deliver standard reports in accordance with stakeholder needs and conforming to agreed standards.
- Work within a matrix organizational structure, reporting to both the functional manager and the project manager.
- Participate in project planning, execution, and delivery, ensuring alignment with both functional and project goals.

What Should You Have
- Bachelor's degree in Information Technology, Computer Science, or any technology stream.
- 3+ years of experience developing data pipelines and data infrastructure, ideally within a drug development or life sciences context.
- Demonstrated expertise in delivering large-scale information management technology solutions encompassing data integration and self-service analytics enablement.
- Experience in software/data engineering practices (including versioning, release management, deployment of datasets, agile, and related software tools).
- Ability to design, build, and unit test applications on the Spark framework in Python.
- Ability to build PySpark-based applications for both batch and streaming requirements, which requires in-depth knowledge of Databricks/Hadoop.
- Experience working with storage frameworks like Delta Lake/Iceberg.
- Experience working with MPP data warehouses like Redshift.
- Cloud-native experience, ideally AWS certified.
- Strong working knowledge of at least one reporting/insight-generation technology.
- Good interpersonal and communication skills (verbal and written).
- Proven record of delivering high-quality results.
- Product- and customer-centric approach.
- Innovative thinking, experimental mindset.

Mandatory Skills
- Foundational Data Concepts: SQL (Intermediate/Advanced), Python (Intermediate)
- Cloud Fundamentals (AWS Focus): AWS Console, IAM roles, regions, concepts of cloud computing, AWS S3
- Data Processing & Transformation: Apache Spark (concepts & usage), Databricks (platform usage), Unity Catalog, Delta Lake
- ETL & Orchestration: AWS Glue (ETL, Catalog), Lambda, Apache Airflow (DAGs and orchestration) or another orchestration tool, dbt (Data Build Tool), Matillion (or similar ETL tool)
- Data Storage & Querying: Amazon Redshift / Azure Synapse, Trino or equivalent, AWS Athena / query federation
- Data Quality & Governance: data quality concepts and implementation, data observability concepts, Collibra or an equivalent tool
- Real-time / Streaming: Apache Kafka (concepts & usage)
- DevOps & Automation: CI/CD concepts, pipelines (GitHub Actions / Jenkins / Azure DevOps)

Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation.

Who We Are
We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases.
Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.

What We Look For
Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us and start making your impact today.

#HYDIT2025

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.
Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):

Required Skills: Business Intelligence (BI), Database Administration, Data Engineering, Data Management, Data Modeling, Data Visualization, Design Applications, Information Management, Software Development, Software Development Life Cycle (SDLC), System Designs

Preferred Skills:

Job Posting End Date: 08/20/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.

Requisition ID: R353508
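The pipeline duties in the data-engineering role above (ingest, validate, transform, monitor) can be sketched in miniature. The record shape and function names below are hypothetical, plain Python stands in for PySpark, and the logging module stands in for a real monitoring stack:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def transform(record):
    """Validate and normalise one raw record; None means 'reject'."""
    # Simplification: treats any falsy id (including 0) as missing.
    if not record.get("id") or record.get("amount") is None:
        return None
    return {"id": str(record["id"]),
            "amount": round(float(record["amount"]), 2)}

def run_batch(raw_records):
    """Transform a batch, logging counts so monitoring can alert
    when the rejection rate spikes."""
    loaded, rejected = [], 0
    for rec in raw_records:
        out = transform(rec)
        if out is None:
            rejected += 1
        else:
            loaded.append(out)
    log.info("batch done: %d loaded, %d rejected", len(loaded), rejected)
    return loaded, rejected

loaded, rejected = run_batch([
    {"id": 1, "amount": "19.999"},
    {"id": None, "amount": 5},   # rejected: missing id
    {"id": 2, "amount": 7},
])
```

The same shape scales up: `transform` becomes a Spark/PySpark map over a DataFrame, the counts become metrics emitted to an observability tool, and the reject path becomes a quarantine table for data-quality review.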
Posted 1 week ago
130.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description The Opportunity Based in Hyderabad, join a global healthcare biopharma company and be part of a 130- year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be a part of a team with passion for using data, analytics, and insights to drive decision-making, and which creates custom software, allowing us to tackle some of the world's greatest health threats. Our Technology Centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company’s IT operating model, Tech Centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Center helps to ensure we can manage and improve each location, from investing in growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging to managing critical emergencies. And together, we must leverage the strength of our team to collaborate globally to optimize connections and share best practices across the Tech Centers. Role Overview As a Software Engineer you will design, develop, and maintain software systems. This role involves both creative and analytical skills to solve complex problems and create efficient, reliable software. 
You will use your expertise in requirements analysis, programming languages, software development methodologies, and tools to build and deliver software products that meet the needs of businesses, organizations, or end-users. You will work with other engineers, product managers and delivery leads, to design systems, determine functional and non-functional needs and implement solutions accordingly. You should be ready to work independently as well as in a team. What Will You Do In This Role With a wealth of knowledge and hands-on experience, regularly mentor peers, provide help, aid in defining standards, and identify reusable code or application modules. Create and document detailed designs for custom software applications or components. Design, code, verify, test, document, amend and refactor moderately complex applications and software configurations for deployment in collaboration with cross-disciplinary teams across various regions worldwide. Elicit requirements for systems and software life cycle working practices and automation. Prepare design options for the working environment of methods, procedures, techniques, tools, and people. Utilize systems and software life cycle working practices for software components and micro-services. Deploy automation to achieve well-engineered and secure outcome. Contribute to the development of solution architectures in specific business, infrastructure or functional areas. Identify and evaluate alternative architectures and the trade-offs in cost, performance and scalability. Work within a matrix organizational structure, reporting to both the functional manager and the Product manager. Participate in Product planning, execution, and delivery, ensuring alignment with both functional and Product goals. What Should You Have Bachelors’ degree in Information Technology, Computer Science or any Technology stream. 
7+ years of hands-on experience working with technologies: HTML, CSS, REST APIs, HTTP, SQL and databases, plus at least one programming language from our supported stack (TypeScript / Node / React, Java, Python, .NET). Experience working with DevSecOps tools for deploying and versioning code. Familiarity with modern product development practices: Agile, Scrum, test-driven development, UX, design thinking. Familiarity with DevOps practices (Git, Docker, infrastructure as code, observability, continuous integration/continuous deployment - CI/CD). Cloud-native experience, ideally AWS certified. Product- and customer-centric approach. Experience with other programming languages (Python, Java, TypeScript, .NET) is a nice-to-have. Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation. Who We Are We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world. What We Look For Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us—and start making your impact today. 
#HYDIT2025 Current Employees apply HERE Current Contingent Workers apply HERE Search Firm Representatives Please Read Carefully Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.
Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Data Engineering, Data Visualization, Design Applications, Software Configurations, Software Development, Software Development Life Cycle (SDLC), Solution Architecture, System Designs, Systems Integration, Testing
Preferred Skills:
Job Posting End Date: 07/28/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date. Requisition ID: R353505
Posted 1 week ago
0 years
0 Lacs
India
Remote
Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India’s top 1% Data Scientists for a unique job opportunity to work with industry leaders. Who can be a part of the community? We are looking for top-tier Data Scientists with expertise in predictive modeling, statistical analysis, and A/B testing. If you have experience in this field, this is your chance to collaborate with industry leaders. What’s in it for you? Pay above market standards. The role is contract-based, with project timelines from 2-12 months, or freelancing. Be a part of an elite community of professionals who can solve complex AI challenges. Work location could be: remote (highly likely), onsite at the client location, or Deccan AI’s office in Hyderabad or Bangalore. Responsibilities: Lead the design, development, and deployment of scalable data science solutions, optimizing large-scale data pipelines in collaboration with engineering teams. Architect advanced machine learning models (deep learning, RL, ensemble) and apply statistical analysis, predictive modeling, and optimization techniques to derive actionable business insights. Own the full lifecycle of data science projects—from data acquisition, preprocessing, and exploratory data analysis (EDA) to model development, deployment, and monitoring. Implement MLOps workflows (model training, deployment, versioning, monitoring) and conduct A/B testing to validate models. Required Skills: Expert in Python, data science libraries (Pandas, NumPy, Scikit-learn), and R, with extensive experience in machine learning (XGBoost, PyTorch, TensorFlow) and statistical modeling. Proficient in building scalable data pipelines (Apache Spark, Dask) and cloud platforms (AWS, GCP, Azure). Expertise in MLOps (Docker, Kubernetes, MLflow, CI/CD), along with strong data visualization skills (Tableau, Plotly Dash) and business acumen. 
Nice to Have: Experience with NLP, computer vision, recommendation systems, or real-time data processing (Kafka, Flink). Knowledge of data privacy regulations (GDPR, CCPA) and ethical AI practices. Contributions to open-source projects or published research. What are the next steps? 1. Register on our Soul AI website. 2. Our team will review your profile. 3. Clear the screening rounds: complete the assessments once you are shortlisted. Once you clear all the screening rounds (assessments, interviews), you will be added to our Expert Community! 4. Profile matching and project allocation: be patient while we align your skills and preferences with the available projects. Skip the Noise. Focus on Opportunities Built for You!
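The A/B testing responsibility mentioned above usually comes down to a significance test on two conversion rates. A minimal, stdlib-only Python sketch of a two-proportion z-test (the function name and all numbers are invented for illustration):

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled proportion under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF, via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# hypothetical experiment: 10% baseline conversion vs 13% in the variant
z, p = two_proportion_ztest(200, 2000, 260, 2000)
```

In practice one would reach for `scipy.stats` or `statsmodels`, but the arithmetic underneath is exactly this.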
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About BNP Paribas India Solutions Established in 2005, BNP Paribas India Solutions is a wholly owned subsidiary of BNP Paribas SA, European Union’s leading bank with an international reach. With delivery centers located in Bengaluru, Chennai and Mumbai, we are a 24x7 global delivery center. India Solutions services three business lines: Corporate and Institutional Banking, Investment Solutions and Retail Banking for BNP Paribas across the Group. Driving innovation and growth, we are harnessing the potential of over 10000 employees, to provide support and develop best-in-class solutions. About BNP Paribas Group BNP Paribas is the European Union’s leading bank and a key player in international banking. It operates in 65 countries and has nearly 185,000 employees, including more than 145,000 in Europe. The Group has key positions in its three main fields of activity: Commercial, Personal Banking & Services for the Group’s commercial & personal banking and several specialized businesses including BNP Paribas Personal Finance and Arval; Investment & Protection Services for savings, investment, and protection solutions; and Corporate & Institutional Banking, focused on corporate and institutional clients. Based on its strong diversified and integrated model, the Group helps all its clients (individuals, community associations, entrepreneurs, SMEs, corporate and institutional clients) to realize their projects through solutions spanning financing, investment, savings and protection insurance. In Europe, BNP Paribas has four domestic markets: Belgium, France, Italy, and Luxembourg. The Group is rolling out its integrated commercial & personal banking model across several Mediterranean countries, Turkey, and Eastern Europe. As a key player in international banking, the Group has leading platforms and business lines in Europe, a strong presence in the Americas as well as a solid and fast-growing business in Asia-Pacific. 
BNP Paribas has implemented a Corporate Social Responsibility approach in all its activities, enabling it to contribute to the construction of a sustainable future, while ensuring the Group's performance and stability. Commitment to Diversity and Inclusion At BNP Paribas, we passionately embrace diversity and are committed to fostering an inclusive workplace where all employees are valued, respected and can bring their authentic selves to work. We prohibit Discrimination and Harassment of any kind, and our policies promote equal employment opportunity for all employees and applicants, irrespective of, but not limited to their gender, gender identity, sex, sexual orientation, ethnicity, race, color, national origin, age, religion, social status, mental or physical disabilities, veteran status etc. As a global Bank, we truly believe that inclusion and diversity of our teams is key to our success in serving our clients and the communities we operate in. About Business line/Function: ARVAL Service Lease Arval is a car rental company with services, for professional entities and, more recently, the retail market. It serves 30 countries with the goal of being best in class for both customer satisfaction and innovation (partnerships, new technologies, etc.). Arval IT is the IT department of the company. The “Data & AI” Tribe oversees the development and maintenance of all the IT assets related to the company's Data activity. The “Data & AI” Tribe is organized into 10 business-oriented Squads, distributed across 5 Domains: 3 Squads for the “Sales & Clients” Domain: in charge of changes and maintenance of the Data sales projects. These Squads are mainly focused on the following activities: MyFleetStatus and the partnership with Elements, Amazon Reporting, Customer Marketing, and printing activities for G4 countries. 
2 Squads for the “Operations” Domain: responsible for the Change and Run of Reporting, centralization of data in the ODS, and data flows related to the following functional scope: Operations services (insurance, maintenance, tires, glass, fuel card, …), Buy, Delivery and End of Contract, Telematics, and Complaint Management. 1 Squad for the “Data Management & Security” Domain: its objective is to offer capabilities that support business-driven use cases, by providing Data Platforms connected to valuable Data Sources for Data Preparation and Data Science, by sharing knowledge on Data Sources and Data Products, by promoting the Data Management By Design approach in line with BNPP and Arval Data Governance, and by consuming Data Products through Data Visualization or integrating them into Arval applications. 2 Squads for the “Foundations” Domain: define a Business Data Architecture based on clear principles, accompany new projects to integrate them into the Datahub framework, and stabilize and improve the performance, accessibility, and monitoring of Data assets. These Squads also handle Document management (EDM). 2 Squads for the “AI & Smart Automation” Domain: manage RPA activities, building robots with Blue Prism to automate and ease the life of the Arval business, plus a dedicated Artificial Intelligence squad working on Data Science, Machine Learning, Intelligent Document Processing, Generative AI, and more. Each Squad is organized as an Agile team: 1 IT Product Owner, 1 Scrum Master, and 1 Dev Team (composed of Designers and Engineers).
Job Title: Engineer (Developer)
Date: 01/01/2024
Department: One IT
Location: Offshore
Business Line / Function: IT Data Tribe Engineer (Java)
Reports To (Direct): Engineer Chapter Lead
Reports To (Functional): Squad IT PO
Grade (if applicable): N/A
Number Of Direct Reports:
Directorship / Registration: NA
Position Purpose The engineer/developer works closely with team members and business teams to develop features according to requirements. 
He must also ensure that developments are aligned with best practices and monitor what has been promoted to higher environments. Support activities on production and test environments (job monitoring, issue solving, …) are also part of the position. Responsibilities Direct Responsibilities Participate in all the Agile ceremonies of the squad (dailies, sprint plannings, backlog refinements, reviews, etc.). Communicate as soon as possible on blocking points. Estimate, design, and build technical solutions for business needs and requirements according to the Jira tickets (changes, bug fixes, …). Unit test the developed code before delivering it for user acceptance testing. Design technical solutions for business needs and requirements. Maintain clean code or a robust infrastructure (depending on developer/Ops expertise). Raise technical improvements and impediments. Contributing Responsibilities Accountable for delivering the Jira tickets assigned to him/her during a sprint. Accountable for the daily support/monitoring. Accountable for contributing to Engineer Chapter events and the Tribe community (Guild, with the Engineer Tech Lead). Accountable for the platform's behavior: ensure the platform is up and running (stability), together with the tech lead and other members of the guild. Technical & Behavioral Competencies Be autonomous on his asset (senior to expert level) and motivated in the daily tasks. Be an active member of the squad (often multi-technology). Be proactive: raise alerts when an issue is flagged, try to understand the global context, and suggest how to solve the issue (without focusing only on his own subpart); this also helps in understanding the rest of the team (who will not necessarily work on the same tool). Be able to provide proof that developments match the requirements (test cases). Communicate properly with the Business (in English). Have some knowledge of deployment/versioning tools (Git, Jenkins, …). Have some knowledge of project-tracking software (Jira, QC, …). Have some knowledge of monitoring tools (Centreon, Dynatrace, …). Have a strong background in SQL and be able to check test cases independently by running SQL queries against the different databases. Provide technical expertise to suggest optimizations and technical enhancements. Unix/Windows. Specific Technical Skills Required For This Role Java (v8/v17), mainly backend. Spring Boot (v2/v3) / Maven / Git / Jenkins. Containers (Docker/Kubernetes). Skills in Python and Kafka can be an advantage. Skills Referential Specific Qualifications (if required) Behavioural Skills (up to 4): Attention to detail / rigor; Adaptability; Creativity & Innovation / Problem solving; Ability to deliver / Results driven. Transversal Skills (up to 5): Analytical ability; Ability to develop and adapt a process; Ability to anticipate business / strategic evolution; Ability to understand, explain and support change; Ability to develop others & improve their skills. Education Level: Bachelor's degree or equivalent. Experience Level: At least 5 years
Posted 1 week ago
0.0 - 2.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Responsibilities: Building optimized, scalable, and efficient applications. Troubleshooting and debugging to optimize performance. Developing and coding back-end components and connecting applications to other web services. Continually exploring new technology solutions to enhance functionality. Providing code documentation and other inputs to technical documents. Participating in code reviews. Key Requirements: You possess strong knowledge of common Goroutine and channel patterns. You have expertise in the full suite of Go and Python frameworks and tools. You have an understanding of dependency management tools such as Godep, Sltr, etc. You have strong knowledge of the Go templating language and code generation tools, such as Stringer. You are experienced in using code versioning tools such as Git or equivalent. You have experience with RESTful APIs. You have experience with database systems (SQL/NoSQL). You are familiar with various testing tools. Amazon Web Services (AWS) infrastructure knowledge is a plus. Experience with Agile methodologies (such as Scrum and Kanban). Knowledge of containerisation technologies like Docker and Kubernetes. Characteristics we look for: Excellent communication skills. Ability to work collaboratively in a team-oriented environment. Work from office is mandatory. Job Type: Full-time Pay: Up to ₹900,000.00 per year Ability to commute/relocate: Bengaluru, Karnataka: Reliably commute or planning to relocate before starting work (Required) Application Question(s): Years of experience in Python? Experience: GoLang: 2 years (Required) Work Location: In person Speak with the employer +91 8056691380
Posted 1 week ago
0.0 - 1.0 years
0 - 0 Lacs
Nehru Place, Delhi, Delhi
On-site
We are looking for an experienced PHP developer to manage our front-end as well as back-end services and ensure a seamless interchange of data between the server and our users. As a PHP developer, you will be responsible for developing and coding all server-side logic. You will also be required to maintain the central database and respond to requests from front-end developers. To ensure success as a PHP developer, you should have in-depth knowledge of object-oriented PHP programming, an understanding of MVC designs, and working knowledge of front-end technologies including HTML5, JavaScript, and CSS3. Ultimately, a top-level PHP developer can design and build efficient PHP modules while seamlessly integrating front-end technologies. PHP Developer Responsibilities: Update and maintain existing website pages. Create and implement responsive designs for all devices. Ensure speed optimization and performance-friendly code. Regularly update banners, images, and content blocks. Fix layout bugs and enhance UI/UX. Conduct analysis of website and application requirements. Write back-end code and build efficient PHP modules. Develop back-end portals with an optimised database. Troubleshoot application and code issues. Integrate data storage solutions. Respond to integration requests from front-end developers. Finalise back-end features and test web applications. Update and alter application features to enhance performance. PHP Developer Requirements: Bachelor’s degree in computer science or a similar field. Knowledge of PHP web frameworks including Yii, Laravel, and CodeIgniter, plus SQL. Knowledge of front-end technologies including CSS3, JavaScript, and HTML5. Understanding of object-oriented PHP programming. Previous experience creating scalable applications. Proficient with code versioning tools including Git, Mercurial, CVS, and SVN. Familiarity with SQL/NoSQL databases. Project management ability. Good problem-solving skills. 
Job Type: Full-time Pay: ₹15,000.00 - ₹30,000.00 per month Schedule: Day shift Ability to commute/relocate: Nehru Place, New Delhi, Delhi: Reliably commute or planning to relocate before starting work (Preferred) Education: Diploma (Preferred) Experience: PHP Web Developer: 1 year (Required)
Posted 1 week ago
0.0 - 5.0 years
0 Lacs
Thiruvananthapuram District, Kerala
On-site
Role Overview: We are looking for a skilled and versatile AI Infrastructure Engineer (DevOps/MLOps) to build and manage the cloud infrastructure, deployment pipelines, and machine learning operations behind our AI-powered products. You will work at the intersection of software engineering, ML, and cloud architecture to ensure that our models and systems are scalable, reliable, and production-ready. Key Responsibilities: Design and manage CI/CD pipelines for both software applications and machine learning workflows. Deploy and monitor ML models in production using tools like MLflow, SageMaker, Vertex AI, or similar. Automate the provisioning and configuration of infrastructure using IaC tools (Terraform, Pulumi, etc.). Build robust monitoring, logging, and alerting systems for AI applications. Manage containerized services with Docker and orchestration platforms like Kubernetes. Collaborate with data scientists and ML engineers to streamline model experimentation, versioning, and deployment. Optimize compute resources and storage costs across cloud environments (AWS, GCP, or Azure). Ensure system reliability, scalability, and security across all environments. Requirements: 5+ years of experience in DevOps, MLOps, or infrastructure engineering roles. Hands-on experience with cloud platforms (AWS, GCP, or Azure) and services related to ML workloads. Strong knowledge of CI/CD tools (e.g., GitHub Actions, Jenkins, GitLab CI). Proficiency in Docker, Kubernetes, and infrastructure-as-code frameworks. Experience with ML pipelines, model versioning, and ML monitoring tools. Scripting skills in Python, Bash, or similar for automation tasks. Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK, CloudWatch, etc.). Understanding of ML lifecycle management and reproducibility. Preferred Qualifications: Experience with Kubeflow, MLflow, DVC, or Triton Inference Server. Exposure to data versioning, feature stores, and model registries. 
Certification in AWS/GCP DevOps or Machine Learning Engineering is a plus. Background in software engineering, data engineering, or ML research is a bonus. What We Offer: Work on cutting-edge AI platforms and infrastructure. Cross-functional collaboration with top ML, research, and product teams. Competitive compensation package – no constraints for the right candidate. Send your resume to: thasleema@qcentro.com Job Type: Permanent Ability to commute/relocate: Thiruvananthapuram District, Kerala: Reliably commute or planning to relocate before starting work (Required) Experience: DevOps and MLOps: 5 years (Required) Work Location: In person
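In production, model versioning is usually delegated to a tool like MLflow or DVC, but the core idea, content-addressed versions plus a registry of metrics, can be sketched in a few lines of plain Python. All names here (`ModelRegistry`, `artifact_version`) are invented for illustration and are not any real tool's API:

```python
import hashlib

def artifact_version(payload: bytes) -> str:
    """Content-addressed version tag: first 12 hex chars of the SHA-256 digest."""
    return hashlib.sha256(payload).hexdigest()[:12]

class ModelRegistry:
    """Toy in-memory registry mapping model name -> ordered list of versions."""
    def __init__(self):
        self._models = {}

    def register(self, name: str, payload: bytes, metrics: dict) -> str:
        version = artifact_version(payload)
        self._models.setdefault(name, []).append(
            {"version": version, "metrics": metrics})
        return version

    def latest(self, name: str) -> dict:
        return self._models[name][-1]

registry = ModelRegistry()
v1 = registry.register("churn", b"weights-v1", {"auc": 0.81})
v2 = registry.register("churn", b"weights-v2", {"auc": 0.84})
```

Because the version tag is derived from the artifact bytes, identical models always get identical tags, which is the same property real registries rely on for reproducibility.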
Posted 1 week ago
10.0 years
0 Lacs
India
Remote
Job Title: SharePoint Architect & Senior Developers – Remote (PAN India) Location: Remote (Work from Home, PAN India) Client: A Global MNC (30,000+ Employees | Enterprise IT Solutions) Experience: SharePoint Architect: 10+ years Senior SharePoint Developers: 7+ years Job Description: We are hiring for a leading Global MNC with a workforce of 30,000+ employees to build advanced SharePoint Document Management Systems (DMS) and Intranet solutions . We seek an experienced SharePoint Architect and Senior SharePoint Developers with deep expertise in PnP PowerShell, SPFx with React.js , and enterprise-grade SharePoint deployments. Key Responsibilities: For SharePoint Architect (10+ Years): Design and implement large-scale SharePoint DMS/Intranet solutions for a global workforce. Lead SharePoint Online/On-Prem migrations , governance, and architecture for the MNC client. Automate processes using PnP PowerShell (site provisioning, templates, governance). Develop SPFx solutions (React.js) for custom web parts, extensions, and integrations. Optimize document management workflows , metadata, permissions, and search. Integrate SharePoint with Azure services (Azure AD, Logic Apps, MS Graph API) . For Senior SharePoint Developers (7+ Years): Build SPFx (React.js) components for SharePoint Online/2019/2016. Implement PnP PowerShell scripts for automation and bulk operations. Enhance DMS capabilities (versioning, records management, compliance). Work with CSOM, REST API, and SharePoint Search for custom solutions. Follow CI/CD best practices (Azure DevOps/GitHub Actions) for SPFx deployments. 
Mandatory Skills: ✔ SharePoint DMS (Document Management) expertise – not just internet portals ✔ PnP PowerShell scripting (must have) ✔ SPFx with React.js (web parts, extensions) ✔ SharePoint Online/2019/2016 Intranet solutions ✔ CSOM, REST API, SharePoint Search ✔ Azure AD Integration (optional but preferred) Not Required: ❌ Power Apps / Power Automate (not part of this project) ❌ SharePoint public-facing portals Preference will be given to immediate joiners. Drop your updated resume to resumes@ingeniumnext.com at the earliest.
Posted 1 week ago
5.0 years
0 Lacs
India
On-site
Summary As a Data Scientist you will build and deploy data-driven solutions to support business goals. You will use your skills in data analytics, machine learning (supervised and unsupervised) and GenAI to translate complex data into actionable insights. As a data scientist you will work closely with a cross-functional team of data engineers, product owners, and DevOps, bridging the gap between technical implementation and business needs. Responsibilities (Other duties may be assigned.) Experiment with data and engineer features to design and build machine/deep learning models with the precision, recall, and F1 scores appropriate to the use case. Prompt engineer to develop new and enhance existing GenAI applications (chatbots, RAG). Develop and implement advanced AI agents capable of performing autonomous tasks, decision-making, and executing requirement-specific workflows. Document and create experiment reports on the implementation and code in a way that is clear and accessible to both technical and non-technical team members. Perform advanced data analysis, manipulation, and cleansing to extract actionable insights from structured and unstructured data. Create scalable and efficient recommendation systems that enhance user personalization and engagement. Effectively communicate technical solutions and findings to both technical and non-technical stakeholders. Design and deploy AI-driven chatbots and virtual assistants, focusing on natural language understanding and contextual relevance. Implement and optimize supervised and unsupervised learning models for NLP tasks, including text classification, sentiment analysis, and language generation. Explore, understand, and develop state-of-the-art technologies for AI agents, integrating them with broader enterprise systems. Collaborate with cross-functional teams to gather business requirements and deliver AI-driven solutions tailored to specific use cases. 
Automate workflows using advanced AI tools and frameworks to increase efficiency and reduce manual interventions. Stay informed about cutting-edge advancements in AI, machine learning, NLP, and GenAI applications, and assess their relevance to the organization. Education and/or experience: At least 5 years of experience working in data science, preferably with a bachelor's (or master's) degree in Computer Science, Data Science, or Artificial Intelligence. Knowledge Skills and Abilities: Strong understanding of mathematics, including vector algebra and probability theory, for understanding and explaining machine learning (discriminative and generative) models. Strong expertise in data analytics, pattern recognition, and machine learning, including predictive modeling and recommendation systems. Excellent communication and documentation skills to articulate complex ideas to diverse audiences. Hands-on experience with large datasets and using distributed systems for analytics and modelling. Advanced understanding of natural language processing (NLP) techniques and tools, including transformers like BERT, GPT, or similar models, including open-source LLMs. Strong knowledge of cloud platforms (AWS) for deploying and scaling AI models. Proficiency with code versioning platforms like CodeCommit and GitHub. Technical Skills: Python proficiency and hands-on experience with libraries like Pandas, Dask, NumPy, Matplotlib, NLTK, scikit-learn, PyTorch, and TensorFlow. Experience in prompt engineering for AI models to enhance functionality and adaptability. Familiarity with AI agent frameworks like LangChain, OpenAI APIs, or other agent-building tools. Advanced skills in relational databases (Postgres) and vector databases: querying, analytics, semantic search, and data manipulation. Strong problem-solving and critical-thinking skills, with the ability to handle complex technical challenges. Hands-on experience working with API frameworks like Flask, FastAPI, etc. 
Preferred: Hands-on experience building and deploying conversational AI, chatbots, and virtual assistants. Familiarity with MLOps pipelines and CI/CD for AI/ML workflows. Experience with reinforcement learning or multi-agent systems. Language Skills Ability to speak the English language proficiently, both verbally and in writing. Work Environment The work environment characteristics described here are representative of those an employee encounters while performing the essential functions of this job. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. The employee works primarily in a home office environment. The home office must be a well-defined work area, separate from normal domestic activity and complete with all essential technology including, but not limited to, a separate phone, scanner, printer, computer, etc. as required in order to effectively perform their duties. Compliance with all relevant FINEOS Global policies and procedures related to Quality, Security, Safety, Business Continuity, and Environmental systems is required. Travel and fieldwork, including international travel, may be required; therefore, the employee must possess, or be able to acquire, a valid passport. Must be legally eligible to work in the country in which you are hired. FINEOS is an Equal Opportunity Employer. FINEOS does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
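The RAG work described in the responsibilities above reduces to a retrieve-then-prompt loop: find the most relevant documents, then assemble them into the model's context. A stdlib-only Python sketch, where simple token overlap stands in for a real embedding/vector search, and all documents and function names are invented:

```python
import re

def tokenize(text):
    """Lowercased word set -- a crude stand-in for an embedding."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, docs, k=2):
    """Rank documents by token overlap with the query, keep the top k."""
    q = tokenize(query)
    return sorted(docs, key=lambda d: len(q & tokenize(d)), reverse=True)[:k]

def build_prompt(query, docs, k=2):
    """Assemble retrieved passages plus the question into one prompt string."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Claims are filed through the portal within 30 days.",
    "The cafeteria menu changes weekly.",
    "Filed claims are reviewed by an adjuster within 5 days.",
]
prompt = build_prompt("How are claims filed and reviewed?", docs)
```

A production system would swap `retrieve` for a semantic search against a vector database (as the Technical Skills section mentions) and send `prompt` to an LLM, but the data flow is the same.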
Posted 1 week ago
5.0 years
0 Lacs
Greater Kolkata Area
On-site
Lexmark is now a proud part of Xerox, bringing together two trusted names and decades of expertise into a bold and shared vision. When you join us, you step into a technology ecosystem where your ideas, skills, and ambition can shape what comes next. Whether you’re just starting out or leading at the highest levels, this is a place to grow, stretch, and make real impact—across industries, countries, and careers. From engineering and product to digital services and customer experience, you’ll help connect data, devices, and people in smarter, faster ways. This is meaningful, connected work—on a global stage, with the backing of a company built for the future, and a robust benefits package designed to support your growth, well-being, and life beyond work. Responsibilities : A Data Engineer with AI/ML focus combines traditional data engineering responsibilities with the technical requirements for supporting Machine Learning (ML) systems and artificial intelligence (AI) applications. This role involves not only designing and maintaining scalable data pipelines but also integrating advanced AI/ML models into the data infrastructure. The role is critical for enabling data scientists and ML engineers to efficiently train, test, and deploy models in production. This role is also responsible for designing, building, and maintaining scalable data infrastructure and systems to support advanced analytics and business intelligence. This role often involves leading data engineering projects, mentoring junior team members, and collaborating with cross-functional teams. Key Responsibilities: Data Infrastructure for AI/ML: Design and implement robust data pipelines that support data preprocessing, model training, and deployment. Ensure that the data pipeline is optimized for the high-volume and high-velocity data required by ML models. Build and manage feature stores that can efficiently store, retrieve, and serve features for ML models. 
AI/ML Model Integration: Collaborate with ML engineers and data scientists to integrate machine learning models into production environments. Implement tools for model versioning, experimentation, and deployment (e.g., MLflow, Kubeflow, TensorFlow Extended). Support automated retraining and model monitoring pipelines to ensure models remain performant over time. Data Architecture & Design Design and maintain scalable, efficient, and secure data pipelines and architectures. Develop data models (both OLTP and OLAP). Create and maintain ETL/ELT processes. Data Pipeline Development Build automated pipelines to collect, transform, and load data from various sources (internal and external). Optimize data flow and collection for cross-functional teams. MLOps Support: Develop CI/CD pipelines to deploy models into production environments. Implement model monitoring, alerting, and logging for real-time model predictions. Data Quality & Governance Ensure high data quality, integrity, and availability. Implement data validation, monitoring, and alerting mechanisms. Support data governance initiatives and ensure compliance with data privacy laws (e.g., GDPR, HIPAA). Tooling & Infrastructure Work with cloud platforms (AWS, Azure, GCP) and data engineering tools like Apache Spark, Kafka, Airflow, etc. Use containerization (Docker, Kubernetes) and CI/CD pipelines for data engineering deployments. Team Collaboration & Mentorship Collaborate with data scientists, analysts, product managers, and other engineers. Provide technical leadership and mentor junior data engineers. 
Core Competencies:
- Data Engineering: Apache Spark, Airflow, Kafka, dbt, ETL/ELT pipelines
- ML/AI Integration: MLflow, feature stores, TensorFlow, PyTorch, Hugging Face
- GenAI: LangChain, OpenAI API, vector databases (FAISS, Pinecone, Weaviate)
- Cloud Platforms: AWS (S3, SageMaker, Glue), GCP (BigQuery, Vertex AI)
- Languages: Python, SQL, Scala, Bash
- DevOps & Infra: Docker, Kubernetes, Terraform, CI/CD pipelines

Educational Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in data engineering or a related field.
- Strong understanding of data modeling, ETL/ELT concepts, and distributed systems.
- Experience with big data tools and cloud platforms.

Soft Skills:
- Strong problem-solving and critical-thinking skills.
- Excellent communication and collaboration abilities.
- Leadership experience and the ability to guide technical decisions.

How to Apply?
Are you an innovator? Here is your chance to make your mark with a global technology leader. Apply now!

Global Privacy Notice
Lexmark is committed to appropriately protecting and managing any personal information you share with us. Click here to view Lexmark's Privacy Notice.
Posted 1 week ago
5.0 years
0 Lacs
Greater Kolkata Area
On-site
Lexmark is now a proud part of Xerox, bringing together two trusted names and decades of expertise into a bold and shared vision. When you join us, you step into a technology ecosystem where your ideas, skills, and ambition can shape what comes next. Whether you’re just starting out or leading at the highest levels, this is a place to grow, stretch, and make real impact across industries, countries, and careers. From engineering and product to digital services and customer experience, you’ll help connect data, devices, and people in smarter, faster ways. This is meaningful, connected work on a global stage, with the backing of a company built for the future, and a robust benefits package designed to support your growth, well-being, and life beyond work.

Responsibilities:
A Senior Data Engineer with an AI/ML focus combines traditional data engineering responsibilities with the technical requirements of supporting Machine Learning (ML) systems and artificial intelligence (AI) applications. This role involves not only designing and maintaining scalable data pipelines but also integrating advanced AI/ML models into the data infrastructure. The role is critical for enabling data scientists and ML engineers to efficiently train, test, and deploy models in production. It is also responsible for designing, building, and maintaining scalable data infrastructure and systems to support advanced analytics and business intelligence, and often involves leading data engineering projects, mentoring junior team members, and collaborating with cross-functional teams.

Key Responsibilities:

Data Infrastructure for AI/ML:
- Design and implement robust data pipelines that support data preprocessing, model training, and deployment.
- Ensure that the data pipeline is optimized for the high-volume, high-velocity data required by ML models.
- Build and manage feature stores that can efficiently store, retrieve, and serve features for ML models.
AI/ML Model Integration:
- Collaborate with ML engineers and data scientists to integrate machine learning models into production environments.
- Implement tools for model versioning, experimentation, and deployment (e.g., MLflow, Kubeflow, TensorFlow Extended).
- Support automated retraining and model monitoring pipelines to ensure models remain performant over time.

Data Architecture & Design:
- Design and maintain scalable, efficient, and secure data pipelines and architectures.
- Develop data models (both OLTP and OLAP).
- Create and maintain ETL/ELT processes.

Data Pipeline Development:
- Build automated pipelines to collect, transform, and load data from various sources (internal and external).
- Optimize data flow and collection for cross-functional teams.

MLOps Support:
- Develop CI/CD pipelines to deploy models into production environments.
- Implement model monitoring, alerting, and logging for real-time model predictions.

Data Quality & Governance:
- Ensure high data quality, integrity, and availability.
- Implement data validation, monitoring, and alerting mechanisms.
- Support data governance initiatives and ensure compliance with data privacy laws (e.g., GDPR, HIPAA).

Tooling & Infrastructure:
- Work with cloud platforms (AWS, Azure, GCP) and data engineering tools such as Apache Spark, Kafka, and Airflow.
- Use containerization (Docker, Kubernetes) and CI/CD pipelines for data engineering deployments.

Team Collaboration & Mentorship:
- Collaborate with data scientists, analysts, product managers, and other engineers.
- Provide technical leadership and mentor junior data engineers.
Core Competencies:
- Data Engineering: Apache Spark, Airflow, Kafka, dbt, ETL/ELT pipelines
- ML/AI Integration: MLflow, feature stores, TensorFlow, PyTorch, Hugging Face
- GenAI: LangChain, OpenAI API, vector databases (FAISS, Pinecone, Weaviate)
- Cloud Platforms: AWS (S3, SageMaker, Glue), GCP (BigQuery, Vertex AI)
- Languages: Python, SQL, Scala, Bash
- DevOps & Infra: Docker, Kubernetes, Terraform, CI/CD pipelines

Educational Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in data engineering or a related field.
- Strong understanding of data modeling, ETL/ELT concepts, and distributed systems.
- Experience with big data tools and cloud platforms.

Soft Skills:
- Strong problem-solving and critical-thinking skills.
- Excellent communication and collaboration abilities.
- Leadership experience and the ability to guide technical decisions.

How to Apply?
Are you an innovator? Here is your chance to make your mark with a global technology leader. Apply now!

Global Privacy Notice
Lexmark is committed to appropriately protecting and managing any personal information you share with us. Click here to view Lexmark's Privacy Notice.
Posted 1 week ago
8.0 years
132 - 180 Lacs
Hyderabad, Telangana, India
Remote
Location: Offshore (Onsite Delivery Model)
Number of Roles: 3
Experience: 5–8 years
Start Date: Immediate or within 15 days
Employment Type: Contract / Full-time (as applicable)

Job Overview
We are seeking highly skilled GIS Data Engineers with strong expertise in Python, PySpark, and geospatial data processing to support the integration and analysis of spatial datasets. The selected candidates will work on large-scale data engineering projects in a distributed environment, focusing on building pipelines and workflows that support geospatial analytics and visualization.

Key Responsibilities
- Develop and maintain scalable ETL pipelines using PySpark and Python for processing GIS and spatial datasets
- Work with large geospatial datasets from various sources (e.g., shapefiles, GeoJSON, raster formats, satellite data)
- Integrate geospatial data with enterprise systems (e.g., PostgreSQL/PostGIS, big data platforms)
- Optimize queries and data transformations for performance and accuracy
- Collaborate with data scientists, GIS analysts, and business users to deliver data solutions
- Implement best practices for data quality, versioning, and lineage in spatial data pipelines
- Troubleshoot data issues and support production data workflows

Required Skills
- Strong hands-on experience with PySpark (Apache Spark with Python)
- Advanced Python programming skills, especially in data handling and automation
- Experience with geospatial libraries such as GeoPandas, Shapely, Fiona, Rasterio, or GDAL
- Proficient in working with GIS formats (shapefiles, raster data, GeoTIFFs, KML, etc.)
- Knowledge of spatial databases like PostGIS or GeoServer
- Hands-on experience with data lakes, big data processing, and cloud platforms (Azure/AWS/GCP preferred)
- Strong understanding of data structures, algorithms, and spatial indexing

Nice to Have
- Familiarity with MapReduce, Hive, or Databricks
- Prior experience in geospatial analytics, remote sensing, or urban planning domains
- Knowledge of DevOps, CI/CD pipelines, and containerization tools (Docker, Kubernetes)

Soft Skills
- Excellent communication and collaboration abilities
- Problem-solving mindset and attention to detail
- Ability to work independently in a client-facing onsite role

Skills: PostGIS, Rasterio, Azure, GCP, Shapely, Fiona, big data platforms, PySpark, GIS, GeoPandas, GDAL, AWS, Python
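A small illustration of the spatial-indexing idea listed under required skills: bucketing points into grid cells so that a nearby-point lookup touches one cell instead of scanning the whole dataset. This is a toy sketch, not a replacement for PostGIS or an R-tree, and the coordinates and labels are made up:

```python
from collections import defaultdict

class GridIndex:
    """Toy grid-based spatial index: bucket points by cell, query by cell."""

    def __init__(self, cell_size=1.0):
        self.cell = cell_size
        self.buckets = defaultdict(list)

    def _key(self, x, y):
        # Map a coordinate to its integer grid cell.
        return (int(x // self.cell), int(y // self.cell))

    def insert(self, x, y, payload):
        self.buckets[self._key(x, y)].append((x, y, payload))

    def query_cell(self, x, y):
        # Return candidates in the same cell; a real index would also
        # scan neighboring cells and then filter by exact distance.
        return [p for (_, _, p) in self.buckets[self._key(x, y)]]

idx = GridIndex(cell_size=0.5)
idx.insert(12.97, 77.59, "bengaluru_pt")
idx.insert(12.98, 77.60, "nearby_pt")
idx.insert(28.61, 77.21, "delhi_pt")
hits = idx.query_cell(12.97, 77.59)  # only points in the same half-degree cell
```

Production systems express the same idea with GiST indexes in PostGIS or spatial partitioning of PySpark DataFrames, but the bucket-then-filter pattern is identical.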
Posted 1 week ago
7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description
Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of a Consultant Specialist.

In this role, you will:
- Be part of the IT Compute pod, in charge of the Pricing component on both the legacy and strategic platforms.
- Participate in development on Sophis, in our regression tests, and in the maintenance and enhancement of our infrastructure (both on-premise and GCP).

Requirements
To be successful in this role, you should meet the following requirements:
- B.E/B.Tech/M.E/M.S degree with 7-10 years of IT experience
- Excellent programming skills in Java
- Hands-on programming experience in Java, Python and SQL is a must
- Good knowledge of relational databases like Oracle
- PL/SQL programming skills
- Experience in C++ and C# would be a strong asset
- Excellent programming skills and complete hands-on expertise
- Sound knowledge of data structures, algorithms and design patterns
- Working knowledge of one or more of Apache Kafka, REST APIs, Docker and Kubernetes would be a great advantage
- Proficient in code versioning tools such as Git, SVN, etc.
- Good understanding of agile methodologies and DevOps tooling such as JIRA, Maven, Jenkins, Ansible, etc.
- Sound knowledge of functions, triggers, materialized views, DB management and schema design
- Technology or programming language should not be a barrier to getting things done.
- Open to working on multiple technologies as required
- Proficient in data structures & algorithms and their practical usage
- Experience mentoring juniors and designing and building systems from scratch
- End-to-end ownership of delivery of a product (from requirement to production)
- Excellent problem-solving and logical reasoning skills

You’ll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSBC Software Development India
Posted 1 week ago
0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
Job Description:

Responsibilities:
- Create high-level designs based on the requirements provided
- Design and implement changes to the existing software architecture
- Build highly sophisticated enhancements and optimize working code
- Build and execute unit tests and test plans
- Understand the programming tasks allocated and complete them on schedule
- Adhere to quality standards and follow development guidelines

Technical Skills:
- Strong knowledge of Node.js and web services
- Strong knowledge of SQL, NoSQL and optimization techniques
- Proficient in code versioning tools like Git
- Knowledge of OpenShift, PCF, Docker and Kubernetes will be a plus
Posted 1 week ago
5.0 years
0 Lacs
Nagpur, Maharashtra, India
Remote
React Full Stack Developer

📍 Location: Nagpur, Maharashtra, India (On-site)
🕒 Experience: Minimum 5 Years
📄 Employment Type: Full-Time

Position Summary
We are looking for a highly skilled React Full Stack Developer with at least 5 years of hands-on experience to join our on-site team in Nagpur. The ideal candidate will have a strong background in building scalable web applications using React.js, Node.js, and modern cloud infrastructure. You’ll collaborate with cross-functional teams to develop dynamic user interfaces and robust backend services, while driving the architecture and delivery of innovative digital products.

Key Responsibilities
- Develop responsive, reusable UI components using React.js
- Design and implement RESTful APIs using Node.js and Express.js
- Work with MongoDB or other NoSQL/SQL databases for data modeling and integration
- Collaborate closely with designers, product managers, and backend engineers
- Optimize application performance across multiple browsers and devices
- Handle code versioning and deployment via CI/CD pipelines (Jenkins, GitHub Actions, etc.)
- Manage application hosting on cloud platforms like AWS, GCP, or Azure
- Conduct unit testing, debugging, and maintenance of both frontend and backend code
- Mentor junior developers and contribute to technical discussions and architecture decisions
- Stay up to date with industry trends, tools, and best practices

Required Qualifications

Technical Skills
- Proficient in React.js, JavaScript (ES6+), HTML5, and CSS3
- Experience with Redux or similar state management libraries
- Strong backend development experience using Node.js and Express.js
- Experience with MongoDB (or equivalent SQL/NoSQL databases)
- Understanding of RESTful API principles and integration
- Familiarity with JWT and other authentication/authorization techniques
- Experience with Docker and container-based development
- Comfortable working with CI/CD tools such as Jenkins, GitHub Actions, GitLab CI
- Experience managing cloud infrastructure (AWS, GCP, or Azure)

Soft Skills
- Strong problem-solving and debugging abilities
- Effective communication and collaboration in cross-functional teams
- Self-driven with leadership capabilities and mentoring experience
- Agile/Scrum methodology experience is a plus

Experience
- Minimum 5 years of relevant full-stack development experience

Education
- Bachelor's degree in Computer Science, Software Engineering, or a related field

Preferred/Bonus Skills
- Knowledge of server-sent events, WebSockets, or real-time data streaming
- Familiarity with Infrastructure as Code (IaC) tools like Terraform or Ansible
- Experience working with distributed teams and remote collaboration tools

What We Offer
- Work on live, enterprise-grade projects
- Structured, growth-oriented development environment
- Strong opportunities for long-term career advancement
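As a rough illustration of the JWT-based authentication mentioned in the technical skills, the sketch below builds and verifies an HS256-style token using only the standard library. In practice a maintained JWT library should be used, and the secret and claims here are placeholders:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    # Token shape: base64url(header).base64url(payload).base64url(signature)
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_jwt(token: str, secret: bytes) -> bool:
    header, body, sig = token.split(".")
    expected = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    # Constant-time comparison prevents timing attacks on the signature.
    return hmac.compare_digest(b64url(expected), sig)

token = sign_jwt({"sub": "user_42", "role": "dev"}, b"demo-secret")
ok = verify_jwt(token, b"demo-secret")
bad = verify_jwt(token, b"wrong-secret")
```

The Express.js backend named in the posting would do the equivalent with a JWT middleware, issuing the token at login and verifying it on each API request.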
Posted 1 week ago
5.0 years
0 Lacs
Thiruvananthapuram Taluk, India
On-site
Job Description
Notice Period: Immediate
Location: Technopark, Trivandrum
Experience: 5+ years (mandatory)

We are seeking a highly skilled and experienced Senior AI Engineer with a minimum of 5 years of hands-on experience in designing, developing, and deploying robust Artificial Intelligence and Machine Learning solutions. The ideal candidate will be a strong problem-solver, adept at translating complex business challenges into scalable AI models, and capable of working across the entire AI/ML lifecycle, from data acquisition to model deployment and monitoring. This role requires a deep understanding of various AI techniques, strong programming skills, and a passion for staying updated with the latest advancements in the field.

Key Responsibilities:

● AI/ML Solution Design & Development:
○ Lead the design and development of innovative and scalable AI/ML models and algorithms to address specific business needs and optimize processes.
○ Apply various machine learning techniques including supervised, unsupervised, and reinforcement learning, deep learning, natural language processing (NLP), and computer vision.
○ Collaborate with data scientists, product managers, and other stakeholders to understand business requirements and translate them into technical specifications.

● Data Management & Preprocessing:
○ Oversee the collection, preprocessing, cleaning, and transformation of large and complex datasets to prepare them for AI model training.
○ Implement efficient data pipelines and ensure data quality and integrity.
○ Perform exploratory data analysis to uncover insights and inform model development.

● Model Training, Evaluation & Optimization:
○ Train, fine-tune, and evaluate AI/ML models for optimal accuracy, performance, and generalization.
○ Select the most suitable models and algorithms for specific tasks and optimize hyperparameters.
○ Conduct rigorous testing and debugging of AI systems to ensure reliability and desired outcomes.
● Deployment & MLOps:
○ Lead the productionization of AI/ML models, ensuring seamless integration with existing systems and applications.
○ Implement MLOps best practices for model versioning, deployment, monitoring, and retraining.
○ Develop and maintain APIs for AI model integration.

● Research & Innovation:
○ Continuously research and evaluate the latest advancements in AI/ML research, tools, and technologies.
○ Propose and implement innovative solutions to complex problems.
○ Contribute to the strategic direction of AI initiatives within the company.

● Collaboration & Mentorship:
○ Work collaboratively with cross-functional teams (e.g., software development, data science, product teams).
○ Clearly articulate complex AI concepts to both technical and non-technical audiences.
○ Mentor junior AI engineers and contribute to a culture of continuous learning.

Required Skills and Qualifications:
● Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, Data Science, or a related quantitative field.
● Minimum of 5 years of professional experience as an AI Engineer, Machine Learning Engineer, or a similar role.
● Expert-level proficiency in at least one major programming language used for AI development, such as Python (preferred), Java, or C++.
● Extensive experience with popular AI/ML frameworks and libraries, such as TensorFlow, PyTorch, Keras, Scikit-learn, and Hugging Face Transformers.
● Strong understanding of core machine learning concepts, algorithms, and statistical modeling (e.g., regression, classification, clustering, dimensionality reduction).
● Solid knowledge of deep learning architectures (e.g., CNNs, RNNs, Transformers) and their applications.
● Experience with data manipulation and analysis libraries (e.g., Pandas, NumPy).
● Familiarity with database systems (SQL, NoSQL) and big data technologies (e.g., Apache Spark, Hadoop) for managing large datasets.
● Experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and their AI/ML services for scalable deployment.
● Understanding of software development best practices, including version control (Git), testing, and code review.
● Excellent problem-solving skills, analytical thinking, and a data-driven approach.
● Strong communication and interpersonal skills, with the ability to explain technical concepts clearly.
● Ability to work independently and as part of a collaborative team in a fast-paced environment.
● Immediate joiners preferred.
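The model-evaluation work described in this role typically reports metrics such as precision, recall, and F1 rather than accuracy alone. A minimal hand-rolled version is below; in practice Scikit-learn's metrics module gives the same results, and the labels here are toy data:

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    # Count true positives, false positives, and false negatives.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 0]
p, r, f1 = precision_recall_f1(y_true, y_pred)
# Here the model never raises a false alarm (precision 1.0)
# but misses half the true positives (recall 0.5).
```

The trade-off between the two numbers is exactly what hyperparameter tuning and threshold selection adjust during the optimization work the posting describes.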
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Introducing Thinkproject Platform

Pioneering a new era and offering a cohesive alternative to the fragmented landscape of construction software, Thinkproject seamlessly integrates the most extensive portfolio of mature solutions with an innovative platform, providing unparalleled features, integrations, user experiences, and synergies. By combining information management expertise and in-depth knowledge of the building, infrastructure, and energy industries, Thinkproject empowers customers to efficiently deliver, operate, regenerate, and dispose of their built assets across their entire lifecycle through a Connected Data Ecosystem.

We are seeking a hands-on Applied Machine Learning Engineer to join our team and lead the development of ML-driven insights from historical data in our contracts management, asset management and common data platform. This individual will work closely with our data engineering and product teams to design, develop, and deploy scalable machine learning models that can parse, learn from, and generate value from both structured and unstructured contract data. You will use BigQuery and its ML capabilities (including SQL and Python integrations) to prototype and productionize models across a variety of NLP and predictive analytics use cases. Your work will be critical in enhancing our platform’s intelligence layer, including search, classification, recommendations, and risk detection.

What your day will look like

Key Responsibilities
- Model Development: Design and implement machine learning models using structured and unstructured historical contract data to support intelligent document search, clause classification, metadata extraction, and contract risk scoring.
- BigQuery ML Integration: Build, train, and deploy ML models directly within BigQuery using SQL and/or Python, leveraging native GCP tools (e.g., Vertex AI, Dataflow, Pub/Sub).
- Data Preprocessing & Feature Engineering: Clean, enrich, and transform raw data (e.g., legal clauses, metadata, audit trails) into model-ready features using scalable and efficient pipelines.
- Model Evaluation & Experimentation: Conduct experiments, model validation, and A/B testing, and iterate based on precision, recall, F1-score, RMSE, etc.
- Deployment & Monitoring: Operationalize models in production environments with monitoring, retraining pipelines, and CI/CD best practices for ML (MLOps).
- Collaboration: Work cross-functionally with data engineers, product managers, legal domain experts, and frontend teams to align ML solutions with product needs.

What you need to fulfill the role

Skills and Experience
- Education: Bachelor’s or Master’s degree in Computer Science, Machine Learning, Data Science, or a related field.
- ML Expertise: Strong applied knowledge of supervised and unsupervised learning, classification, regression, clustering, feature engineering, and model evaluation.
- NLP Experience: Hands-on experience working with textual data, especially in NLP use cases like entity extraction, classification, and summarization.
- GCP & BigQuery: Proficiency with Google Cloud Platform, especially BigQuery and BigQuery ML; comfort querying large-scale datasets and integrating with external ML tooling.
- Programming: Proficient in Python and SQL; familiarity with libraries such as Scikit-learn, TensorFlow, PyTorch, and Keras.
- MLOps Knowledge: Experience with model deployment, monitoring, versioning, and ML CI/CD best practices.
- Data Engineering Alignment: Comfortable working with data pipelines and tools like Apache Beam, Dataflow, Cloud Composer, and pub/sub systems.
- Version Control: Strong Git skills and experience collaborating in Agile teams.

Preferred Qualifications
- Experience working with contractual or legal text datasets.
- Familiarity with document management systems, annotation tools, or enterprise collaboration platforms.
- Exposure to Vertex AI, LangChain, RAG-based retrieval, or embedding models for GenAI use cases.
- Comfortable working in a fast-paced, iterative environment with changing priorities.

What we offer
Lunch 'n' Learn Sessions | Women's Network | LGBTQIA+ Network | Coffee Chat Roulette | Free English Lessons | Thinkproject Academy | Social Events | Volunteering Activities | Open Forum with Leadership Team (Tp Café) | Hybrid working | Unlimited learning

We are a passionate bunch here. To join Thinkproject is to shape what our company becomes. We take feedback from our staff very seriously and give them the tools they need to help us create our fantastic culture of mutual respect. We believe that investing in our staff is crucial to the success of our business.

Your contact: Mehal Mehta
Please submit your application, including salary expectations and potential date of entry, by submitting the form on the next page.

Working at thinkproject.com - think career. think ahead.
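The clause-classification use case named in the responsibilities can be pictured with a deliberately naive keyword baseline before any BigQuery ML or embedding model is involved. The clause taxonomy and keywords below are invented purely for illustration; the real system would train a model on labeled clauses:

```python
# Hypothetical clause taxonomy with hand-picked trigger phrases.
CLAUSE_KEYWORDS = {
    "termination": {"terminate", "termination", "notice period"},
    "liability": {"liable", "liability", "indemnify"},
    "confidentiality": {"confidential", "non-disclosure"},
}

def classify_clause(text: str) -> str:
    # Score each label by how many of its keywords appear in the clause.
    lowered = text.lower()
    scores = {
        label: sum(kw in lowered for kw in kws)
        for label, kws in CLAUSE_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "other"

label = classify_clause(
    "Either party may terminate this agreement with 30 days notice."
)
```

A learned model replaces the keyword sets with weights over features (or embeddings), but the interface — clause text in, label out — and the evaluation against labeled data stay the same.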
Posted 1 week ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Associate Project Manager – AI/ML

Experience: 8+ years (including 3+ years in project management)
Notice Period: Immediate to 15 days
Location: Coimbatore / Chennai

🔍 Job Summary
We are seeking experienced Associate Project Managers with a strong foundation in AI/ML project delivery. The ideal candidate will have a proven track record of managing cross-functional teams, delivering complex software projects, and driving AI/ML initiatives from conception to deployment. This role requires a blend of project management expertise and technical understanding of machine learning systems, data pipelines, and model lifecycle management.

✅ Required Experience & Skills

📌 Project Management
- Minimum 3+ years of project management experience, including planning, tracking, and delivering software projects.
- Strong experience in Agile, Scrum, and SDLC/Waterfall methodologies.
- Proven ability to manage multiple projects and stakeholders across business and technical teams.
- Experience in budgeting, vendor negotiation, and resource planning.
- Proficiency in tools like MS Project, Excel, PowerPoint, ServiceNow, SmartSheet, and Lucidchart.

🤖 AI/ML Technical Exposure (Must-Have)
- Exposure to the AI/ML project lifecycle: data collection, model development, training, validation, deployment, and monitoring.
- Understanding of ML frameworks (e.g., TensorFlow, PyTorch, Scikit-learn) and data platforms (e.g., Azure ML, AWS SageMaker, Databricks).
- Familiarity with MLOps practices, model versioning, and CI/CD pipelines for ML.
- Experience working with data scientists, ML engineers, and DevOps teams to deliver AI/ML solutions.
- Ability to translate business problems into AI/ML use cases and manage delivery timelines.

🧩 Leadership & Communication
- Strong leadership, decision-making, and organizational skills.
- Excellent communication and stakeholder management abilities.
- Ability to influence and gain buy-in from executive sponsors and cross-functional teams.
- Experience in building and maintaining relationships with business leaders and technical teams.

🎯 Roles & Responsibilities
- Lead AI/ML and software development projects from initiation through delivery.
- Collaborate with data science and engineering teams to define project scope, milestones, and deliverables.
- Develop and maintain detailed project plans aligned with business goals and technical feasibility.
- Monitor progress, manage risks, and ensure timely delivery of AI/ML models and software components.
- Coordinate cross-functional teams and ensure alignment between business, data, and engineering stakeholders.
- Track project metrics, ROI, and model performance post-deployment.
- Ensure compliance with data governance, security, and ethical AI standards.
- Drive continuous improvement in project execution and delivery frameworks.
- Stay updated on AI/ML trends and contribute to strategic planning for future initiatives.
Posted 1 week ago
5.0 years
0 Lacs
Kota, Rajasthan, India
Remote
Overview
Jhpiego is a nonprofit global health leader and Johns Hopkins University affiliate that is saving lives, improving health and transforming futures. We partner with governments, health experts and local communities to build the skills and systems that guarantee a healthier future for women and families. Jhpiego translates the best science and practice into moments of care that can mean the difference between life and death for women and families: the moment a woman gives birth; the moment a midwife helps a newborn to breathe. Through our partnerships, we are revolutionizing health care for the world’s most disadvantaged and vulnerable people.

In India, Jhpiego works across various states in close collaboration with national and state governments, providing technical assistance in the areas of family planning, maternal and child health, strengthening human resources for health, and non-communicable diseases. These programs are funded by Takeda Pharmaceutical Company Limited, UNICEF, World Health Organization, University of Manitoba, Bill & Melinda Gates Foundation, Children’s Investment Fund Foundation (CIFF), MSD for Mothers and others.

Under the Project Born Healthy, Jhpiego is hiring a Technical Officer. The officer will be responsible for the development, optimization and maintenance of healthcare applications in the state of Rajasthan in collaboration with the National Health Mission. The primary responsibilities will be to review, maintain and develop existing applications and their integration with back-end services; review and understand business requirements, working with cross-functional teams; develop and enhance user-facing features in accordance with design and consistent with business objectives; and design, build, and maintain high-performance, reusable, and reliable code.
Ensure the best possible performance, quality, and responsiveness of the application; identify and correct bottlenecks and fix bugs; and help maintain code quality, organization, and automation.

Responsibilities
- Monitor the app's technical life cycle during each phase of development.
- Design, develop and maintain high-quality, reliable code.
- Maintain and update the design specifications and source code for new applications.
- Collaborate with the technical team to improve application performance features.
- Test the applications, identify bugs and take measures to resolve them.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Evaluate the existing applications and implement new technologies to maximize the app's efficiency.
- Unit-test code for robustness, including edge cases, usability, and general reliability.
- Work on bug fixing and improving application performance.
- Continuously discover, evaluate, and implement new technologies to maximize development efficiency.
- Support the DIU team in the deployment, development and maintenance of the eRCH and Prasav Watch applications in other programmatic states.
- Work with NIC/DOIT or state digital teams to support the digital work in the respective states.
- Any other responsibility as assigned by the supervisor.

Required Qualifications

Key skills:
- Strong design and development experience using C#, .NET Core, Web API, Angular and ASP.NET MVC
- Experience with ASP.NET Web API / Core, ADO.NET, Entity Framework / Core
- Experience with multithreading (thread-safe collections, synchronization)
- Knowledge of AWS and basic deployment and production tools (CI/CD)
- Good understanding of Agile concepts
- Understanding of the commercial software development process and program life-cycle stages
- Proficient understanding of code versioning tools, such as Git
- Familiarity with continuous integration
- Working knowledge of the general mobile landscape, architectures, trends, and emerging technologies
- Fully responsible for backend development
- Implement new technologies to maximize application performance
- Experience in recent UI/UX/server/service technologies (e.g., Angular and SOAP)

Qualifications / Experience / Knowledge
- Master's degree in Information Technology / Computer Applications (MCA) (essential), computer science or equivalent, with 5+ years of experience.
- Proficiency in GitHub, information technology and application structure, problem solving, Android development, and technology infrastructure.
- Experience working with remote data via REST and JSON
- Experience with third-party libraries and APIs

Jhpiego is an equal opportunity employer and offers a highly dynamic and enabling work environment. Jhpiego offers competitive salaries and a comprehensive employee benefits package. Women candidates are encouraged to apply. Due to the high volume of applications, only shortlisted applicants will receive a response from Jhpiego HR.

RECRUITMENT SCAMS & FRAUD WARNING
Jhpiego has become aware of scams involving false job offers. Please be advised:
- Recruiters will never ask for a fee during any stage of the recruitment process.
- All active jobs are advertised directly on our careers page.
- Official Jhpiego emails will always arrive from a @Jhpiego.org email address.
Please report any suspicious communications to Info@jhpiego.org.
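On the client side, the "remote data via REST and JSON" requirement comes down to fetching a JSON payload and turning it into program data. A tiny standard-library sketch is below; the payload shape and field names are hypothetical, not from any real health-data API:

```python
import json

# Hypothetical body of a REST response, as it would arrive over HTTP.
raw = (
    '{"facility": "PHC Kota", '
    '"deliveries": [{"id": 1, "outcome": "live"}, {"id": 2, "outcome": "live"}]}'
)

# Parse the JSON text into native Python structures and aggregate.
record = json.loads(raw)
live_births = sum(1 for d in record["deliveries"] if d["outcome"] == "live")
```

In the actual applications the same parse-then-aggregate step would follow an authenticated HTTP call (e.g., via an HTTP client library) against the back-end services the posting describes.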
Posted 1 week ago