
5906 Retrieval Jobs - Page 17

Set up a job alert
JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

SQL DEVELOPER

Responsibilities:
• Design and implement relational database structures optimized for performance and scalability.
• Develop and maintain complex SQL queries, stored procedures, triggers, and functions.
• Optimize database performance through indexing, query tuning, and regular maintenance.
• Ensure data integrity, consistency, and security across multiple environments.
• Collaborate with cross-functional teams to integrate SQL databases with applications and reporting tools.
• Develop and manage ETL (Extract, Transform, Load) processes for data ingestion and transformation.
• Document database architecture, processes, and procedures for future reference.
• Stay updated with the latest SQL best practices and database technologies.

Core Skills:
• Data Retrieval: SQL Developers must be able to query large and complex databases to extract relevant data for analysis or reporting.
• Data Transformation: They often clean, join, and reshape data using SQL to prepare it for downstream processes like analytics or machine learning.
• Performance Optimization: Writing queries that run efficiently is key, especially when dealing with big data or real-time systems.
• Understanding of Database Schemas: Knowing how tables relate and how to navigate normalized or denormalized structures is essential.

Minimum Experience: 5 years
Mandatory Skills: SQL, ETL, data retrieval, data transformation, performance optimization, machine learning, schemas, data integration, PyTest, custom Python/SQL scripts, Python or another scripting language for test automation, cloud platforms, data warehousing solutions.

Share your resume at aarushi.shukla@coforge.com if you are a core SQL Developer and an immediate joiner.

Posted 6 days ago

Apply

7.0 - 10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Summary
We are seeking a highly skilled Sr. Developer with 7 to 10 years of experience to join our dynamic team. The ideal candidate will have expertise in Python, Databricks SQL, Databricks Workflows, and PySpark. This role operates in a hybrid work model with day shifts, offering the opportunity to work on innovative projects that drive our company's success.

Responsibilities
• Develop and maintain scalable data processing systems using Python and PySpark to enhance data analytics capabilities.
• Collaborate with cross-functional teams to design and implement Databricks Workflows that streamline data operations.
• Optimize Databricks SQL queries to improve data retrieval performance and ensure efficient data management.
• Provide technical expertise in Python programming to support the development of robust data solutions.
• Oversee the integration of data sources into Databricks environments to facilitate seamless data processing.
• Ensure data quality and integrity by implementing best practices in data validation and error handling.
• Troubleshoot and resolve complex technical issues related to Databricks and PySpark environments.
• Contribute to the continuous improvement of data processing frameworks and methodologies.
• Mentor junior developers and provide guidance on best practices in data engineering.
• Collaborate with stakeholders to gather requirements and translate them into technical specifications.
• Conduct code reviews to ensure adherence to coding standards and best practices.
• Stay updated with the latest industry trends and technologies to drive innovation in data engineering.
• Document technical processes and workflows to support knowledge sharing and team collaboration.

Qualifications
• Strong proficiency in Python programming and its application in data engineering.
• Expertise in Databricks SQL and its use in optimizing data queries.
• Hands-on experience with Databricks Workflows for efficient data processing.
• Proficiency in PySpark for developing scalable data solutions.
• Excellent problem-solving skills and the ability to troubleshoot complex technical issues.
• Solid understanding of data integration techniques and best practices.
• Strong communication skills to collaborate effectively with cross-functional teams.

Certifications Required
• Databricks Certified Data Engineer Associate
• Python Institute PCEP Certification

Posted 6 days ago

Apply

3.0 years

0 Lacs

India

Remote

Role: Full Stack Software Engineer with AI
Location: India (Remote)

We are seeking an innovative Full Stack Engineer to build AI-inclusive applications. Join our team, dedicated to developing cutting-edge applications leveraging Retrieval-Augmented Generation (RAG) and other AI technologies. You will build end-to-end AI-driven applications.

Title: Full Stack Engineer (AI/RAG Applications)

Key Responsibilities
• Design, build, and maintain scalable, AI-driven full-stack applications integrating Retrieval-Augmented Generation (RAG)-based AI solutions, sophisticated AI models, and advanced data retrieval mechanisms.
• Develop responsive, intuitive user interfaces leveraging modern JavaScript frameworks (React, Angular, Vue) that seamlessly interact with AI backend services.
• Build robust backend APIs and microservices that interface with AI models, vector databases, and retrieval engines, supporting RAG model integrations and real-time data retrieval.
• Integrate Large Language Models (LLMs), embeddings, vector databases, and search algorithms into applications.
• Collaborate closely with AI/ML specialists, data scientists, product owners, and UX designers to define requirements, optimize AI integration, and translate complex AI capabilities into user-friendly interfaces.
• Create and manage RESTful and GraphQL APIs to facilitate efficient, secure data exchange between frontend components, backend services, and AI engines.
• Ensure robust security measures, scalability, and performance of AI-integrated applications.
• Participate actively in code reviews, architecture decisions, technical design discussions, and Agile ceremonies, ensuring best practices in software engineering and AI integration.
• Troubleshoot, debug, and enhance performance of both frontend and backend systems, focusing on AI latency, accuracy, and scalability.
• Continuously explore emerging AI trends and technologies, proactively recommending improvements to enhance product capabilities.

Qualifications
• Bachelor's degree in Computer Science, Engineering, or a related technical discipline.
• 3+ years of experience in full-stack software engineering, with demonstrated experience integrating AI/ML services.
• Strong proficiency in frontend technologies including HTML, CSS, JavaScript, and frameworks such as React, Angular, or Vue.js.
• Backend development expertise with Node.js, Python, Java, or .NET, particularly experience building RESTful APIs and microservices.
• Hands-on experience integrating AI and NLP models, including familiarity with Retrieval-Augmented Generation (RAG), OpenAI APIs, LangChain, or similar frameworks, and NLP-based Large Language Models (e.g., GPT, BERT, LLaMA).
• Familiarity with RAG architectures, vector databases (e.g., Pinecone, Chroma, Weaviate, Milvus), and embedding techniques.
• Proficiency with relational and NoSQL databases (PostgreSQL, MongoDB, etc.) and data modeling best practices.
• Experience with RESTful APIs, GraphQL, microservices, and cloud-native architecture (AWS, Azure, GCP).
• Experience with version control systems (Git), CI/CD pipelines, and containerization (Docker).

Thanks and regards,
Saurabh Kumar | Lead Recruiter
saurabh.yadav@ampstek.com | www.ampstek.com
https://www.linkedin.com/in/saurabh-kumar-yadav-518927a8/
Call: +1 609-360-2671

Posted 6 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

At SiteMinder we believe the individual contributions of our employees are what drive our success. That's why we hire and encourage diverse teams that include and respect a variety of voices, identities, backgrounds, experiences and perspectives. Our diverse and inclusive culture enables our employees to bring their unique selves to work and be proud of doing so. It's in our differences that we will keep revolutionising the way for our customers. We are better together!

What We Do…
We're people who love technology but know that hoteliers just want things to be simple. So since 2006 we've been constantly innovating our world-leading hotel commerce platform to help accommodation owners find and book more guests online - quickly and simply. We've helped everyone from boutique hotels to big chains, enabling travellers to book igloos, cabins, castles, holiday parks, campsites, pubs, resorts, Airbnbs, and everything in between. And today, we're the world's leading open hotel commerce platform, supporting 47,000 hotels in 150 countries - with over 125 million reservations processed by SiteMinder's technology every year.

About The Engineering Director Role…
As the Engineering Director, Data Engineering, you will be responsible for establishing and leading SiteMinder's data engineering function in Pune. You will play a critical role in setting up a world-class development team, driving best practices in data strategy, architecture and governance. Your work will ensure the scalability, availability and security of our data platform. You will build and grow new data engineering teams in Pune, hiring top talent and fostering a culture of innovation, collaboration and technical excellence. You will also work closely with global engineering, product and analytics teams to enable data-driven decision-making at scale. This role reports to the VP, Data and requires strong engagement with the Chief Technology Officer and Chief Data Officer.

What You'll Do…
• Set up and scale the engineering team in Pune: Lead the recruitment, onboarding and development of a high-performing engineering team, with an initial focus on data, broadening over time to other engineering capabilities. Establish SiteMinder's engineering presence in Pune, creating a strong technical foundation for future growth. Foster an inclusive, high-trust engineering culture that encourages learning, growth and innovation. Line manage, mentor and support your team to drive performance.
• Promote SiteMinder's story in Pune: Be an advocate and spokesperson at local events, community groups and publications to tell the SiteMinder story and attract the right talent.
• Execute SiteMinder's data strategy: Work with stakeholders to execute the vision, strategy and roadmap for SiteMinder's data platform. Lead the team implementing scalable data architectures to support business needs. Deliver high-impact data initiatives. Establish best practices for data governance, security, privacy and compliance.
• Enhance data infrastructure and operations: Collaborate with architects to build and maintain a modern data platform with scalable pipelines and real-time analytics capabilities. Lead initiatives to improve data quality, reliability and observability. Optimise data storage, processing and retrieval strategies for efficiency and cost-effectiveness. Drive the adoption and optimisation of Databricks for scalable data processing and machine learning workloads.
• Collaborate across global teams: Work closely with global engineering, product and analytics teams to ensure data solutions align with business objectives. Collaborate with the Chief Technology Officer, Chief Data Officer, Principal Data Engineer(s), Chief Engineer, Software Engineers, Engineering Managers and other key engineering roles. Partner with leadership to define KPIs, data policies and governance frameworks. Advocate for a data-driven culture across SiteMinder.

What You Have…
• Extensive experience in data engineering, including broad experience in leadership roles.
• Proven track record in building and scaling development teams in India, preferably in a global organisation.
• Strong experience in hiring, mentoring and leading high-performing teams.
• Expertise in cloud-based data platforms (AWS) and modern data architectures.
• Strong hands-on experience with big data technologies (Spark, Kafka, Snowflake, Databricks, etc.).
• Experience designing and optimising large-scale data processing solutions using Databricks.
• Deep knowledge of data governance, security and compliance best practices.
• Experience leading the implementation of data pipelines, ETL frameworks and real-time streaming solutions.
• Strong stakeholder management and the ability to align technical solutions with business objectives.
• Passion for driving innovation in data engineering and empowering teams to excel.

Our Perks & Benefits…
• Mental health and well-being initiatives
• Generous parental (including secondary) leave policy
• Flexibility to work in a hybrid model (2-3 days in-office)
• Paid birthday, study and volunteering leave every year
• Sponsored social clubs, team events, and celebrations
• Employee Resource Groups (ERG) to help you connect and get involved
• Investment in your personal growth, offering training for your advancement

Does this job sound like you? If yes, we'd love for you to be part of our team! Please send a copy of your resume and our Talent Acquisition team will be in touch. When you apply, please tell us the pronouns you use and any adjustments you may need during the interview process. We encourage people from underrepresented groups to apply.

Posted 6 days ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Particle41 is seeking a talented and versatile Data Engineer to join our innovative team. As a Data Engineer, you will play a key role in designing, building, and maintaining robust data pipelines and infrastructure to support our clients' data needs. You will work on end-to-end data solutions, collaborating with cross-functional teams to ensure high-quality, scalable, and efficient data delivery. This is an exciting opportunity to contribute to impactful projects, solve complex data challenges, and grow your skills in a supportive and dynamic environment.

In This Role, You Will:

Software Development
• Design, develop, and maintain scalable ETL (Extract, Transform, Load) pipelines to process large volumes of data from diverse sources.
• Build and optimize data storage solutions, such as data lakes and data warehouses, to ensure efficient data retrieval and processing.
• Integrate structured and unstructured data from various internal and external systems to create a unified view for analysis.
• Ensure data accuracy, consistency, and completeness through rigorous validation, cleansing, and transformation processes.
• Maintain comprehensive documentation for data processes, tools, and systems while promoting best practices for efficient workflows.

Requirements Gathering and Analysis
• Collaborate with product managers and other stakeholders to gather requirements and translate them into technical solutions.
• Participate in requirement analysis sessions to understand business needs and user requirements.
• Provide technical insights and recommendations during the requirements-gathering process.

Agile Development
• Participate in Agile development processes, including sprint planning, daily stand-ups, and sprint reviews.
• Work closely with Agile teams to deliver software solutions on time and within scope.
• Adapt to changing priorities and requirements in a fast-paced Agile environment.

Testing and Debugging
• Conduct thorough testing and debugging to ensure the reliability, security, and performance of applications.
• Write unit tests to validate the functionality of developed features and individual elements.
• Write integration tests to ensure the different elements of an application function together as intended and meet desired requirements.
• Identify and resolve software defects, code smells, and performance bottlenecks.

Continuous Learning and Innovation
• Stay updated with the latest technologies and trends in full-stack development.
• Propose innovative solutions to improve the performance, security, scalability, and maintainability of applications.
• Continuously seek opportunities to optimize and refactor the existing codebase for better efficiency.
• Stay up to date with cloud platforms such as AWS, Azure, or Google Cloud Platform.

Collaboration
• Collaborate effectively with cross-functional teams, including testers and product managers.
• Foster a collaborative and inclusive work environment where ideas are shared and valued.

Skills and Experience We Value:
• Bachelor's degree in Computer Science, Engineering, or a related field.
• Proven experience as a Data Engineer, with a minimum of 3 years of experience.
• Proficiency in the Python programming language.
• Experience with database technologies such as SQL (e.g., MySQL, PostgreSQL) and NoSQL (e.g., MongoDB) databases.
• Strong understanding of programming libraries/frameworks and technologies such as Flask, API frameworks, data warehousing/lakehouse principles, databases and ORMs, and data analysis tools including Databricks, pandas, Spark, PySpark, machine learning, OpenCV, and scikit-learn.
• Utilities and tools: logging, requests, subprocess, regex, pytest, the ELK stack, Redis, and distributed task queues.
• Strong understanding of data warehousing/lakehouse principles and concurrent/parallel processing concepts.
• Familiarity with at least one cloud data engineering stack (Azure, AWS, or GCP) and the ability to quickly learn and adapt to new ETL/ELT tools across various cloud providers.
• Familiarity with version control systems like Git and collaborative development workflows.
• Competence in working on Linux OS and creating shell scripts.
• Solid understanding of software engineering principles, design patterns, and best practices.
• Excellent problem-solving and analytical skills, with keen attention to detail.
• Effective communication skills, both written and verbal, and the ability to collaborate in a team environment.
• Adaptability and willingness to learn new technologies and tools as needed.

About Particle41
Our core values of Empowering, Leadership, Innovation, Teamwork, and Excellence drive everything we do to achieve the ultimate outcomes for our clients: Empowering Leadership for Innovation in Teamwork with Excellence (ELITE).
E - Empowering: Enabling individuals to reach their full potential
L - Leadership: Taking initiative and guiding each other toward success
I - Innovation: Embracing creativity and new ideas to stay ahead
T - Teamwork: Collaborating with empathy to achieve common goals
E - Excellence: Striving for the highest quality in everything we do
We seek team members who embody these values and are committed to contributing to our mission. Particle41 welcomes individuals from all backgrounds who are committed to our mission and values. We provide equal employment opportunities to all employees and applicants, ensuring that hiring and employment decisions are based on merit and qualifications without discrimination based on race, color, religion, caste, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, local, or international laws. This policy applies to all aspects of employment and hiring.
We appreciate your interest and encourage applicants from these regions to apply. If you need any assistance during the application or interview process, please feel free to reach out to us at careers@Particle41.com.

Posted 6 days ago

Apply

5.0 years

0 Lacs

Greater Kolkata Area

On-site

Intelligent Image Management Inc. (IIMI) is an IT services company that reimagines and digitizes data through document automation using modern, cloud-native app development. IIMI is one of the world's leading multinational IT services companies, with offices in the USA, Singapore, India, Sri Lanka, Bangladesh, Nepal, and Kenya. Over 7,000 people are employed by IIMI worldwide, whose mission is to advance data process automation. US and European Fortune 500 companies are among our clients. Become part of a team that puts its people first. Founded in 1996, Intelligent Image Management Inc. has always believed in its people. We strive to foster an environment where all feel welcome, supported, and empowered to be innovative and reach their full potential. Website: https://www.iimdirect.com/

About the Role:
We are looking for a highly experienced and driven Senior Data Scientist to join our advanced AI and Data Science team. You will play a key role in building and deploying machine learning models, especially in the areas of computer vision, document image processing, and large language models (LLMs). This role requires a combination of hands-on technical skills and the ability to design scalable ML solutions that solve real-world business problems.

Key Responsibilities:
• Design and develop end-to-end machine learning pipelines, from data preprocessing and feature engineering to model training, evaluation, and deployment.
• Lead complex ML projects using deep learning, computer vision, and document analysis methods (e.g., object detection, image classification, segmentation, layout analysis).
• Build solutions for document image processing using tools like Google Cloud Vision, AWS Textract, and OCR libraries.
• Apply Large Language Models (LLMs), both open-source (e.g., LLaMA, Mistral, Falcon, GPT-NeoX) and closed-source (e.g., OpenAI GPT, Claude, Gemini), to automate text understanding, extraction, summarization, classification, and question-answering tasks.
• Integrate LLMs into applications for intelligent document processing, NER, semantic search, embeddings, and chat-based interfaces.
• Use Python (along with libraries such as OpenCV, PyTorch, TensorFlow, and Hugging Face Transformers) to build scalable, multi-threaded data processing pipelines.
• Implement and maintain MLOps practices using tools such as MLflow, AWS SageMaker, GCP AI Platform, and containerized deployments.
• Collaborate with engineering and product teams to embed ML models into scalable production systems.
• Stay up to date with emerging research and best practices in machine learning, LLMs, and document AI.

Required Qualifications:
• Bachelor's or master's degree in Computer Science, Mathematics, Statistics, Engineering, or a related field.
• Minimum 5 years of experience in machine learning, data science, or AI engineering roles.
• Strong background in deep learning, computer vision, and document image processing.
• Practical experience with LLMs (open and closed source), including fine-tuning, prompt engineering, and inference optimization.
• Solid grasp of MLOps, model versioning, and model lifecycle management.
• Expertise in Python, with strong knowledge of ML and CV libraries.
• Experience with Java and multi-threading is a plus.
• Familiarity with NLP tasks including Named Entity Recognition, classification, embeddings, and text summarization.
• Experience with cloud platforms (AWS/GCP) and their ML toolkits.

Preferred Skills:
• Experience with retrieval-augmented generation (RAG), vector databases, and LLM evaluation tools.
• Exposure to CI/CD for ML workflows and best practices in production ML.
• Ability to mentor junior team members and lead cross-functional AI projects.

Work Location: Work from Office
Send cover letter, complete resume, and references to: tech.jobs@iimdirect.com
Industry: Outsourcing/Offshoring
Employment Type: Full-time

Posted 6 days ago

Apply

15.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

Drive the Future of Data-Driven Entertainment

Are you passionate about working with big data? Do you want to shape the direction of products that impact millions of users daily? If so, we want to connect with you. We're seeking a leader for our Data Engineering team who will collaborate with Product Managers, Data Scientists, Software Engineers, and ML Engineers to support our AI infrastructure roadmap. In this role, you'll design and implement the data architecture that guides decision-making and drives insights, directly impacting our platform's growth and enriching user experiences. As a part of SonyLIV, you'll work with some of the brightest minds in the industry, access one of the most comprehensive data sets in the world, and leverage cutting-edge technology. Your contributions will have a tangible effect on the products we deliver and the viewers we engage. The ideal candidate will bring a strong foundation in data infrastructure and data architecture, a proven record of leading and scaling data teams, operational excellence to enhance efficiency and speed, and a visionary approach to how Data Engineering can drive company success. If you're ready to make a significant impact in the world of OTT and entertainment, let's talk.

AVP, Data Engineering – SonyLIV
Location: Bangalore

Responsibilities:
• Define the Technical Vision for Scalable Data Infrastructure: Establish a robust technical strategy for SonyLIV's data and analytics platform, architecting a scalable, high-performance data ecosystem using modern technologies like Spark, Kafka, Snowflake, and cloud services (AWS/GCP).
• Lead Innovation in Data Processing and Architecture: Advance SonyLIV's data engineering practices by implementing real-time data processing, optimized ETL pipelines, and streaming analytics through tools like Apache Airflow, Spark, and Kubernetes. Enable high-speed data processing to support real-time insights for content and user engagement.
• Ensure Operational Excellence in Data Systems: Set and enforce standards for data reliability, privacy, and performance. Define SLAs for production data processes, using monitoring tools (Grafana, Prometheus) to maintain system health and quickly resolve issues.
• Build and Mentor a High-Caliber Data Engineering Team: Recruit and lead a skilled team with strengths in distributed computing, cloud infrastructure, and data security. Foster a collaborative and innovative culture, focused on technical excellence and efficiency.
• Collaborate with Cross-Functional Teams: Partner closely with Data Scientists, Software Engineers, and Product Managers to deliver scalable data solutions for personalization algorithms, recommendation engines, and content analytics.
• Architect and Manage Production Data Models and Pipelines: Design and launch production-ready data models and pipelines capable of supporting millions of users. Utilize advanced storage and retrieval solutions like Hive, Presto, and BigQuery to ensure efficient data access.
• Drive Data Quality and Business Insights: Implement automated quality frameworks to maintain data accuracy and reliability. Oversee the creation of BI dashboards and data visualizations using tools like Tableau and Looker, providing actionable insights into user engagement and content performance.

This role offers the opportunity to lead SonyLIV's data engineering strategy, driving technological innovation and operational excellence while enabling data-driven decisions that shape the future of OTT entertainment.

Minimum Qualifications:
• 15+ years of progressive experience in data engineering, business intelligence, and data warehousing, including significant expertise in high-volume, real-time data environments.
• Proven track record in building, scaling, and managing large data engineering teams (10+ members), including experience managing managers and guiding teams through complex data challenges.
• Demonstrated success in designing and implementing scalable data architectures, with hands-on experience using modern data technologies (e.g., Spark, Kafka, Redshift, Snowflake, BigQuery) for data ingestion, transformation, and storage.
• Advanced proficiency in SQL and experience with at least one object-oriented programming language (Python, Java, or similar) for custom data solutions and pipeline optimization.
• Strong experience in establishing and enforcing SLAs for data availability, accuracy, and latency, with a focus on data reliability and operational excellence.
• Extensive knowledge of A/B testing methodologies and statistical analysis, including a solid understanding of the application of these techniques for user engagement and content analytics in OTT environments.
• Skilled in data governance, data privacy, and compliance, with hands-on experience implementing security protocols and controls within large data ecosystems.

Preferred Qualifications:
• Bachelor's or Master's degree in Computer Science, Mathematics, Physics, or a related technical field.
• Experience managing the end-to-end data engineering lifecycle, from model design and data ingestion through to visualization and reporting.
• Experience working with large-scale infrastructure, including cloud data warehousing, distributed computing, and advanced storage solutions.
• Familiarity with automated data lineage and data auditing tools to streamline data governance and improve transparency.
• Expertise with BI and visualization tools (e.g., Tableau, Looker) and advanced processing frameworks (e.g., Hive, Presto) for managing high-volume data sets and delivering insights across the organization.

Why join us?
CulverMax Entertainment Pvt Ltd (formerly known as Sony Pictures Networks India) is home to some of India's leading entertainment channels such as SET, SAB, MAX, PAL, PIX, Sony BBC Earth, Yay!, Sony Marathi, Sony SIX, Sony TEN, SONY TEN1, SONY TEN2, SONY TEN3, SONY TEN4, to name a few!
Our foray into the OTT space with one of the most promising streaming platforms, Sony LIV, brings us one step closer to being a progressive, digitally led content powerhouse. Our independent production venture, Studio Next, has already made its mark with original content and IPs for TV and digital media. But our quest to Go Beyond doesn't end there. Neither does our search to find people who can take us there. We focus on creating an inclusive and equitable workplace where we celebrate diversity with our Bring Your Own Self philosophy. We strive to remain an 'Employer of Choice' and have been recognized as:
- India's Best Companies to Work For 2021 by the Great Place to Work® Institute
- 100 Best Companies for Women in India by AVTAR & Seramount for 6 years in a row
- UN Women Empowerment Principles Award 2022 for Gender Responsive Marketplace and Community Engagement & Partnership
- ET Human Capital Awards 2023 for Excellence in HR Business Partnership & Team Building Engagement
- ET Future Skills Awards 2022 for Best Learning Culture in an Organization and Best D&I Learning Initiative
The biggest award, of course, is the thrill our employees feel when they can Tell Stories Beyond the Ordinary!

Posted 6 days ago

Apply

50.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Us: Headquartered in Dublin, Ohio, Cardinal Health, Inc. (NYSE: CAH) is a distributor of pharmaceuticals, a global manufacturer and distributor of medical and laboratory products, and a provider of performance and data solutions for healthcare facilities. We are a crucial link between the clinical and operational sides of healthcare, delivering end-to-end solutions and data-driving insights that advance healthcare and improve lives every day. With deep partnerships, diverse perspectives and innovative digital solutions, we build connections across the continuum of care. With more than 50 years of experience, we seize the opportunity to address healthcare's most complicated challenges — now, and in the future. With approximately 48,000 employees across several countries and Fiscal 2023 revenues of $205 billion, Cardinal Health ranks among the top 15 on the Fortune 500. In Bangalore we have created an Innovation and Global Capability Centre (GCC) in 2021 as part of our Global Business Services (GBS) operating model that allows us to in-house talent and scale that talent across our business in areas such as Enterprise IT, Commercial Technologies and Business Process Solutions. Our ambition is to build differentiated opportunities that allows our organization to advance rapidly to be healthcare’s most trusted partner. What Data Science contributes to Cardinal Health The Data & Analytics Function oversees the analytics lifecycle in order to identify, analyze and present relevant insights that drive business decisions and anticipate opportunities to achieve a competitive advantage. This function manages analytic data platforms, the access, design and implementation of reporting/business intelligence solutions, and the application of advanced quantitative modeling. 
Data Science applies scientific methodologies, techniques and tools from various disciplines to extract knowledge and insight from data, solving complex business problems on large data sets and integrating multiple systems. Qualifications 2-4 years of experience, preferred Bachelor's degree in related field, or equivalent work experience, preferred Technical support and enhancements for GenAI pipelines with RAG Assist with prompt engineering to improve performance Monitor system performance, logs, retrieval quality and prompt effectiveness Provide on-call support during incidents What is expected of you and others at this level Applies working knowledge in the application of concepts, principles and technical capabilities to perform varied tasks Works on projects of moderate scope and complexity Identifies possible solutions to a variety of technical problems and takes action to resolve them, applying judgment within defined parameters Receives general guidance and may receive more detailed instruction on new projects Work is reviewed for sound reasoning and accuracy
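The GenAI pipeline duties above include monitoring retrieval quality. As an illustration only (the metric, document ids, and evaluation data below are invented, not Cardinal Health internals), a simple hit-rate@k check over a labelled evaluation set can be sketched in pure Python:

```python
# Minimal sketch: measuring retrieval quality for a RAG pipeline with hit-rate@k.
# All document ids and the toy evaluation set are illustrative.

def hit_rate_at_k(results: list[list[str]], relevant: list[set[str]], k: int) -> float:
    """Fraction of queries whose top-k retrieved doc ids contain a relevant doc."""
    hits = 0
    for retrieved_ids, gold_ids in zip(results, relevant):
        if gold_ids & set(retrieved_ids[:k]):
            hits += 1
    return hits / len(results)

# Retrieved doc ids per query, and the known-relevant ids for each query.
retrieved = [["d1", "d7", "d3"], ["d4", "d2", "d9"], ["d5", "d6", "d8"]]
gold = [{"d3"}, {"d2"}, {"d1"}]

print(hit_rate_at_k(retrieved, gold, k=2))  # only the second query hits within top-2
```

Tracking a metric like this over time is one concrete way the "retrieval quality" monitoring in the posting tends to be operationalised.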

Posted 6 days ago

Apply

0 years

0 Lacs

Sangareddi, Telangana, India

On-site

Job Description 💰 Compensation Note: The budget for this role is fixed at INR 50–55 lakhs per annum (non-negotiable). Please ensure this aligns with your expectations before applying. 📍 Work Setup: This is a hybrid role, requiring 3 days per week onsite at the office in Hyderabad, India. Company Description: Blend is a premier AI services provider, committed to co-creating meaningful impact for its clients through the power of data science, AI, technology, and people. With a mission to fuel bold visions, Blend tackles significant challenges by seamlessly aligning human expertise with artificial intelligence. The company is dedicated to unlocking value and fostering innovation for its clients by harnessing world-class people and data-driven strategy. We believe that the power of people and AI can have a meaningful impact on your world, creating more fulfilling work and projects for our people and clients. Job Description: We are looking for an AI Engineer with experience in Speech-to-Text and Text Generation to solve a Conversational AI challenge for our client based in EMEA. The focus of this project is to transcribe conversations and leverage generative AI-powered text analytics to drive better engagement strategies and decision-making. The ideal candidate will have deep expertise in Speech-to-Text (STT), Natural Language Processing (NLP), Large Language Models (LLMs), and Conversational AI systems. This role involves working on real-time transcription, intent analysis, sentiment analysis, summarization, and decision-support tools. Key Responsibilities: Conversational AI & Call Transcription Development Develop and fine-tune automatic speech recognition (ASR) models. Implement language model fine-tuning for industry-specific language. Develop speaker diarization techniques to distinguish speakers in multi-speaker conversations. NLP & Generative AI Applications Build summarization models to extract key insights from conversations. 
Implement Named Entity Recognition (NER) to identify key topics. Apply LLMs for conversation analytics and context-aware recommendations. Design custom RAG (Retrieval-Augmented Generation) pipelines to enrich call summaries with external knowledge. Sentiment Analysis & Decision Support Develop sentiment and intent classification models. Create predictive models that suggest next-best actions based on call content, engagement levels, and historical data. AI Deployment & Scalability Deploy AI models using tools like AWS, GCP, Azure AI, ensuring scalability and real-time processing. Optimize inference pipelines using ONNX, TensorRT, or Triton for cost-effective model serving. Implement MLOps workflows to continuously improve model performance with new call data. Qualifications: Technical Skills Strong experience in Speech-to-Text (ASR), NLP, and Conversational AI. Hands-on expertise with tools like Whisper, DeepSpeech, Kaldi, AWS Transcribe, Google Speech-to-Text. Proficiency in Python, PyTorch, TensorFlow, Hugging Face Transformers. Experience with LLM fine-tuning, RAG-based architectures, and LangChain. Hands-on experience with Vector Databases (FAISS, Pinecone, Weaviate, ChromaDB) for knowledge retrieval. Experience deploying AI models using Docker, Kubernetes, FastAPI, Flask. Soft Skills Ability to translate AI insights into business impact. Strong problem-solving skills and ability to work in a fast-paced AI-first environment. Excellent communication skills to collaborate with cross-functional teams, including data scientists, engineers, and client stakeholders. Preferred Qualifications Experience in healthcare, pharma, or life sciences NLP use cases. Background in knowledge graphs, prompt engineering, and multimodal AI. Experience with Reinforcement Learning (RLHF) for improving conversation models.
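The RAG pipelines and vector databases listed above all revolve around one core operation: nearest-neighbour lookup over embeddings. A minimal pure-Python sketch of that step follows; the vectors and document ids are invented, and a production system would use FAISS, Pinecone, or Weaviate with learned embeddings rather than hand-written lists:

```python
# Illustrative sketch of the retrieval step at the heart of a RAG pipeline:
# rank documents by cosine similarity to a query embedding.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, index: dict, top_k: int = 2):
    """Return the top_k document ids ranked by cosine similarity."""
    ranked = sorted(index, key=lambda doc_id: cosine(query_vec, index[doc_id]), reverse=True)
    return ranked[:top_k]

# Toy "vector database": made-up 3-dimensional embeddings keyed by doc id.
index = {
    "call_summary_001": [0.9, 0.1, 0.0],
    "call_summary_002": [0.1, 0.8, 0.2],
    "product_faq": [0.0, 0.2, 0.9],
}
print(retrieve([0.85, 0.15, 0.05], index, top_k=1))  # nearest: call_summary_001
```

The retrieved documents are then injected into the LLM prompt, which is what lets call summaries be enriched with external knowledge.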

Posted 6 days ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Join us at Seismic, a cutting-edge technology company leading the way in the SaaS industry. We specialize in delivering modern, scalable, and multi-cloud solutions that empower businesses to succeed in today's digital era. Leveraging the latest advancements in technology, including Generative AI, we are committed to driving innovation and transforming the way businesses operate. As we embark on an exciting journey of growth and expansion, we are seeking top engineering talent to join our AI team in Hyderabad, India. As an Engineer II, you will play a crucial role in developing and optimizing backend systems that power our web application, including content discovery, knowledge management, learning and coaching, meeting intelligence and various AI capabilities. You will collaborate with cross-functional teams to design, build, and maintain scalable, high-performance systems that deliver exceptional value to our customers. This position offers a unique opportunity to make a significant impact on our company's growth and success by contributing to the technical excellence and innovation of our software solutions. If you are a passionate technologist with a strong track record of building AI products, and you thrive in a fast-paced, innovative environment, we want to hear from you! Seismic AI AI is one of the fastest growing product areas in Seismic. We believe that AI, particularly Generative AI, will empower and transform how Enterprise sales and marketing organizations operate and interact with customers. Seismic Aura, our leading AI engine, is powering this change in the sales enablement space and is being infused across the Seismic enablement cloud. Our focus is to leverage AI across the Seismic platform to make our customers more productive and efficient in their day-to-day tasks, and to drive more successful sales outcomes. Why Join Us Opportunity to be a key technical leader in a rapidly growing company and drive innovation in the SaaS industry. 
Work with cutting-edge technologies and be at the forefront of AI advancements. Competitive compensation package, including salary, bonus, and equity options. A supportive, inclusive work culture. Professional development opportunities and career growth potential in a dynamic and collaborative environment. At Seismic, we’re committed to providing benefits and perks for the whole self. To explore our benefits available in each country, please visit the Global Benefits page. Please be aware we have noticed an increase in hiring scams potentially targeting Seismic candidates. Read our full statement on our Careers page. Seismic is the global leader in AI-powered enablement, empowering go-to-market leaders to drive strategic growth and deliver exceptional customer experiences at scale. The Seismic Enablement Cloud™ is the only unified AI-powered platform that prepares customer-facing teams with the skills, content, tools, and insights needed to maximize every buyer interaction and strengthen client relationships. Trusted by more than 2,000 organizations worldwide, Seismic helps businesses achieve measurable outcomes and accelerate revenue growth. Seismic is headquartered in San Diego with offices across North America, Europe, Asia and Australia. Learn more at seismic.com. Seismic is committed to building an inclusive workplace that ignites growth for our employees and creates a culture of belonging that allows all employees to be seen and valued for who they are. Learn more about DEI at Seismic here. Distributed Systems Development: Design, develop, and maintain backend systems and services for AI, information extraction or information retrieval functionality, ensuring high performance, scalability, and reliability. Integration: Collaborate with data scientists, AI engineers, and product teams to integrate AI-driven capabilities across the Seismic platform. 
Performance Tuning: Monitor and optimize service performance, addressing bottlenecks and ensuring low-latency query responses. Technical Leadership: Provide technical guidance and mentorship to junior engineers, promoting best practices in backend software development. Collaboration: Work closely with cross-functional and geographically distributed teams, including product managers, frontend engineers, and UX designers, to deliver seamless and intuitive experiences. Continuous Improvement: Stay updated with the latest trends and advancements in software and technologies, conducting research and experimentation to drive innovation. Experience: 2+ years of experience in software engineering and a proven track record of building and scaling microservices and working with data retrieval systems. Technical Expertise: Experience with C# and .NET, unit testing, object-oriented programming, and relational databases. Experience with Infrastructure as Code (Terraform, Pulumi, etc.), event-driven architectures with tools like Kafka, and feature management (LaunchDarkly) is good to have. Front-end/full-stack experience is a plus. Cloud Expertise: Experience with cloud platforms like AWS, Google Cloud Platform (GCP), or Microsoft Azure. Knowledge of cloud-native services for AI/ML, data storage, and processing. Experience deploying containerized applications into Kubernetes is a plus. AI: Proficiency in building and deploying Generative AI use cases is a plus. Experience with Natural Language Processing (NLP). Semantic search with platforms like Elasticsearch is a plus. SaaS Knowledge: Extensive experience in SaaS application development and cloud technologies, with a deep understanding of modern distributed systems and cloud operational infrastructure. Product Development: Experience in collaborating with product management and design, with the ability to translate business requirements into technical solutions that drive successful delivery. 
Proven record of driving feature development from concept to launch. Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Fast-paced Environment: Experience working in a fast-paced, dynamic environment, preferably in a SaaS or technology-driven company. If you are an individual with a disability and would like to request a reasonable accommodation as part of the application or recruiting process, please click here. Headquartered in San Diego and with employees across the globe, Seismic is the global leader in sales enablement , backed by firms such as Permira, Ameriprise Financial, EDBI, Lightspeed Venture Partners, and T. Rowe Price. Seismic also expanded its team and product portfolio with the strategic acquisitions of SAVO, Percolate, Grapevine6, and Lessonly. Our board of directors is composed of several industry luminaries including John Thompson, former Chairman of the Board for Microsoft. Seismic is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to gender, age, race, religion, or any other classification which is protected by applicable law. Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice.
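For candidates new to the event-driven architectures mentioned above, the underlying publish/subscribe pattern can be illustrated with a toy in-memory bus. This is not Seismic's stack (Kafka adds durable, partitioned, replayable logs), and all topic and event names below are invented:

```python
# Toy sketch of the publish/subscribe pattern behind event-driven architectures:
# producers emit events to a topic without knowing which consumers will react.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
seen = []
bus.subscribe("content.viewed", seen.append)     # e.g. an analytics consumer
bus.subscribe("content.viewed", lambda e: None)  # e.g. a recommendation consumer
bus.publish("content.viewed", {"doc_id": "d42", "user": "u7"})
print(seen)  # the analytics consumer received the event
```

The decoupling shown here is what lets new consumers be added without touching producers, which is the main architectural motivation for tools like Kafka.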

Posted 6 days ago

Apply



8.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Position: Staff Engineer - Data, Digital Business Role Overview - Role involves leading SonyLIV's data engineering strategy, architecting scalable data infrastructure, driving innovation in data processing, ensuring operational excellence, and fostering a high-performance team to enable data-driven insights for OTT content and user engagement. Location - Mumbai Experience - 8+ years Responsibilities: Define the Technical Vision for Scalable Data Infrastructure: Establish a robust technical strategy for SonyLIV’s data and analytics platform, architecting a scalable, high-performance data ecosystem using modern technologies like Spark, Kafka, Snowflake, and cloud services (AWS/GCP). Lead Innovation in Data Processing and Architecture: Advance SonyLIV’s data engineering practices by implementing real-time data processing, optimized ETL pipelines, and streaming analytics through tools like Apache Airflow, Spark, and Kubernetes. Enable high-speed data processing to support real-time insights for content and user engagement. Ensure Operational Excellence in Data Systems: Set and enforce standards for data reliability, privacy, and performance. Define SLAs for production data processes, using monitoring tools (Grafana, Prometheus) to maintain system health and quickly resolve issues. Build and Mentor a High-Caliber Data Engineering Team: Recruit and lead a skilled team with strengths in distributed computing, cloud infrastructure, and data security. Foster a collaborative and innovative culture, focused on technical excellence and efficiency. Collaborate with Cross-Functional Teams: Partner closely with Data Scientists, Software Engineers, and Product Managers to deliver scalable data solutions for personalization algorithms, recommendation engines, and content analytics. Architect and Manage Production Data Models and Pipelines: Design and launch production-ready data models and pipelines capable of supporting millions of users. 
Utilize advanced storage and retrieval solutions like Hive, Presto, and BigQuery to ensure efficient data access. Drive Data Quality and Business Insights: Implement automated quality frameworks to maintain data accuracy and reliability. Oversee the creation of BI dashboards and data visualizations using tools like Tableau and Looker, providing actionable insights into user engagement and content performance. This role offers the opportunity to lead SonyLIV’s data engineering strategy, driving technological innovation and operational excellence while enabling data-driven decisions that shape the future of OTT entertainment. Minimum Qualifications: 8+ years of progressive experience in data engineering, business intelligence, and data warehousing, including significant expertise in high-volume, real-time data environments. Proven track record in building, scaling, and managing large data engineering teams (10+ members), including experience managing managers and guiding teams through complex data challenges. Demonstrated success in designing and implementing scalable data architectures, with hands-on experience using modern data technologies (e.g., Spark, Kafka, Redshift, Snowflake, BigQuery) for data ingestion, transformation, and storage. Advanced proficiency in SQL and experience with at least one object-oriented programming language (Python, Java, or similar) for custom data solutions and pipeline optimization. Strong experience in establishing and enforcing SLAs for data availability, accuracy, and latency, with a focus on data reliability and operational excellence. Extensive knowledge of A/B testing methodologies and statistical analysis, including a solid understanding of the application of these techniques for user engagement and content analytics in OTT environments. Skilled in data governance, data privacy, and compliance, with hands-on experience implementing security protocols and controls within large data ecosystems. 
Preferred Qualifications: Bachelor's or Master’s degree in Computer Science, Mathematics, Physics, or a related technical field. Experience managing the end-to-end data engineering lifecycle, from model design and data ingestion through to visualization and reporting. Experience working with large-scale infrastructure, including cloud data warehousing, distributed computing, and advanced storage solutions. Familiarity with automated data lineage and data auditing tools to streamline data governance and improve transparency. Expertise with BI and visualization tools (e.g., Tableau, Looker) and advanced processing frameworks (e.g., Hive, Presto) for managing high-volume data sets and delivering insights across the organization. Why SPNI? Join Our Team at SonyLIV Drive the Future of Data-Driven Entertainment Are you passionate about working with big data? Do you want to shape the direction of products that impact millions of users daily? If so, we want to connect with you. We’re seeking a leader for our Data Engineering team who will collaborate with Product Managers, Data Scientists, Software Engineers, and ML Engineers to support our AI infrastructure roadmap. In this role, you’ll design and implement the data architecture that guides decision-making and drives insights, directly impacting our platform’s growth and enriching user experiences. As a part of SonyLIV, you’ll work with some of the brightest minds in the industry, access one of the most comprehensive data sets in the world and leverage cutting-edge technology. Your contributions will have a tangible effect on the products we deliver and the viewers we engage. The ideal candidate will bring a strong foundation in data infrastructure and data architecture, a proven record of leading and scaling data teams, and operational excellence to enhance efficiency.
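One concrete form of the SLA enforcement for data availability and latency described above is a freshness check over pipeline load times. A hedged sketch follows; the table names and threshold are invented, and real alerting would route through monitoring tools like Grafana/Prometheus:

```python
# Illustrative sketch: flag tables whose last successful load breaches a
# freshness SLA. Table names and the 1-hour lag threshold are made up.
from datetime import datetime, timedelta, timezone

def sla_breaches(last_loaded: dict, max_lag: timedelta, now: datetime) -> list:
    """Return tables whose most recent load is older than the allowed lag."""
    return sorted(t for t, ts in last_loaded.items() if now - ts > max_lag)

now = datetime(2025, 1, 1, 12, 0, tzinfo=timezone.utc)
last_loaded = {
    "user_engagement": now - timedelta(minutes=20),  # fresh
    "content_catalog": now - timedelta(hours=5),     # stale
}
print(sla_breaches(last_loaded, max_lag=timedelta(hours=1), now=now))  # ['content_catalog']
```

Running a check like this on a schedule, and paging when it returns a non-empty list, is one simple way SLAs on data latency get enforced in practice.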

Posted 6 days ago

Apply

0.0 - 1.0 years

0 - 0 Lacs

Coimbatore, Tamil Nadu

On-site

Company: Dextra Square Pvt Ltd Location: Coimbatore, Tamil Nadu, India Experience: 0-1 year About Dextra Square Pvt Ltd: Dextra Square Pvt Ltd, established in 2016, is a prominent manufacturer and supplier of high-quality wire netting and building materials, operating under our well-known brand, "Just Fence." While headquartered in Bangalore, with significant operations in Chennai, we're expanding our presence and building strong teams across South India. We specialize in a range of products including barbed wire, compound walls, fencing, and various types of mesh, providing comprehensive solutions to our clients. At Dextra Square, we are committed to fostering a supportive and growth-oriented work environment, recognizing that our employees are our greatest asset. We believe in precision, efficiency, and a team-first approach. Job Summary: Dextra Square Pvt Ltd is looking for a meticulous and proactive Account Executive to join our growing team in Coimbatore. This entry-level role is perfect for a recent graduate or someone with up to one year of experience who possesses strong data entry skills and a foundational understanding of Tally. You'll be crucial in maintaining accurate financial records, supporting our accounting operations, and ensuring the smooth flow of financial data. Fluency in Tamil is a mandatory requirement for effective communication within our team and with local vendors. Key Responsibilities: Data Entry: Accurately input financial data into our accounting system, including sales invoices, purchase orders, expense reports, and other financial transactions. Tally Operations: Utilize Tally software for various accounting tasks such as ledger maintenance, bank reconciliation, generating basic financial reports (e.g., trial balance, profit & loss statements), and managing inventory entries. Record Keeping: Maintain organized and up-to-date physical and digital financial records, ensuring easy retrieval and compliance. 
Reconciliation Support: Assist in reconciling discrepancies in accounts and financial statements. Vendor & Customer Support: Coordinate with vendors and customers regarding payments, invoices, and other financial queries. Documentation: Prepare and process financial documents, including vouchers, receipts, and payment advices. Ad-hoc Tasks: Support the accounting team with other administrative and financial tasks as required. Skills and Qualifications: Education: Minimum a Bachelor's degree in Commerce, Accounting, Finance, or a related field. Experience: 0-1 year of experience in data entry, accounting support, or a similar role. Fresh graduates with relevant project work or internships are welcome to apply. Technical Skills: Proven experience with data entry with high accuracy and speed. Solid working knowledge of Tally (Tally Prime preferred) is essential. Proficiency in Microsoft Excel for basic data management and analysis. Language Proficiency: Fluency in Tamil (both spoken and written) is mandatory. Attention to Detail: Exceptional accuracy and an eye for detail in handling numerical data. Organizational Skills: Strong ability to organize financial documents and manage time effectively. Team Player: Ability to work collaboratively within a team environment. Proactiveness: A keen willingness to learn and take initiative. Job Types: Full-time, Permanent Pay: ₹12,000.00 - ₹20,000.00 per month Benefits: Flexible schedule Leave encashment Paid sick time Paid time off Provident Fund Ability to commute/relocate: Coimbatore, Tamil Nadu: Reliably commute or planning to relocate before starting work (Preferred) Application Question(s): We are trying to close the position as soon as possible, will you be able to join immediately? Location: Coimbatore, Tamil Nadu (Preferred) Work Location: In person Expected Start Date: 21/07/2025
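The reconciliation support described above boils down to finding entries that appear on one side but not the other. A small illustrative sketch (the references and amounts are made up, and Tally produces such reports natively; this only shows the matching idea):

```python
# Illustrative sketch of the matching step in a bank reconciliation: find
# ledger entries with no counterpart on the bank statement, and vice versa.
from collections import Counter

def unmatched(ledger, statement):
    l, s = Counter(ledger), Counter(statement)
    only_ledger = sorted((l - s).elements())  # booked but not yet on the statement
    only_bank = sorted((s - l).elements())    # on the statement but not yet booked
    return only_ledger, only_bank

ledger = [("INV-101", 12000.0), ("INV-102", 4500.0), ("EXP-007", 800.0)]
statement = [("INV-101", 12000.0), ("EXP-007", 800.0), ("BANKCHG", 150.0)]
print(unmatched(ledger, statement))  # an uncleared invoice and an unbooked bank charge
```

Each item in the two returned lists is a discrepancy to investigate, which is exactly the follow-up work the role's reconciliation duties describe.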

Posted 6 days ago

Apply

4.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

To manage, store & standardize commissioning-related documentation on Autodesk Construction Cloud & CxAlloy platform and support the HO team in various administrative tasks. Qualifications and Experience Bachelor of Engineering (B.E) in any domain Post Graduation / MBA preferred Certified/ Proficient in handling Autodesk Construction Cloud Proficiency in handling MS Excel Aptitude to learn commissioning tools & software like CxAlloy Key Responsibilities of Role Minimum 4 Years of experience 1. Centralized Document Repository Management Create, organize, and maintain a centralized repository for all Testing & Commissioning (T&C) documentation across ACX projects Ensure version control, proper indexing, and secure access protocols for all stored documents. Archive obsolete documents per ACX retention policies and ensure traceability for future reference 2. Autodesk Construction Cloud (ACC) Proficiency & aptitude to learn other commissioning software Manage document workflows within ACC, including uploading, tagging, and submitting documents for review Ensure documents are stored in the correct folders (e.g., Commissioning Folder) and follow the ACC submittal process tailored for ACX India projects Collaborate with BCEI and third-party CxA teams to align ACC usage with global and local standards 3. Commissioning Documentation Oversight Understand and implement the ACX version of BCEI Book of Rules for documentation, including naming conventions, cover sheets, and discipline-specific workflows (Electrical, Mechanical, Fire, Plumbing) Manage submittals such as Method of Statement (MoS), Inspection Test Plans (ITP), FAT/FWT scripts, Cx scripts (L2–L5), Energization Plans, QAQC Plans, and calibration certificates Ensure all Cx documentation is reviewed and approved through the designated workflow involving all stakeholders. 
4. Administrative Support Assist in filing claims, booking travel tickets, and managing training budgets for the HO Testing & Commissioning team Coordinate with internal stakeholders and external vendors to ensure timely execution of administrative tasks. 5. Compliance & Quality Assurance Ensure all documentation complies with ACX Integrated Management System (IMS) procedures and quality standards Support audits by maintaining accurate records and facilitating document retrieval for review. 6. Communication & Coordination Liaise with consultants, contractors, and internal teams to ensure timely document submissions and approvals Provide updates to stakeholders on document status, revisions, and access protocols. 7. Training & Process Improvement Support onboarding and training of site teams on ACC workflows and documentation standards. Identify gaps in documentation practices and recommend improvements to enhance efficiency and compliance.
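Naming conventions like those in the Book of Rules are easiest to enforce mechanically. A hypothetical sketch follows; the PROJECT-DISCIPLINE-DOCTYPE-REV pattern below is invented for illustration and is not the actual BCEI convention:

```python
# Hypothetical sketch: validate commissioning document names against an
# invented naming convention PROJECT-DISCIPLINE-DOCTYPE-REV, where the
# discipline and doctype codes are example values only.
import re

NAME_RE = re.compile(r"^[A-Z0-9]+-(EL|ME|FP|PL)-(MOS|ITP|FAT|CX)-R\d+$")

def invalid_names(names):
    """Return the names that do not follow the convention."""
    return [n for n in names if not NAME_RE.match(n)]

docs = ["ACX01-EL-MOS-R0", "ACX01-ME-ITP-R2", "acx01-el-mos", "ACX01-HV-FAT-R1"]
print(invalid_names(docs))  # the lowercase name and the unknown discipline code fail
```

A check like this, run before upload, keeps a centralized repository searchable and makes version control (the `R<digit>` suffix here) auditable.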

Posted 6 days ago

Apply

0.0 - 2.0 years

0 Lacs

Vijay Nagar, Indore, Madhya Pradesh

On-site

Hiring for AI Engineer - Python Developer: Job Description: We are seeking a talented Python Developer with hands-on experience in AI chatbot development and familiarity with Model Context Protocol (MCP) to join our AI team. You will be responsible for developing intelligent, context-aware conversational systems that integrate seamlessly with our internal knowledge base and enterprise services. The ideal candidate is technically proficient, proactive, and capable of translating complex AI interactions into scalable backend solutions. Key Responsibilities 1. Design and develop robust AI chatbots using Python and integrate them with LLM APIs (e.g., OpenAI, Google AI, etc.). 2. Implement and manage Model Context Protocol (MCP) to optimize context injection, session management, and model-aware interactions. 3. Build and maintain secure pipelines for knowledge base access that allow the chatbot to accurately respond to internal queries. 4. Work with internal teams to define and evolve the contextual metadata strategy (roles, user state, query history, etc.). 5. Contribute to internal tooling and framework development for contextual AI applications. Required Skills & Experience 1. 3+ years of professional Python development experience. 2. Proven track record in AI chatbot development, particularly using LLMs. 3. Understanding of Model Context Protocol (MCP) and its role in enhancing AI interaction fidelity and relevance. 4. Strong experience integrating with AI APIs (e.g., OpenAI, Azure OpenAI). 5. Familiarity with Retrieval-Augmented Generation (RAG) pipelines and vector-based search (e.g., Pinecone, Weaviate, FAISS). 6. Experience designing systems that ingest and structure unstructured knowledge (e.g., PDF, Confluence, Google Drive docs). 7. Comfortable working with RESTful APIs, event-driven architectures, and context-aware services. 8. Good understanding of data handling, privacy, and security standards related to enterprise AI use.
Job Location: Indore Joining: Immediate Share resume at talent@jstechalliance.com or contact: 0731-3122400 WhatsApp: 8224006397 Job Type: Full-time Application Question(s): Immediate Joiner Have you completed your Bachelor's/Master's Degree? Experience: Python: 3 years (Required) Model Context Protocol (MCP): 3 years (Required) LLM APIs: 3 years (Required) Artificial Intelligence: 2 years (Required) Location: Vijay Nagar, Indore, Madhya Pradesh (Required) Work Location: In person
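The retrieval-augmented flow this role centers on (fetch the most relevant knowledge-base passage, then inject it into the model's context) can be sketched in a few lines. This is a minimal illustration, not the employer's implementation: the toy knowledge base, the keyword-overlap scoring, and the prompt template are all assumptions, and a production pipeline would use embeddings with a vector store such as FAISS, Pinecone, or Weaviate instead.

```python
# Minimal RAG-style sketch: retrieve the best-matching knowledge-base
# passage by keyword overlap, then inject it into an LLM prompt.
# Toy data and scoring are illustrative only; real pipelines use
# embeddings and a vector database (FAISS, Pinecone, Weaviate).

KNOWLEDGE_BASE = [
    "Employees accrue 18 days of paid leave per calendar year.",
    "VPN access requires a ticket approved by the security team.",
    "Expense reports are submitted through the finance portal.",
]

def score(query: str, passage: str) -> float:
    """Jaccard overlap between the query and passage word sets."""
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p) / len(q | p) if q | p else 0.0

def retrieve(query: str) -> str:
    """Return the passage most similar to the query."""
    return max(KNOWLEDGE_BASE, key=lambda passage: score(query, passage))

def build_prompt(query: str) -> str:
    """Context injection: ground the model's answer in retrieved text."""
    context = retrieve(query)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How many days of paid leave do employees get?")
```

The assembled `prompt` string would then be sent to an LLM API such as OpenAI's; the grounding step is what keeps answers tied to internal knowledge rather than model memory.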

Posted 6 days ago

Apply

0 years

0 Lacs

India

On-site

Job Description: Must-Have: Strong hands-on experience in Python programming. Proficient in building REST APIs using frameworks such as Django REST Framework, Flask, or FastAPI. Solid understanding and working experience with Microsoft Azure services. Nice-to-Have: Good knowledge of SQL databases, Cosmos DB, and Azure Blob Storage. Experience with containerization tools like Docker and Docker Compose. Familiarity with CI/CD processes using tools such as Azure DevOps, Jenkins, GitHub Actions, or other CI/CD platforms. Roles and Responsibilities: Design, develop, and deploy scalable REST APIs using Python and modern web frameworks (Django REST/Flask/FastAPI). Integrate APIs with cloud-based backend services hosted on Azure. Work with both relational and NoSQL databases (e.g., SQL, Cosmos DB). Implement secure and efficient data storage and retrieval mechanisms using Azure Blob Storage and other Azure-native components. Build, test, and deploy microservices using Docker and Docker Compose. Participate in end-to-end CI/CD processes to ensure rapid, reliable delivery of features and bug fixes. Collaborate closely with DevOps, QA, and front-end teams to align on architecture, performance, and scalability. Follow best practices in code quality, version control, testing, and documentation. Contribute to technical discussions and architectural decisions for ongoing and upcoming projects.
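The REST API work described in this posting can be illustrated with a stdlib-only sketch of a JSON GET endpoint. In practice the role would use FastAPI, Flask, or Django REST Framework as listed; the route, resource name, and payload here are made-up examples, not part of the posting.

```python
# Stdlib-only sketch of a JSON REST endpoint of the kind this role
# builds with FastAPI/Flask/DRF. Route and payload are illustrative.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

ITEMS = {"1": {"id": "1", "name": "widget"}}  # stand-in data store

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # GET /items/<id> -> 200 with a JSON body, otherwise 404.
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "items" and parts[1] in ITEMS:
            body = json.dumps(ITEMS[parts[1]]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep request logging quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

with urlopen(f"http://127.0.0.1:{port}/items/1") as resp:
    data = json.loads(resp.read())
server.shutdown()
```

A framework like FastAPI replaces the hand-rolled routing and serialization with decorated handler functions, but the request/response contract is the same.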

Posted 6 days ago

Apply

0.0 - 1.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Summary The PE-Accounts Payable role is designed for individuals with 0 to 1 year of experience focusing on invoice processing and payments. The candidate will work from the office in a rotational shift model ensuring timely and accurate financial transactions. Proficiency in MS Word and MS Excel is essential for success in this role. Responsibilities Process invoices accurately and efficiently to ensure timely payments to vendors and suppliers. Verify and reconcile invoice discrepancies to maintain financial accuracy and integrity. Collaborate with internal departments to resolve payment issues and discrepancies. Maintain organized records of all transactions for easy retrieval and audit purposes. Utilize MS Excel to create and manage spreadsheets for tracking payment statuses. Prepare and process electronic transfers and payments in a timely manner. Ensure compliance with company policies and financial regulations during payment processing. Assist in month-end closing activities by providing necessary documentation and reports. Communicate effectively with vendors to address and resolve payment-related inquiries. Monitor accounts to ensure payments are up to date and follow up on outstanding invoices. Support the finance team in preparing financial reports and statements as needed. Participate in continuous improvement initiatives to enhance the efficiency of the accounts payable process. Adapt to rotational shifts to provide consistent support and coverage for the accounts payable function. Qualifications Demonstrate proficiency in MS Word and MS Excel for document and spreadsheet management. Exhibit strong attention to detail and accuracy in processing financial transactions. Possess excellent communication skills for effective interaction with vendors and internal teams. Show ability to work independently and collaboratively in a fast-paced environment. Display strong organizational skills to manage multiple tasks and priorities. 
Have a basic understanding of financial principles and accounting practices. Be willing to work in a rotational shift model to ensure consistent support for the team. Certifications Required Certification in MS Office Suite or equivalent is preferred.

Posted 6 days ago

Apply

3.0 - 7.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY GDS – Data and Analytics (D&A) – Senior – Senior Data Scientist Role Overview: We are seeking a highly skilled and experienced Senior Data Scientist with a minimum of 3 - 7 years of experience in Data Science and Machine Learning, preferably with experience in NLP, Generative AI, LLMs, MLOps, Optimization techniques, and AI solution Architecture. In this role, you will play a key role in the development and implementation of AI solutions, leveraging your technical expertise. The ideal candidate should have a deep understanding of AI technologies and experience in designing and implementing cutting-edge AI models and systems. Additionally, expertise in data engineering, DevOps, and MLOps practices will be valuable in this role. Responsibilities: Your technical responsibilities: Contribute to the design and implementation of state-of-the-art AI solutions. Assist in the development and implementation of AI models and systems, leveraging techniques such as Language Models (LLMs) and generative AI. Collaborate with stakeholders to identify business opportunities and define AI project goals. Stay updated with the latest advancements in generative AI techniques, such as LLMs, and evaluate their potential applications in solving enterprise challenges. Utilize generative AI techniques, such as LLMs, to develop innovative solutions for enterprise industry use cases. Integrate with relevant APIs and libraries, such as Azure Open AI GPT models and Hugging Face Transformers, to leverage pre-trained models and enhance generative AI capabilities. 
Implement and optimize end-to-end pipelines for generative AI projects, ensuring seamless data processing and model deployment. Utilize vector databases, such as Redis, and NoSQL databases to efficiently handle large-scale generative AI datasets and outputs. Implement similarity search algorithms and techniques to enable efficient and accurate retrieval of relevant information from generative AI outputs. Collaborate with domain experts, stakeholders, and clients to understand specific business requirements and tailor generative AI solutions accordingly. Conduct research and evaluation of advanced AI techniques, including transfer learning, domain adaptation, and model compression, to enhance performance and efficiency. Establish evaluation metrics and methodologies to assess the quality, coherence, and relevance of generative AI outputs for enterprise industry use cases. Ensure compliance with data privacy, security, and ethical considerations in AI applications. Leverage data engineering skills to curate, clean, and preprocess large-scale datasets for generative AI applications. Requirements: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. A Ph.D. is a plus. Minimum 3-7 years of experience in Data Science and Machine Learning. In-depth knowledge of machine learning, deep learning, and generative AI techniques. Proficiency in programming languages such as Python, R, and frameworks like TensorFlow or PyTorch. Strong understanding of NLP techniques and frameworks such as BERT, GPT, or Transformer models. Familiarity with computer vision techniques for image recognition, object detection, or image generation. Experience with cloud platforms such as Azure, AWS, or GCP and deploying AI solutions in a cloud environment. Expertise in data engineering, including data curation, cleaning, and preprocessing. Knowledge of trusted AI practices, ensuring fairness, transparency, and accountability in AI models and systems. 
Strong collaboration with software engineering and operations teams to ensure seamless integration and deployment of AI models. Excellent problem-solving and analytical skills, with the ability to translate business requirements into technical solutions. Strong communication and interpersonal skills, with the ability to collaborate effectively with stakeholders at various levels. Understanding of data privacy, security, and ethical considerations in AI applications. Track record of driving innovation and staying updated with the latest AI research and advancements. Good to Have Skills: Apply trusted AI practices to ensure fairness, transparency, and accountability in AI models and systems. Utilize optimization tools and techniques, including MIP (Mixed Integer Programming). Drive DevOps and MLOps practices, covering continuous integration, deployment, and monitoring of AI models. Implement CI/CD pipelines for streamlined model deployment and scaling processes. Utilize tools such as Docker, Kubernetes, and Git to build and manage AI pipelines. Apply infrastructure as code (IaC) principles, employing tools like Terraform or CloudFormation. Implement monitoring and logging tools to ensure AI model performance and reliability. Collaborate seamlessly with software engineering and operations teams for efficient AI model integration and deployment. Familiarity with DevOps and MLOps practices, including continuous integration, deployment, and monitoring of AI models. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
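The similarity-search responsibility listed in this role (efficient and accurate retrieval of relevant information) typically reduces to nearest-neighbour lookup over embedding vectors. A minimal sketch, with toy 3-dimensional vectors standing in for real model embeddings; a production system would use a vector database such as Redis rather than this in-memory scan.

```python
# Top-k cosine-similarity search over toy embedding vectors.
# The 3-d vectors are stand-ins for real model embeddings.
import heapq
import math

DOCS = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.9, 0.1, 0.0],
    "doc_c": [0.0, 1.0, 0.0],
}

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def top_k(query, k=2):
    """Return the ids of the k documents most similar to the query."""
    return heapq.nlargest(k, DOCS, key=lambda d: cosine(query, DOCS[d]))

result = top_k([1.0, 0.05, 0.0], k=2)
```

Approximate-nearest-neighbour indexes (HNSW, IVF) replace the linear scan once collections grow, but the ranking criterion stays the same cosine score.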

Posted 6 days ago

Apply

0 years

3 - 3 Lacs

Cochin

On-site

Overview Join a global organization with 82,000+ employees around the world in an AI/Automation Lead role based in IQVIA Kochi & Bangalore. You will be part of IQVIA’s world-class technology team and will be involved in the design and development of enhanced software programs, cloud applications, and proprietary products. We’re offering an exciting opportunity to work on cutting-edge problem-solving using Generative AI and automation technologies. If you're passionate about innovation, thrive in a fast-paced environment, and want to make a tangible impact across teams—this role is for you! Job Overview: What You’ll Do Conduct research and experimentation with state-of-the-art generative AI models, including but not limited to LLMs, diffusion models, and audio/video synthesis models. Explore and evaluate the latest AI technologies to address business use cases, prototype solutions, and present demos. Develop and integrate Retrieval-Augmented Generation (RAG) pipelines to enhance the contextual relevance and accuracy of generative model outputs. Apply Prompt Engineering techniques to optimize LLM behavior across diverse tasks and domains. Design and implement scalable, efficient AI-driven solutions tailored to diverse team needs. Collaborate with stakeholders to validate solutions and refine outcomes. Design and manage scalable deployment pipelines for AI models using Azure and other cloud platforms. Benchmark and evaluate AI/ML services across major cloud providers (Azure, AWS, GCP) for performance, cost, and scalability. What We’re Looking For Strong problem-solving skills and a creative mindset. Proficiency in Python and experience with AI/ML frameworks (e.g., PyTorch, TensorFlow). A self-starter with a strong passion for AI/ML research, automation, and innovation who constantly learns and applies new AI frameworks and tools. Proven ability to work independently and deliver high-impact results. Excellent communication skills for presenting ideas and solutions clearly.
IQVIA is a leading global provider of clinical research services, commercial insights and healthcare intelligence to the life sciences and healthcare industries. We create intelligent connections to accelerate the development and commercialization of innovative medical treatments to help improve patient outcomes and population health worldwide. Learn more at https://jobs.iqvia.com
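Prompt engineering, one of the responsibilities named in this role, often comes down to assembling task instructions, few-shot examples, and context into a single consistent template. A hypothetical sketch; the template shape, the sentiment task, and the example data are all illustrative inventions, not IQVIA's.

```python
# Hypothetical few-shot prompt builder: instructions, examples, and
# context assembled into one template string for an LLM call.
FEW_SHOT = [
    ("The invoice total is $120.", "positive"),
    ("The shipment arrived damaged.", "negative"),
]

def build_prompt(task, examples, context, query):
    """Assemble a few-shot prompt ending at the slot the model fills."""
    lines = [f"Task: {task}", f"Context: {context}", ""]
    for text, label in examples:
        lines.append(f"Input: {text}")
        lines.append(f"Output: {label}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")  # the model completes from here
    return "\n".join(lines)

prompt = build_prompt(
    task="Classify the sentiment of the input.",
    examples=FEW_SHOT,
    context="Customer-support ticket excerpts.",
    query="Great service, resolved in minutes.",
)
```

Keeping the template in one function makes it easy to vary instructions, example count, and context length when benchmarking model behavior across tasks.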

Posted 6 days ago

Apply

0 years

0 Lacs

Cochin

On-site

We’re offering an exciting opportunity to work on cutting-edge problem-solving using Generative AI and automation technologies. If you're passionate about innovation, thrive in a fast-paced environment, and want to make a tangible impact across teams—this role is for you! What You’ll Do Conduct research and experimentation with state-of-the-art generative AI models, including but not limited to LLMs, diffusion models, and audio/video synthesis models. Explore and evaluate the latest AI technologies to address business use cases, prototype solutions, and present demos Develop and integrate Retrieval-Augmented Generation (RAG) pipelines to enhance the contextual relevance and accuracy of generative model outputs. Apply Prompt Engineering techniques to optimize LLM behavior across diverse tasks and domains. Design and implement scalable, efficient AI-driven solutions tailored to diverse team needs. Collaborate with stakeholders to validate solutions and refine outcomes Design and manage scalable deployment pipelines for AI models using Azure and other cloud platforms. Benchmark and evaluate AI/ML services across major cloud providers (Azure, AWS, GCP) for performance, cost, and scalability. What We’re Looking For Strong problem-solving skills and a creative mindset. Proficiency in Python and experience with AI/ML frameworks (e.g., PyTorch, TensorFlow). A self-starter with a strong passion for AI/ML research, automation, and innovation who constantly learns and applies new AI frameworks and tools. Proven ability to work independently and deliver high-impact results. Excellent communication skills for presenting ideas and solutions clearly. IQVIA is a leading global provider of clinical research services, commercial insights and healthcare intelligence to the life sciences and healthcare industries. 
We create intelligent connections to accelerate the development and commercialization of innovative medical treatments to help improve patient outcomes and population health worldwide. Learn more at https://jobs.iqvia.com

Posted 6 days ago

Apply

4.0 years

4 - 7 Lacs

Thiruvananthapuram

On-site

We are currently seeking a highly skilled and experienced PHP Developer to join our talented team. As a PHP Developer, you will play a crucial role in designing, developing, and maintaining our web-based applications and websites. Your expertise in PHP programming, along with your in-depth knowledge of Git, MySQL, and Redis, will be essential in ensuring the success of our projects. Responsibilities: - Collaborate with the development team to design and implement robust and scalable PHP-based applications. - Utilize Git for version control, ensuring smooth collaboration and efficient code management. - Work with relational databases, primarily MySQL, to handle data storage and retrieval effectively. - Implement Redis for caching and improving application performance. - Collaborate with front-end developers, leveraging your basic knowledge of HTML and CSS to integrate the back-end functionalities seamlessly. - Write clean, efficient, and well-documented code that follows best practices and coding standards. - Troubleshoot and resolve issues in existing PHP applications, ensuring optimal performance and functionality. - Stay up-to-date with the latest PHP trends, tools, and technologies to continuously improve development processes. Requirements: - Minimum 4 years of professional experience as a PHP Developer. - Proven expertise in PHP programming, with a strong portfolio showcasing your previous work. - Solid understanding and experience with Git for version control. - Proficiency in working with MySQL and handling complex database queries. - Familiarity with Redis for caching and performance optimization. - Basic knowledge of HTML and CSS for seamless collaboration with front-end developers. - Strong problem-solving skills and the ability to work independently or as part of a team. - Excellent communication and collaboration skills to effectively work with cross-functional teams.
- Bachelor's degree in Computer Science, Software Engineering, or a related field (preferred but not mandatory). Job Types: Full-time, Permanent Pay: ₹40,000.00 - ₹60,000.00 per month Benefits: Health insurance Leave encashment Paid sick time Provident Fund Work Location: In person
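The Redis caching responsibility in this posting usually follows the cache-aside pattern: check the cache, fall back to the database on a miss, then populate the cache. Sketched here in Python with a plain dict standing in for Redis and a stub query function; the posting's actual stack is PHP with MySQL (e.g., phpredis), but the control flow is identical.

```python
# Cache-aside pattern sketch. A dict stands in for Redis and the
# "database" query is a stub; in this posting's stack it would be
# phpredis + MySQL, but the control flow is the same.
cache = {}
db_hits = 0  # counts how often we fall through to the database

def query_db(user_id):
    """Stub for a database lookup; returns a fake row."""
    global db_hits
    db_hits += 1
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    key = f"user:{user_id}"
    if key in cache:          # cache hit: skip the database
        return cache[key]
    row = query_db(user_id)   # cache miss: load and populate
    cache[key] = row
    return row

first = get_user(42)   # miss: falls through to the database
second = get_user(42)  # hit: served from the cache
```

With real Redis the dict lookup becomes `GET`/`SET` calls with a TTL, and invalidation on writes becomes the main design concern.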

Posted 6 days ago

Apply

0 years

3 - 3 Lacs

Cochin

On-site

We’re offering an exciting opportunity to work on cutting-edge problem-solving using Generative AI and automation technologies. If you're passionate about innovation, thrive in a fast-paced environment, and want to make a tangible impact across teams—this role is for you! What You’ll Do Conduct research and experimentation with state-of-the-art generative AI models, including but not limited to LLMs, diffusion models, and audio/video synthesis models. Explore and evaluate the latest AI technologies to address business use cases, prototype solutions, and present demos Develop and integrate Retrieval-Augmented Generation (RAG) pipelines to enhance the contextual relevance and accuracy of generative model outputs. Apply Prompt Engineering techniques to optimize LLM behavior across diverse tasks and domains. Design and implement scalable, efficient AI-driven solutions tailored to diverse team needs. Collaborate with stakeholders to validate solutions and refine outcomes Design and manage scalable deployment pipelines for AI models using Azure and other cloud platforms. Benchmark and evaluate AI/ML services across major cloud providers (Azure, AWS, GCP) for performance, cost, and scalability. What We’re Looking For Strong problem-solving skills and a creative mindset. Proficiency in Python and experience with AI/ML frameworks (e.g., PyTorch, TensorFlow). A self-starter with a strong passion for AI/ML research, automation, and innovation who constantly learns and applies new AI frameworks and tools. Proven ability to work independently and deliver high-impact results. Excellent communication skills for presenting ideas and solutions clearly. IQVIA is a leading global provider of clinical research services, commercial insights and healthcare intelligence to the life sciences and healthcare industries. 
We create intelligent connections to accelerate the development and commercialization of innovative medical treatments to help improve patient outcomes and population health worldwide. Learn more at https://jobs.iqvia.com

Posted 6 days ago

Apply

5.0 years

0 Lacs

Hyderābād

On-site

Project description Information and Document Systems is a global technology change and delivery organization comprising nearly 150 individuals located in Switzerland, Poland, Singapore, the United Kingdom, and the United States. We provide archiving and retrieval solutions to all business divisions, focusing on supporting Legal, Regulatory, and Operational functions. The platform has a complex architecture based on C-Mod, Unix, Oracle, OpenText, and SAM-FS. Responsibilities Developing and improving application infrastructure; interacting with developers and production support; configuring and improving existing infrastructure; simplifying the release process; investigations and research; programming and coding activities; taking part in planning and risk assessment; active participation in a distributed agile process. Skills Must have: 5+ years of working experience. Good to have: CMOD/IBM Content Manager OnDemand Server and Client; extensive production support and maintenance experience; advanced scripting (Perl, PowerShell, Shell); scripting experience in Java and Unix (nice to have); excellent communication, coordination, and troubleshooting skills; basic mainframe knowledge. Nice to have: Agile experience. Other Languages English: B2 Upper Intermediate Seniority Senior Hyderabad, IN, India Req. VR-105767 System Administration BCM Industry 16/07/2025

Posted 6 days ago

Apply

1.0 - 3.0 years

3 - 7 Lacs

Hyderābād

On-site

About the job Our Team: Sanofi Global Hub (SGH) is an internal Sanofi resource organization based in India and is set up to centralize processes and activities to support Specialty Care, Vaccines, General Medicines, CHC, CMO, and R&D, Data & Digital functions. MedHub strives to be a strategic and functional partner for tactical deliveries to Medical, HEVA, and Commercial organizations in Sanofi, globally. Main responsibilities: The overall purpose and main responsibilities are listed below: Work closely with the HEVA Senior Director, business partners, therapy area leads, eBuy managers, external vendors, and finance colleagues to lead coordination and management of various activities Work with TA Leads to conduct monthly/bimonthly/quarterly budget reviews and ensure full oversight; identify US budget needs; coordinate with cross-functional teams to operationalise strategic plan, brand plan and prioritization; identify areas of support needed; develop and maintain TA project tracker; track and update on monthly worksheet issues flagged Work with business partners to perform monthly review of budget plans and actuals; complete North America (NA) intake form and update budget tracker with SOW details, shift funds on tracker to align with finance; provide updates on pending contracts, identify any challenges and follow-up on invoicing issues; follow-up on year-end cross charges (by November) to make sure they hit the books; coordinate and assist to set-up Ad-board meetings Coordinate with ITA team for organising external meetings and activities such as GRFs, FMVs, tiering, honoraria tables, cost sheets, etc.
Coordinate with finance colleagues to communicate any discrepancies between finance trackers and BPs' budget tracker, cost centre mistakes, and any amendments as needed Work with vendors on contract support to onboard vendors; ensure final approved SOW is processed via NA Intake form, follow-up on contract and PO, forward PO to vendor; support with contract renewals or amendments; follow-up on PV training; monitor invoices to be processed; schedule meetings and prepare meeting minutes Responsible for project management support to the scientific writer and HEVA ensuring the end-to-end effective project delivery of the designated publication/medical education and HEVA deliverable across all phases Initiate submission (as required), amend submission based on comments (as required). Support the writer with the development of a scope of work; build plan and schedule for agreement with the internal stakeholders Arrange key internal and external stakeholder meetings. Track the delivery of activities (including managing issues and risks) and support follow-up Support project specialist in tracking GD requests and ensuring they are executed on time Support project specialist in maintaining and tracking editorial and QC requests for publications and other deliverables, and make sure stipulated timelines are met Support project specialist in required submission, compliance, and approval activities, and ensure compliance with publication processes and use of publication management tools Support project specialist in the management of the assigned publication or medical education in line with the agreed budget.
Support and manage as required external spend tracking (e.g., approvals, purchase orders, and goods received) Support project specialist/HEVA team in fetching articles from RightFind or relevant scientific databases Support project specialist/HEVA team members in sourcing full texts of paid articles from other sources and managing their procurement processes as per the standard guidelines Support project specialist/HEVA team in downloading and categorisation of booklets and information, respectively, from various congress websites as per the eligibility criteria Support adherence to associated compliance-related activities and approvals (with internal stakeholder taking accountability for compliance) Update as required with approval/compliance tools (e.g., PromoMats, NAYA) Support project specialist in managing end-to-end processes through Datavision, Matrix, RightFind, Ebuy, PrismAccess, etc. Support project specialist in collaborating effectively with stakeholders: scientific communication global and/or local teams/HEVA teams; and medical content enhancement teams People: (1) Work closely with project specialist to maintain effective relationships with the end stakeholders (medical scientific community) within the allocated GBU and product – with an end objective to develop education and communication content as per requirement; (2) Actively support and develop MedHub operations associates; (3) Work closely with project specialist to ensure new technologies are leveraged; (4) Work closely with project specialist to support vendor engagements, advisory boards, scientific events activities & external expert contracts; (5) Support in initiating the contracting process and related documents within defined timelines; and (6) Collaborate with global stakeholders for project planning and setting up the timelines and maintaining budget Performance: (1) Ensure publication/medical education materials (slide decks, abstracts, posters, manuscripts, newsletters, pub alert, etc.)
are delivered, stored as per agreed timelines and quality; (2) Develop tools, technology, and processes to constantly improve quality and productivity; (3) Support MedHub HEVA team in timely review and audit of all DataVision entries; (4) Support MedHub HEVA team in all operations-related projects; (5) Perform quality checks for HEVA documents; (6) Maintain HEVA Smartsheet/project trackers as needed and make sure all entries are up to date for all projects; and (7) Support global HEVA team to maintain trackers and facilitate retrieval of required information for business reviews as needed Process: (1) Work closely with project specialist to support delivery of projects in terms of resourcing, coordination, quality, timeliness, efficiency, and high technical standards for deliveries made by the medical writing group, including scientific documents and clinical/medical reports; (2) Contribute to overall quality enhancement by ensuring high scientific standards for the output produced by the medical writing group; and (3) Secure adherence to compliance procedures and internal/operational risk controls in accordance with any and all applicable regulatory standards Stakeholder: Work closely with scientific communication/medical content enhancement/HEVA teams to ensure the end-to-end effective project delivery of the designated publication/medical education deliverables About you Experience: 1-3 years post-qualification experience; project management experience required; medical communication/pharma experience desirable.
Soft skills: Stakeholder management; writing/communication skills; external engagement and ability to work independently and within a team environment Technical skills: As applicable (including but not limited to therapeutic area/domain knowledge exposure; publication submission; and/or project management) Education: Advanced degree in life sciences/pharmacy/similar discipline or medical degree Languages: Excellent knowledge of English language (spoken and written) Pursue Progress, discover Extraordinary Better is out there. Better medications, better outcomes, better science. But progress doesn’t happen without people – people from different backgrounds, in different locations, doing different roles, all united by one thing: a desire to make miracles happen. So, let’s be those people. At Sanofi, we provide equal opportunities to all regardless of race, colour, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, or gender identity. Watch our ALL IN video and check out our Diversity, Equity and Inclusion actions at sanofi.com!

Posted 6 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies