0 years
1 - 1 Lacs
Gaya
On-site
As a Video Editor, you will be responsible for editing and assembling raw footage into polished, professional videos. You’ll work closely with our creative team to ensure videos meet brand guidelines, storytelling standards, and delivery timelines.
Key Responsibilities:
✅ Edit and assemble raw footage into engaging and visually appealing videos
✅ Work primarily with Adobe Premiere Pro (and other Adobe Creative Suite tools if required)
✅ Add sound effects, background music, text animations, motion graphics, and color correction
✅ Collaborate with the creative team to develop new concepts and maintain a consistent visual style
✅ Organize and manage video files efficiently for easy retrieval and archiving
✅ Stay updated with the latest editing trends and techniques to improve video quality
Job Types: Full-time, Fresher, Internship
Pay: ₹10,000.00 - ₹15,000.00 per month
Work Location: In person
Posted 5 days ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Summary: We are seeking a skilled AutoCAD Designer with experience in embroidery design to join our creative team. The candidate will be responsible for preparing accurate and detailed embroidery layouts using AutoCAD, ensuring alignment with design concepts, production requirements, and fabric specifications.
Key Responsibilities:
• Develop intricate embroidery designs using AutoCAD software.
• Create accurate garment patterns and layouts for production.
• Collaborate with designers to translate hand-drawn concepts into digital formats.
• Ensure design feasibility for production, considering fabric types and embroidery techniques.
• Modify and refine designs based on feedback and production requirements.
• Maintain organized digital files and records for easy retrieval.
What We’re Looking For:
✔️ Proficiency in AutoCAD with experience in embroidery and garment design.
✔️ Strong attention to detail and accuracy in pattern-making.
✔️ Ability to work efficiently under deadlines and adapt to revisions.
Posted 5 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Hi... We are looking for a Data Scientist with Artificial Intelligence & Machine Learning expertise.
Work Location: Only Hyderabad
Exp Range: 4 to 8 Yrs
Design and build intelligent agents (RAG, task agents, decision bots) for use in credit, customer service, or analytics workflows (Finance domain). Deploy and manage AI models in production using AWS AI/ML services (SageMaker, Lambda, Bedrock, etc.). Work with Python and SQL to preprocess, transform, and analyze large volumes of structured and semi-structured data. Collaborate with data scientists, data engineers, and business stakeholders to convert ML prototypes into scalable services. Automate the lifecycle of AI/ML solutions using MLOps practices (model versioning, CI/CD, model monitoring). Leverage vector databases (like Pinecone or OpenSearch) and foundation models to build conversational or retrieval-based solutions. Ensure proper governance, logging, and testing of AI solutions in line with RBI and internal guidelines.
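As an illustration of the retrieval step in the kind of RAG workflow this role describes, here is a minimal Python sketch. It is not the employer's stack: TF-IDF stands in for a hosted embedding model (one might instead call a model served via SageMaker or Bedrock), and the documents and query are invented, purely so the example runs locally.

```python
# Minimal RAG retrieval sketch: embed documents, find the top-k matches
# for a query, and fold them into a prompt for a downstream LLM call.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Credit policy: loans above 10 lakh require two approvals.",
    "Customer service SLA: respond to escalations within 4 hours.",
    "Analytics note: monthly churn is computed on active accounts.",
]

vectorizer = TfidfVectorizer().fit(documents)   # stand-in for an embedding model
doc_vectors = vectorizer.transform(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

context = "\n".join(retrieve("How fast must we answer an escalation?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
print(prompt)  # this assembled prompt would then be sent to the hosted LLM
```

In a production agent the vectorizer would be replaced by a real embedding model and the document store by a vector database such as Pinecone or OpenSearch; the retrieve-then-prompt shape stays the same.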
Posted 5 days ago
6.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description
Role Proficiency: This role requires proficiency in data pipeline development, including coding and testing data pipelines for ingesting, wrangling, transforming, and joining data from various sources. Must be skilled in ETL tools such as Informatica, Glue, Databricks, and DataProc, with coding expertise in Python, PySpark, and SQL. Works independently and has a deep understanding of data warehousing solutions including Snowflake, BigQuery, Lakehouse, and Delta Lake. Capable of calculating costs and understanding performance issues related to data solutions.
Outcomes
Act creatively to develop pipelines and applications by selecting appropriate technical options, optimizing application development, maintenance, and performance using design patterns and reusing proven solutions. Interpret requirements to create optimal architecture and design, developing solutions in accordance with specifications. Document and communicate milestones/stages for end-to-end delivery. Code adhering to best coding standards; debug and test solutions to deliver best-in-class quality. Perform performance tuning of code and align it with the appropriate infrastructure to optimize efficiency. Validate results with user representatives, integrating the overall solution seamlessly. Develop and manage data storage solutions, including relational databases, NoSQL databases, and data lakes. Stay updated on the latest trends and best practices in data engineering, cloud technologies, and big data tools. Influence and improve customer satisfaction through effective data solutions.
Measures Of Outcomes
Adherence to engineering processes and standards. Adherence to schedule/timelines. Adherence to SLAs where applicable. Number of defects post-delivery. Number of non-compliance issues. Reduction of recurrence of known defects. Quick turnaround of production bugs. Completion of applicable technical/domain certifications. Completion of all mandatory training requirements. Efficiency improvements in data pipelines (e.g., reduced resource consumption, faster run times). Average time to detect, respond to, and resolve pipeline failures or data issues. Number of data security incidents or compliance breaches.
Outputs Expected
Code Development: Develop data processing code independently, ensuring it meets performance and scalability requirements. Define coding standards, templates, and checklists. Review code for team members and peers.
Documentation: Create and review templates, checklists, guidelines, and standards for design processes and development. Create and review deliverable documents including design documents, architecture documents, infrastructure costing, business requirements, source-target mappings, test cases, and results.
Configuration: Define and govern the configuration management plan. Ensure compliance within the team.
Testing: Review and create unit test cases, scenarios, and execution plans. Review the test plan and test strategy developed by the testing team. Provide clarifications and support to the testing team as needed.
Domain Relevance: Advise data engineers on the design and development of features and components, demonstrating a deeper understanding of business needs. Learn about customer domains to identify opportunities for value addition. Complete relevant domain certifications to enhance expertise.
Project Management: Manage the delivery of modules effectively.
Defect Management: Perform root cause analysis (RCA) and mitigation of defects. Identify defect trends and take proactive measures to improve quality.
Estimation: Create and provide input for effort and size estimation for projects.
Knowledge Management: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team.
Release Management: Execute and monitor the release process to ensure smooth transitions.
Design Contribution: Contribute to the creation of high-level design (HLD), low-level design (LLD), and system architecture for applications, business components, and data models.
Customer Interface: Clarify requirements and provide guidance to the development team. Present design options to customers and conduct product demonstrations.
Team Management: Set FAST goals and provide constructive feedback. Understand team members' aspirations and provide guidance and opportunities for growth. Ensure team engagement in projects and initiatives.
Certifications: Obtain relevant domain and technology certifications to stay competitive and informed.
Skill Examples
Proficiency in SQL, Python, or other programming languages used for data manipulation. Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF. Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery). Conduct tests on data pipelines and evaluate results against data quality and performance specifications. Experience in performance tuning of data processes. Expertise in designing and optimizing data warehouses for cost efficiency. Ability to apply and optimize data models for efficient storage, retrieval, and processing of large datasets. Capacity to clearly explain and communicate design and development aspects to customers. Ability to estimate time and resource requirements for developing and debugging features or components.
Knowledge Examples
Knowledge of various ETL services offered by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/DataFlow, Azure ADF, and ADLF. Proficiency in SQL for analytics, including windowing functions. Understanding of data schemas and models relevant to various business contexts. Familiarity with domain-related data and its implications. Expertise in data warehousing optimization techniques. Knowledge of data security concepts and best practices. Familiarity with design patterns and frameworks in data engineering.
Additional Comments
Data Engineering Role Summary: Skilled Data Engineer with strong Python programming skills and experience in building scalable data pipelines across cloud environments. The candidate should have a good understanding of ML pipelines and basic exposure to GenAI solutioning. This role will support large-scale AI/ML and GenAI initiatives by ensuring high-quality, contextual, and real-time data availability.
Key Responsibilities: Design, build, and maintain robust, scalable ETL/ELT data pipelines in AWS/Azure environments. Develop and optimize data workflows using PySpark, SQL, and Airflow. Work closely with AI/ML teams to support training pipelines and GenAI solution deployments. Integrate data with vector databases like ChromaDB or Pinecone for RAG-based pipelines. Collaborate with solution architects and GenAI leads to ensure reliable, real-time data availability for agentic AI and automation solutions. Support data quality, validation, and profiling processes.
Key Skills & Technology Areas:
Programming & Data Processing: Python (4–6 years), PySpark, Pandas, NumPy
Data Engineering & Pipelines: Apache Airflow, AWS Glue, Azure Data Factory, Databricks
Cloud Platforms: AWS (S3, Lambda, Glue), Azure (ADF, Synapse), GCP (optional)
Databases: SQL/NoSQL, Postgres, DynamoDB, Vector databases (ChromaDB, Pinecone) – preferred
ML/GenAI Exposure (basic): Hands-on with Pandas, scikit-learn; knowledge of RAG pipelines and GenAI concepts
Data Modeling: Star/Snowflake schema, data normalization, dimensional modeling
Version Control & CI/CD: Git, Jenkins, or similar tools for pipeline deployment
Other Requirements:
Strong problem-solving and analytical skills. Flexible to work on fast-paced and cross-functional priorities. Experience collaborating with AI/ML or GenAI teams is a plus. Good communication and a collaborative, team-first mindset. Experience in Telecom, E-Commerce, or Enterprise IT Operations is a plus.
Skills: ETL, Big Data, PySpark, SQL
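For readers unfamiliar with the ingest/wrangle/transform/join pattern this role centers on, a minimal PySpark sketch follows. The bucket paths, column names, and join key are hypothetical, not taken from the posting.

```python
# Sketch of an ETL pipeline step: ingest raw data, wrangle and transform
# it, join against a dimension table, and write curated output.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl_sketch").getOrCreate()

# Ingest: raw orders plus a customer dimension (paths are hypothetical).
orders = spark.read.json("s3a://example-bucket/raw/orders/")
customers = spark.read.parquet("s3a://example-bucket/dim/customers/")

# Wrangle and transform: deduplicate, parse timestamps, derive a partition key.
cleaned = (
    orders
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
)

# Join against the customer dimension and write partitioned curated output.
enriched = cleaned.join(customers, on="customer_id", how="left")
(enriched.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3a://example-bucket/curated/orders/"))

spark.stop()
```

In practice a job like this would be scheduled from Airflow or an equivalent orchestrator, with the paths and schema supplied by configuration rather than hard-coded.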
Posted 5 days ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Hello, FCM, part of FCTG, is one of the world’s largest travel management companies and a trusted partner for national and multinational companies. With a 24/7 reach in 97 countries, FCM’s flexible technology anticipates and solves client needs, supported by experts who provide in-depth local knowledge and duty of care as part of the ultimate personalised business travel experience. As part of the ASX-listed Flight Centre Travel Group, FCM delivers the best market-wide rates, unique added-value benefits, and exclusive solutions. Winner of the World's Leading Travel Management Company Award at the WTM for nine consecutive years (2011–2019), FCM is constantly transforming the business of travel through its empowered and accountable people who deliver 24/7 service and are available online and offline. FCM has won the coveted Great Place to Work certification for the fifth time! FCM Travel India is one of India’s Top 100 Great Mid-size Workplaces 2024 and the Best in Professional Services. A leader in the travel tech space, FCM has proprietary client solutions. FCM provides specialist services via FCM Consulting and FCM Meetings & Events.
Key Responsibilities
Design and develop AI solutions that address real-world business challenges, ensuring alignment with strategic objectives and measurable outcomes. Work with large-scale structured and unstructured datasets, leveraging modern data frameworks, tools, and platforms. Establish and maintain robust standards for data security, privacy, and regulatory compliance across all AI and data workflows. Collaborate closely with cross-functional teams to gather requirements, share insights, and deliver high-impact solutions. Monitor and maintain production AI systems to ensure continued accuracy, scalability, and reliability over time. Stay up to date with the latest advancements in AI, machine learning, and data engineering, and apply them where relevant. Write clean, well-documented, and maintainable code, and actively contribute to team best practices and technical documentation.
You'll Be Perfect For The Role If You Have
Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field. Strong programming skills in Python (preferred) and experience with AI/ML libraries such as TensorFlow, PyTorch, scikit-learn, or Hugging Face. Experience designing and deploying machine learning models and AI systems in production environments. Familiarity with modern data platforms and cloud services (e.g., Azure, AWS, GCP), including AutoML and MLflow. Proficiency with data processing tools and frameworks (e.g., Spark, Pandas, SQL) and working with both structured and unstructured data. Experience with Generative AI technologies, including prompt engineering, vector databases, and RAG (Retrieval-Augmented Generation) pipelines. Solid understanding of data security, privacy, and compliance principles, with experience implementing these in real-world projects. Strong problem-solving skills and the ability to translate complex business problems into technical solutions. Excellent communication and collaboration skills, with the ability to work effectively across technical and non-technical teams. Experience with version control (e.g., Git) and agile development practices. Enthusiasm for learning and applying emerging technologies in AI and machine learning.
Work Perks! - What’s in it for you: FCTG is renowned internationally for having amazing perks and an even better culture. We understand that our people are our most valuable asset.
It is the passion and dedication of our teams that keep the company on top of the industry ladder. It’s also why we offer some great employee benefits and perks outside of the norm. You will be rewarded with a competitive market salary. You will also be equipped with relevant training courses and tools to set you up for success, with endless career advancement and job opportunities all over the world.
Market-aligned remuneration structure and a highly competitive salary
Fun and energetic culture: At the heart of everything we do at FCM is a desire to have fun and be yourself
Work-life balance: We believe in “No Leave = No Life”, so have your own travel adventures with paid annual leave
Great place to work: Recognized as a top workplace for 5 consecutive years, a testament to our commitment towards our people
Wellbeing focus: We take care of our employees with comprehensive medical coverage, accidental insurance, and term insurance for the well-being of our people
Paternity leave: We ensure that you can spend quality time with your growing family
Travel perks: You'll have access to plenty of industry discounts to ensure you continue to broaden your horizons
A career, not a job: We believe in our people's Brightness of Future. As a high-growth company, you will have the opportunity to advance your career in any direction you choose, whether that is locally or globally
Reward & recognition: Celebrate the success of yourself and others at our regular Buzz Nights and at the annual Global Gathering - You'll have to experience it to believe it!
Love for travel: We were founded by people who wanted to travel and want others to do the same. That passion is something you can’t miss in our people or service.
We value you... #FCMIN Flight Centre Travel Group is committed to creating an inclusive and diverse workplace that supports your unique identity to create better, safer experiences for everyone. We encourage you to come as you are; to foster inclusivity and collaboration. We celebrate you.
Who We Are... Since our beginning, our vision has always been to open up the world for those who want to see. As a global travel retailer, our people come from all different backgrounds, and our connections spread to the far reaches of the globe - 20+ countries and counting! Together, we are a family (we call ourselves Flighties). We offer genuine opportunities for people to grow and evolve. We embrace new experiences, we celebrate the wins, seize all opportunities, and empower all of our people to find their Brightness of Future. We encourage you to DREAM BIG through collaboration and innovation, and make sure you are supported to make incredible ideas a reality. Together, we deliver quality, innovative solutions that delight our customers and achieve our strategic priorities. Irreverence. Ownership. Egalitarianism
Posted 5 days ago
4.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Company - AppTestify
Work Location - Remote
Experience - 4+ years
Key Responsibilities: Build, train, and validate machine learning models for prediction, classification, and clustering to support Next Best Action (NBA) use cases. Conduct exploratory data analysis (EDA) on both structured and unstructured data to extract actionable insights and identify behavioral drivers. Design and deploy A/B testing frameworks and build pipelines for model evaluation and continuous monitoring. Develop vectorization and embedding pipelines using models like Word2Vec and BERT to enable semantic understanding and similarity search. Implement Retrieval-Augmented Generation (RAG) workflows to enrich recommendations by integrating internal and external knowledge bases. Collaborate with cross-functional teams (engineering, product, marketing) to deliver data-driven Next Best Action strategies. Present findings and recommendations clearly to technical and non-technical stakeholders.
Required Skills & Experience: Strong programming skills in Python, including libraries like pandas, NumPy, and scikit-learn. Practical experience with text vectorization and embedding generation (Word2Vec, BERT, SBERT, etc.). Proficiency in prompt engineering and hands-on experience in building RAG pipelines using LangChain, Haystack, or custom frameworks. Familiarity with vector databases (e.g., PostgreSQL with pgvector, FAISS, Pinecone, Weaviate). Expertise in Natural Language Processing (NLP) tasks such as NER, text classification, and topic modeling. Sound understanding of supervised learning, recommendation systems, and classification algorithms. Exposure to cloud platforms (AWS, GCP, Azure) and containerization tools (Docker, Kubernetes) is a plus.
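A hedged sketch of the embedding-plus-similarity step behind a Next Best Action recommender, using the sentence-transformers (SBERT) library the posting names. The model name, candidate actions, and user context are illustrative, not AppTestify's actual pipeline.

```python
# Embed candidate actions and a user context, then pick the action whose
# embedding is most similar to the context (cosine similarity).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # one common SBERT choice

actions = [
    "Offer a loyalty discount on the next purchase",
    "Send a tutorial email about unused features",
    "Escalate to a retention specialist",
]
action_embs = model.encode(actions, convert_to_tensor=True)

user_context = "Customer logs in daily but has never opened the reporting module"
query_emb = model.encode(user_context, convert_to_tensor=True)

# Cosine similarity between the context and every candidate action.
scores = util.cos_sim(query_emb, action_embs)[0]
best = int(scores.argmax())
print(f"Next best action: {actions[best]} (score={float(scores[best]):.3f})")
```

A production system would typically pre-compute and index the action embeddings in a vector database rather than encoding them on every request.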
Posted 5 days ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Description
Follow GLP, GDP and Data Integrity practices while working in the laboratory. To perform sampling of raw materials and packaging materials as per SOP. Routine analysis of raw materials, packaging materials, in-process and finished products, cleaning swabs and stability samples. To perform preventive maintenance and calibration of the analytical instruments as per the calibration schedule. Preparation of technical documents like SOPs, specifications, COA, STP, validation protocols/reports, transfer protocols/reports etc. Archival and retrieval of system documents, i.e. instrument logbooks, registers etc. Reference samples management. Validation and verification of analytical methods. Reporting of analytical data and submission for review and release. Maintain hygienic conditions in the respective department. Column Management: numbering, issuance and usage log maintenance of project-specific columns. Standard Management: numbering, issuance and usage log maintenance of project-specific standards. Review of logbooks. Ensure use of Personal Protective Equipment, attend EHS training, send waste to the concerned person and comply with EHS requirements.
Qualifications
Master's in Pharmacy or Science
About Us
In the three decades of its existence, Piramal Group has pursued a twin strategy of both organic and inorganic growth. Driven by its core values, Piramal Group steadfastly pursues inclusive growth, while adhering to ethical and values-driven practices.
Equal employment opportunity
Piramal Group is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, ethnicity, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetics, or other applicable legally protected characteristics. We base our employment decisions on merit considering qualifications, skills, performance, and achievements. We endeavor to ensure that all applicants and employees receive equal opportunity in personnel matters, including recruitment, selection, training, placement, promotion, demotion, compensation and benefits, transfers, terminations, and working conditions, including reasonable accommodation for qualified individuals with disabilities as well as individuals with needs related to their religious observance or practice.
About The Team
Piramal Pharma Solutions (PPS) is a Contract Development and Manufacturing Organization (CDMO) offering end-to-end development and manufacturing solutions across the drug life cycle. We serve our customers through a globally integrated network of facilities in North America, Europe, and Asia. This enables us to offer a comprehensive range of services including drug discovery solutions, process & pharmaceutical development services, clinical trial supplies, commercial supply of APIs, and finished dosage forms. We also offer specialized services such as the development and manufacture of highly potent APIs, antibody-drug conjugations, sterile fill/finish, peptide products & services, and potent solid oral drug products. PPS also offers development and manufacturing services for biologics including vaccines, gene therapies, and monoclonal antibodies, made possible through Piramal Pharma Limited’s investment in Yapan Bio Private Limited.
Our track record as a trusted service provider with experience across varied technologies makes us a partner of choice for innovators and generic companies worldwide.
Posted 5 days ago
0.0 - 5.0 years
0 Lacs
Mohali, Punjab
On-site
About the Role
We are seeking a highly skilled and motivated Senior Java Developer with 5–8 years of experience to join our engineering team. The ideal candidate will have strong backend development expertise, a deep understanding of microservices, and a solid grasp of agile methodologies. This is a hands-on role focused on designing, developing, and maintaining scalable applications in a collaborative, fast-paced environment.
Key Responsibilities
Design, develop, test, and maintain scalable Java-based applications using Java 8 or higher and Spring Boot. Build RESTful APIs and microservices with clean, maintainable code. Work with SQL and NoSQL databases to manage data storage and retrieval effectively. Collaborate with cross-functional teams in an Agile/Scrum environment. Write unit and integration tests using JUnit and Mockito, and apply Test-Driven Development (TDD) practices. Manage source code with Git and build applications using Maven. Create and manage Docker containers for development and deployment. Troubleshoot and debug production issues in Unix/Linux environments. Participate in code reviews and ensure adherence to best practices.
Must-Have Qualifications
5–8 years of hands-on experience with Java 8 or higher. Strong experience with Spring Boot and microservices architecture. Proficiency in Git, Maven, and Unix/Linux. Solid understanding of SQL and NoSQL databases. Experience working in Agile/Scrum teams. Hands-on experience with JUnit, Mockito, and TDD. Working knowledge of Docker and containerized deployments.
Good to Have
Experience with Apache Kafka for event-driven architecture. Familiarity with Ansible and/or Terraform for infrastructure automation. Knowledge of Docker Swarm or container orchestration tools. Exposure to Jenkins or other CI/CD tools. Proficiency in Bash scripting for automation and environment setup.
Job Type: Full-time
Pay: From ₹600,000.00 per year
Benefits: Flexible schedule, Health insurance, Life insurance, Provident Fund
Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Required)
Experience: Java: 5 years (Required)
Work Location: In person
Posted 5 days ago
4.0 years
30 - 60 Lacs
Hyderabad, Telangana, India
On-site
The Opportunity for You
We’re seeking a Full-Stack AI Engineer.
Key Responsibilities
Design and develop full-stack applications that integrate LLMs and generative AI capabilities into our core product offerings. Architect and implement scalable frontend solutions using React.js that provide intuitive interfaces for AI-powered features (low-code platforms). Build robust backend services using Python and Node.js (TypeScript) that handle complex AI model interactions and data processing. Create and optimize APIs for efficient communication between frontend applications and AI services. Interact with SQL and NoSQL data infrastructure components to ensure efficient data access, retrieval, and augmentation for AI applications. Collaborate with ML engineers to integrate and optimize AI model deployments. Implement and maintain CI/CD pipelines for both traditional and AI-enabled components (Kubernetes and GPU resources). Ensure high performance and reliability of AI-integrated features across the application stack. Partner with product and business teams to understand requirements and deliver innovative solutions.
Required Qualifications
4+ years of professional experience in full-stack development with React.js and Node.js. Hands-on experience building applications that integrate with LLMs (like GPT models) or other generative AI solutions. Strong understanding of REST APIs and microservices architecture. Proficiency in modern Python, JavaScript/TypeScript, and the associated tooling and ecosystem. Experience with AWS cloud services, particularly in deploying and scaling AI/ML workloads. Solid foundation in system design, data modeling, and RDBMS. Understanding of AI/ML concepts, particularly in NLP and generative AI applications. Experience with multi-tenant SaaS architectures and enterprise system integrations.
Preferred Qualifications
Experience with AI model deployment and optimization in production environments. Proficiency with prompt engineering and LLM fine-tuning concepts. Familiarity with vector databases and embedding-based applications. Experience with real-time data processing using technologies like AWS Kinesis. Background in FinTech or enterprise software development. Demonstrated experience in championing the use of AI tools and practices to enhance development productivity and code quality. MS in Computer Science or related field.
Technical Skills
Frontend: React.js/Next.js, TypeScript, modern CSS frameworks, state management (Redux, Context API), component design and optimization
Backend: Node.js (TypeScript), RESTful API design, database design (SQL and NoSQL), message queues and event streaming, authentication and authorization
AI/ML Integration: LLM APIs and SDK integration, vector databases, prompt engineering, performance optimization for AI services
DevOps: AWS cloud services, Azure AI services, Docker/Kubernetes, CI/CD pipelines, monitoring and logging
Skills: TypeScript, Generative AI, REST APIs, design, AWS, LLM, Node.js, AI, ML, Python, AWS cloud services, NoSQL, Docker, microservices architecture, Kubernetes, CI/CD pipelines, CI/CD, SQL, NLP, React.js, AI/ML concepts
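To make the "APIs between frontend applications and AI services" responsibility concrete, here is a minimal FastAPI sketch in Python (one of the posting's backend languages). The llm_complete stub stands in for a real provider SDK call so the example runs without credentials; the route and field names are hypothetical.

```python
# A thin API layer between a frontend and an AI service: validate the
# request, call the (stubbed) model, and return a stable JSON contract.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    user_id: str
    message: str

def llm_complete(prompt: str) -> str:
    # Placeholder: a real service would call the provider's SDK here.
    return f"(model reply to: {prompt[:40]})"

@app.post("/api/chat")
def chat(req: ChatRequest) -> dict:
    # Keeping provider-specific logic behind one function means the
    # frontend contract stays stable if the model vendor changes.
    return {"user_id": req.user_id, "answer": llm_complete(req.message)}

# Run with: uvicorn app:app --reload  (assuming this file is app.py)
```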
Posted 5 days ago
0 years
0 Lacs
Mahad, Maharashtra, India
On-site
Job Description
Preparation and updating of the Validation Master Plan (VMP). Preparation of process validation and computer system validation protocols and reports. Preparation of product matrix and cleaning validation/verification protocols and reports. Preparation of qualification and requalification protocols and reports for processing equipment/instruments, utilities, and facility. Preparation of area validation protocols and reports. Preparation and review of quality risk assessments. Review of calibration certificates (external/internal). Preparation, issuance, review & archival of BMR/BPR. Batch record storage, retrieval & destruction. Preparation of APQR. Line clearance for manufacturing, packing & dispensing activity. Sampling of bulk and finished goods. Review of production records and finished goods verification. Online observation of process deviations and effective implementation of CAPA. Management of Change Control / CAPA / Incidents. Handling of change control and follow-up for the implementation of changes. To coordinate & maintain change control and deviation records. Complaint handling. Preparation and updating of QA departmental SOPs and loading them in DCS (Document Control System) ENSUR 4.2. To give training as per the training schedule. To coordinate the training program of the company along with HR, including on-the-job training. Documentation management as per SOP. To provide necessary documents/data required by CQA and as per customer requirements. Conduct, monitor and review compliance of the Self Inspection Program. Audit compliance coordination: compile CAPA and prepare responses to audit reports in coordination with the QA Head and technical team. Execution of requirements for food/dietary supplements regulations for the export market (US) – 21 CFR Part 111. Ensuring avoidance of breach of data integrity in the area. Implementation of an effective sanitation programme in the area. Adherence to the requirements of EHS norms. Execution of various initiatives as suggested by corporate functions. To determine internal and external QEHS issues as well as needs and expectations of relevant interested parties and monitor the same.
Qualifications
B. Pharm or M. Pharm
About Us
In the three decades of its existence, Piramal Group has pursued a twin strategy of both organic and inorganic growth. Driven by its core values, Piramal Group steadfastly pursues inclusive growth, while adhering to ethical and values-driven practices.
Equal employment opportunity
Piramal Group is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, ethnicity, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetics, or other applicable legally protected characteristics. We base our employment decisions on merit considering qualifications, skills, performance, and achievements. We endeavor to ensure that all applicants and employees receive equal opportunity in personnel matters, including recruitment, selection, training, placement, promotion, demotion, compensation and benefits, transfers, terminations, and working conditions, including reasonable accommodation for qualified individuals with disabilities as well as individuals with needs related to their religious observance or practice.
About The Team Piramal Pharma Solutions (PPS) is a Contract Development and Manufacturing Organization (CDMO) offering end-to-end development and manufacturing solutions across the drug life cycle. We serve our customers through a globally integrated network of facilities in North America, Europe, and Asia. This enables us to offer a comprehensive range of services including drug discovery solutions, process & pharmaceutical development services, clinical trial supplies, commercial supply of APIs, and finished dosage forms. We also offer specialized services such as the development and manufacture of highly potent APIs, antibody-drug conjugations, sterile fill/finish, peptide products & services, and potent solid oral drug products. PPS also offers development and manufacturing services for biologics including vaccines, gene therapies, and monoclonal antibodies, made possible through Piramal Pharma Limited’s investment in Yapan Bio Private Limited. Our track record as a trusted service provider with experience across varied technologies makes us a partner of choice for innovators and generic companies worldwide.
Posted 5 days ago
0 years
0 Lacs
India
Remote
We are seeking a highly skilled Back-End Developer with expertise in Salesforce Service Cloud Voice and Amazon Web Services (AWS) Connect to join our dynamic team. This role offers an opportunity to work on cutting-edge voice and contact center solutions, collaborating with front-end developers, architects, and stakeholders to build scalable, high-performance back-end systems. You will be responsible for designing server-side logic, developing APIs, optimizing databases, and ensuring system security. If you are passionate about cloud-based telephony solutions, enjoy working in a fast-paced, collaborative environment, and want to drive meaningful digital transformation, we invite you to apply!
Job Role: Salesforce Developer
Job Location: India (Remote)
Key Responsibilities: Architect, develop, and optimize back-end systems for Salesforce Service Cloud Voice and AWS Connect. Design, build, and maintain APIs, microservices, and server-side logic to integrate Service Cloud Voice with AWS services and other enterprise applications. Develop and maintain databases, ensuring efficient data storage, retrieval, and performance optimization. Implement security best practices, including authentication, authorization, and encryption, to safeguard sensitive data. Troubleshoot and debug complex back-end issues related to telephony, call routing, real-time analytics, and integrations. Work closely with front-end developers to integrate UI components with back-end services, ensuring seamless user experiences. Optimize system performance and scalability, leveraging AWS services such as Lambda, S3, DynamoDB, SQS, and API Gateway. Collaborate with DevOps teams to manage CI/CD pipelines, deployment automation, and cloud infrastructure provisioning. Stay updated on the latest Salesforce and AWS technologies, best practices, and industry trends. Document technical specifications, workflows, and system architecture to ensure knowledge sharing and scalability.
Required Qualifications & Skills: Proven experience as a Back-End Developer, working with Salesforce Service Cloud Voice and AWS Connect. Strong programming skills in Node.js, Java, Python, or Apex for developing scalable back-end services. Experience with Salesforce API integrations, including REST, SOAP, and GraphQL APIs. Deep understanding of AWS services such as Lambda, API Gateway, DynamoDB, S3, SQS, SNS, IAM, and CloudFormation. Strong knowledge of database design, query optimization, and performance tuning with SQL and NoSQL databases. Familiarity with DevOps tools (CI/CD pipelines, Docker, Kubernetes, Terraform, Jenkins, Git). Experience with real-time data processing, WebSockets, or event-driven architectures. Excellent problem-solving and debugging skills in distributed cloud environments. Knowledge of OAuth, JWT, and security best practices for authentication and authorization. Ability to write clean, modular, and maintainable code following software engineering best practices. Strong communication skills, ability to work in cross-functional teams, and fluency in English.
Preferred Qualifications (Nice-to-Have): Salesforce Platform Developer I & II or Service Cloud Consultant Certification. Experience with AWS AI/ML services for speech analytics and sentiment analysis. Familiarity with Amazon Lex and Amazon Polly for advanced voice automation. Knowledge of Terraform or CloudFormation for managing cloud infrastructure as code.
Why Join OSF Digital?
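As a rough illustration of the kind of back-end building block this role involves, below is a hedged Python sketch of an AWS Lambda handler that records an Amazon Connect contact event in DynamoDB. The table name and stored fields are hypothetical, and the exact event shape depends on how the contact flow invokes Lambda.

```python
# Lambda handler: persist an Amazon Connect contact event for later
# analytics. Table name and attributes are illustrative only.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("ContactEvents")  # hypothetical table

def handler(event, context):
    # Amazon Connect invokes Lambda with contact data under "Details";
    # the precise payload depends on the contact flow configuration.
    contact = event.get("Details", {}).get("ContactData", {})
    table.put_item(Item={
        "contact_id": contact.get("ContactId", "unknown"),
        "channel": contact.get("Channel", "VOICE"),
        "raw_event": json.dumps(event),
    })
    return {"statusCode": 200}
```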
At OSF Digital , we are committed to creating an inclusive, diverse, and supportive work environment where everyone can thrive. We believe in the power of innovation, teamwork, and continuous learning to drive digital transformation for businesses worldwide. As an equal opportunity employer , we welcome and celebrate diverse perspectives, backgrounds, and experiences. We do not discriminate based on gender identity, race, ethnicity, disability, sexual orientation, age, religion, national origin, or any other protected category. Join us and be part of a global team that shapes the future of digital solutions! Apply today and take your career to the next level!
Posted 5 days ago
0 years
0 Lacs
India
On-site
About Netskope
Today, there's more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.
About The Role
Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault-tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing Multi-Agent & Agentic RAG workflows in production. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault-tolerant solutions to their ever-growing data needs. This is a hands-on, impactful role that will help build an embedded AI CoPilot across the different products at Netskope.
What's In It For You
You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics. Your contributions will have a major impact on our global customer base and across the industry through our market-leading products. You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills.
What You Will Be Doing
Drive the end-to-end development and deployment of CoPilot, an embedded assistant powered by cutting-edge Multi-Agent Workflows. This will involve designing and implementing complex interactions between various AI agents & tools to deliver seamless, context-aware assistance across our product suite. Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments. Apply MLOps & LLMOps best practices to deploy and monitor machine learning models & agentic workflows in production. Implement comprehensive evaluation and observability strategies for the CoPilot. Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security.
Collaborate with cloud architects and security analysts to develop cloud-native security solutions across platforms like AWS, Azure, or GCP. Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications. Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats. Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards. Drive innovation by integrating the latest AI/ML techniques into security products and services. Mentor junior engineers and provide technical leadership across projects.
Required Skills And Experience
AI/ML Expertise: Has built & deployed a multi-agent or agentic RAG workflow in production. Expertise in prompt engineering patterns such as chain of thought, ReAct, and zero/few shot. Experience in LangGraph/AutoGen/AWS Bedrock/Pydantic AI/CrewAI. Strong understanding of MLOps practices and tools (e.g., SageMaker/MLflow/Kubeflow/Airflow/Dagster). Experience with evaluation & observability tools like Langfuse/Arize Phoenix/LangSmith.
Data Engineering: Proficiency in working with vector databases such as PGVector, Pinecone, and Weaviate. Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink).
Software Engineering: Expertise in Python with experience in one other language (C++/Java/Go) for data and ML solution development. Expertise in scalable system design and performance optimization for high-throughput applications. Experience building & consuming MCP clients & servers. Experience with asynchronous programming, including WebSockets, FastAPI, and Sanic.
Good-to-Have Skills And Experience
AI/ML Expertise: Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection. Experience with AI frameworks like PyTorch, TensorFlow and scikit-learn.
Data Engineering: Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing. Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery. Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake. Graph database knowledge is a plus.
Cloud and Security Knowledge: Strong understanding of cloud platforms (AWS, Azure, GCP) and their services. Experience with network security concepts, extended detection and response, and threat modeling.
Netskope is committed to implementing equal employment opportunities for all employees and applicants for employment. Netskope does not discriminate in employment opportunities or practices based on religion, race, color, sex, marital or veteran statuses, age, national origin, ancestry, physical or mental disability, medical condition, sexual orientation, gender identity/expression, genetic information, pregnancy (including childbirth, lactation and related medical conditions), or any other characteristic protected by the laws or regulations of any jurisdiction in which we operate. Netskope respects your privacy and is committed to protecting the personal information you share with us; please refer to Netskope's Privacy Policy for more details.
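To sketch the agent-routing idea at the heart of a multi-agent CoPilot, here is a deliberately framework-free Python toy; production systems would use LangGraph, AutoGen, or similar, typically with an LLM performing the routing. The agent names and the keyword heuristic are invented.

```python
# Toy multi-agent router: classify the query, dispatch to a specialist
# handler, return its answer. Real routers replace the keyword check
# with an LLM classification call and real tool-using agents.
from typing import Callable

def threat_agent(q: str) -> str:
    return f"[threat-analysis] inspecting logs for: {q}"

def policy_agent(q: str) -> str:
    return f"[policy-help] explaining configuration for: {q}"

AGENTS: dict[str, Callable[[str], str]] = {
    "threat": threat_agent,
    "policy": policy_agent,
}

def route(query: str) -> str:
    # Keyword matching keeps this sketch self-contained; an LLM-based
    # router would decide from the query's semantics instead.
    name = "threat" if "alert" in query.lower() else "policy"
    return AGENTS[name](query)

print(route("Why did this alert fire on host 10.0.0.5?"))
```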
Posted 5 days ago
0.0 - 2.0 years
3 - 8 Lacs
Mohali, Punjab
On-site
The Role- As an AI Engineer, you will be responsible for building and optimizing AI-first solutions that power BotPenguin’s conversational and Agentic capabilities. You will work on LLM integrations, NLP pipelines, and machine learning models, while collaborating with cross-functional teams to deliver intelligent experiences at scale. This is a high-impact role that combines engineering, research, and deployment skills to solve real-world problems using artificial intelligence.
What you need for this role-
Education: Bachelor's or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related discipline.
Experience: 2–5 years of experience working in AI/ML or related software engineering roles.
Technical Skills: Strong proficiency in Python and libraries such as scikit-learn, PyTorch, TensorFlow, Transformers (Hugging Face). Hands-on experience with LLMs (OpenAI, Claude, LLaMA) and building AI agents using API integrations. Experience working with NLP tasks (intent classification, text generation, embeddings, summarization). Familiarity with vector databases like Pinecone, FAISS, Elastic Vector DB. Understanding of prompt engineering, RAG (Retrieval-Augmented Generation), and embedding generation. Proficiency in building and deploying ML models via Docker/Kubernetes or cloud services like AWS/GCP. Experience with version control systems (GitLab/GitHub) and working in Agile teams.
Soft Skills: Strong analytical thinking and problem-solving capabilities. Passion for research, innovation, and applying AI to real-world use cases. Excellent communication skills and the ability to collaborate across departments. Attention to detail with a focus on model accuracy, explainability, and performance.
What you will be doing- Design, build, and optimize AI-powered chatbot features and virtual agents using state-of-the-art models. Collaborate with the Product, Backend, and UI teams to integrate intelligent workflows into the BotPenguin platform. Build, evaluate, and fine-tune language models and NLP components tailored to user use cases. Implement context-aware chat solutions using embeddings, vector stores, and retrieval mechanisms. Create internal tools for prompt testing, versioning, and debugging AI responses. Monitor model performance metrics such as latency, hallucination rate, and user satisfaction. Explore research papers and open-source innovations, and contribute to rapid experimentation. Write clean, modular, and testable code along with clear documentation for future scalability. Guide and review code written by junior team members. Any other development-related tasks as required for BotPenguin.
Top reasons to work with us- Be part of a cutting-edge AI startup driving innovation in chatbot automation. Work with a passionate and talented team that values knowledge-sharing and problem-solving. Growth-oriented environment with ample learning opportunities. Exposure to top-tier global clients and projects with real-world impact. Flexible work hours and an emphasis on work-life balance. A culture that fosters creativity, ownership, and collaboration.
Job Type: Full-time
Pay: ₹300,000.00 - ₹800,000.00 per year
Benefits: Flexible schedule, Health insurance, Provident Fund
Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Required)
Experience: AI: 2 years (Required)
Work Location: In person
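As a small, hedged example of one NLP task the role lists (intent classification), the following scikit-learn sketch trains a toy classifier; the utterances and intent labels are invented for illustration.

```python
# Toy intent classifier: TF-IDF features + logistic regression.
# A chatbot would use the predicted intent to pick a response flow.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "what is the price of the pro plan", "how much does it cost",
    "I cannot log in to my account", "password reset is failing",
    "talk to a human agent please", "connect me with support staff",
]
labels = ["pricing", "pricing", "login", "login", "handoff", "handoff"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["why does sign in keep failing"])[0])  # expected: "login"
```

Production systems often swap the TF-IDF features for sentence embeddings or fine-tuned transformer classifiers, but the train/predict shape is the same.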
Posted 5 days ago
6.0 years
0 Lacs
India
Remote
Ekyam.ai: Integrating Systems, Unleashing Intelligence - Join Our India Expansion!
Are you ready to solve complex integration challenges and build the next generation of AI-driven retail technology? Ekyam.ai, headquartered in New York, US, is expanding globally and establishing its new team in India! We are looking for talented individuals like you to be foundational members of our Indian presence. Ekyam.ai is developing a groundbreaking AI-native middleware platform that connects disparate retail systems (ERP, OMS, WMS, POS, etc.) and creates a unified, real-time, vectorized data layer. We enable intelligent automation and transform how retailers leverage their data by integrating cutting-edge AI capabilities.
Role
We are seeking an experienced AI Developer (4–6 years) skilled in applying Large Language Models (LLMs) and building AI-driven applications to join our growing team. A significant part of this role involves designing and developing AI Agents within our platform, with an initial focus on integrating external LLM APIs (e.g., OpenAI, Anthropic, Google) via sophisticated prompt engineering and RAG techniques into these agents, built using Python + FastAPI. You will architect the logic for these agents, enabling them to perform complex tasks within our e-commerce and retail data orchestration pipelines. Furthermore, as Ekyam.ai evolves, this role offers the potential to grow into customizing and deploying LLMs in-house, so adaptability and a strong foundation in ML/LLM principles are key.
Key Responsibilities
AI Agent Development: Design, develop, test, and maintain the core logic for AI Agents within FastAPI services. Orchestrate agent tasks, manage state, interact with platform data/workflows, and integrate LLM capabilities.
LLM API Integration & Prompt Engineering: Integrate with external LLM provider APIs. Design, implement, and rigorously test effective prompts for diverse retail-specific tasks (generation, Q&A, summarization).
RAG Implementation: Implement and optimize Retrieval-Augmented Generation (RAG) patterns using vector databases to provide relevant context to LLM API calls made by agents.
FastAPI Microservice Development: Build and maintain the scalable FastAPI microservices that host AI Agent logic and handle interactions with LLMs and other platform components in a containerized environment (Docker, Kubernetes).
Data Processing for AI: Prepare and preprocess data required for effective prompt context, RAG retrieval, and potentially for future fine-tuning tasks.
Collaboration & Future Adaptation: Work with cross-functional teams to deliver AI features. Stay updated on LLM advancements and be prepared to learn and contribute to potential future in-house LLM fine-tuning and deployment efforts.
Required Skills & Qualifications
3–6 years of hands-on experience in software development with a strong focus on AI/ML application development. Demonstrable experience integrating and utilizing external LLM APIs (e.g., OpenAI, Anthropic, Google) in applications. Proven experience with prompt engineering techniques. Strong Python programming skills. Practical experience building and deploying RESTful APIs using FastAPI. Experience designing and implementing application logic for AI-driven features or agents. Understanding and practical experience with RAG concepts and vector databases (Pinecone, FAISS, etc.).
Solid understanding of core Machine Learning concepts and familiarity with frameworks like PyTorch, TensorFlow, or Hugging Face (important for understanding models and future adaptation). Familiarity with cloud platforms (AWS, GCP, or Azure) and containerization (Docker, Kubernetes) for application deployment. Solid problem-solving skills and clear communication abilities. Experience working effectively in an agile environment. Willingness and capacity to learn and adapt towards future work involving deeper LLM customization and deployment. Bachelor's or Master's degree in Computer Science, AI, or a related field. Ability to work independently and collaborate effectively in a remote setting.
Preferred Qualifications
Experience with frameworks like LangChain or LlamaIndex. Experience with observability and debugging tools for LLM applications, such as LangSmith. Experience with graph databases (e.g., Neo4j) and query languages (e.g., Cypher). Experience with MLOps practices, applicable to both current application monitoring and future model lifecycle management. Experience optimizing API call performance (latency/cost) or model inference. Knowledge of AI security considerations and bias mitigation.
Why Join Ekyam.ai?
Be a foundational member of our new India team! This role offers a unique blend: build intelligent AI Agents leveraging cutting-edge external LLMs today, while positioning yourself at the forefront of our future plans for deeper AI customization. You'll gain expertise across the AI application stack (APIs, RAG, Agents, potential future MLOps) and collaborate within a vibrant global team shaping the future of AI in e-commerce. We offer competitive compensation that values your current skills and growth potential.
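A hypothetical sketch of the RAG prompt-assembly step the role describes: retrieved chunks are folded into a templated prompt before the external LLM API call. The retrieve() stub stands in for a Pinecone/FAISS lookup, and the corpus snippets are invented.

```python
# Assemble a grounded prompt for an external LLM call: retrieve context,
# template it, and return the final prompt string.
def retrieve(query: str, k: int = 3) -> list[str]:
    # Stand-in for a vector-database lookup so the sketch is self-contained.
    corpus = {
        "orders": "Orders sync from the OMS to the ERP every 15 minutes.",
        "inventory": "WMS stock counts are reconciled nightly at 02:00.",
    }
    return list(corpus.values())[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(f"- {c}" for c in retrieve(question))
    return (
        "You are a retail-data assistant. Answer strictly from the context.\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

print(build_prompt("When does inventory reconcile?"))
# The resulting string would be sent to the chosen LLM provider's API.
```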
Posted 5 days ago
0.0 - 3.0 years
8 - 20 Lacs
Hyderabad, Telangana
On-site
Location: Hyderabad, India
Company: Radise India
Type: Full-Time | On-site
Experience: 2–5 years (Mid-Level); open to strong junior candidates with relevant project experience
About the Role: RADISE India is a growing civil engineering firm in the Asia Pacific (APAC) market, using innovative IoT-based sensor technology aimed at providing infrastructure owners, builders, and operators the vital information on structural behavior to help them reduce the uncertainties associated with material properties and structural capacity. RADISE India provides a wide range of innovative engineering consulting services for infrastructure owners, builders, contractors, vendors, and operators. Our technology-driven products and services are for infrastructure builders, owners & operators, government bodies, and general contracting firms.
Key Responsibilities: Evaluate and deploy open-source LLMs (Gemma, DeepSeek, LLaMA 3, Mistral, etc.) for use in document Q&A, natural language to SQL translation, and AI-driven insights. Implement document ingestion pipelines (PDF, Word, Excel, images) using tools like LangChain, Haystack, or LlamaIndex for chunking, embedding, and vector search. Set up and manage vector databases (e.g., pgvector). Develop RAG pipelines to connect embeddings from Azure Blob Storage files and PostgreSQL data. Build and test AI agents capable of routing user queries to either blob-based document retrieval or database querying (SQL generation). Engineer and evaluate prompts by implementing industry-standard prompt engineering techniques. Work closely with our backend team (TypeScript, Express, PostgreSQL) to expose AI capabilities as services or APIs. Benchmark cost, performance, and latency of different models and infrastructures. Support future transition to secure/compliant deployment models (FedRAMP, NIST 800-171). Collaborate with the product team to refine chat UX and identify high-impact use cases.
Required Skills and Qualifications: Bachelor's or Master’s in Computer Science, AI/ML, Data Science, or related field. 2+ years of hands-on experience working with LLMs, embeddings, or NLP pipelines. Solid understanding of open-source AI stacks (Transformers, LangChain, LlamaIndex, etc.). Familiarity with vector databases and semantic search principles. Experience working with Python and optionally some TypeScript/Node.js. Comfortable working with SQL and relational databases like PostgreSQL. Strong understanding of AI infrastructure (Docker, Linux, GPU/CPU optimization, cloud deployment – Azure preferred). Strong prompt engineering skills are a must. Ability to read and implement research papers or GitHub repos for LLM-based applications.
Preferred (Nice-to-Have): Experience deploying or fine-tuning OpenAI, LLaMA, Mistral, Falcon, or similar open-source models. Familiarity with Azure OpenAI, Azure Cognitive Search, or OpenAI APIs for benchmarking. Experience with OCR tools and document parsing pipelines. Exposure to civil engineering or construction domain data is a plus.
Why Join Us? Work on cutting-edge AI applications for infrastructure and civil engineering – real-world impact. Direct mentorship from leadership (with deep engineering expertise) and cross-functional collaboration. Opportunity to experiment with and deploy models at scale across enterprise SaaS. Growth potential to lead AI initiatives across multiple product lines (SmartInfra Hub, SmartCompose.AI, etc.).
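To illustrate the pgvector-backed retrieval step mentioned above, here is a hedged Python sketch using psycopg2 with the pgvector adapter. The DSN, table, and column names are hypothetical; query_embedding is assumed to be a NumPy array produced by whatever embedding model is in use.

```python
# Nearest-neighbor lookup over embedded document chunks stored in
# PostgreSQL with the pgvector extension.
import psycopg2
from pgvector.psycopg2 import register_vector

conn = psycopg2.connect("dbname=docs user=app")  # hypothetical DSN
register_vector(conn)  # lets NumPy arrays bind to vector columns

def top_chunks(query_embedding, k: int = 5) -> list[str]:
    """Return the k document chunks nearest to the query embedding."""
    with conn.cursor() as cur:
        cur.execute(
            "SELECT content FROM doc_chunks "
            "ORDER BY embedding <-> %s LIMIT %s",  # <-> is L2 distance
            (query_embedding, k),
        )
        return [row[0] for row in cur.fetchall()]
```

The retrieved chunks would then feed the RAG prompt, while a separate agent path handles the natural-language-to-SQL route the posting describes.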
Job Type: Full-time Pay: ₹800,000.00 - ₹2,000,000.00 per year Benefits: Health insurance Paid sick time Paid time off Provident Fund Schedule: Day shift Monday to Friday Experience: AI Engineer: 3 years (Preferred) Location: Hyderabad, Telangana (Required) Work Location: In person
Posted 5 days ago
0.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Role Overview
Join our dynamic team as a Backend + DevOps Engineer. You'll architect and scale document processing pipelines that handle thousands of financial documents daily, ensuring high availability and cost efficiency.
What You'll Do
Build scalable async processing pipelines for document classification, extraction, and validation. Optimize cloud infrastructure costs while maintaining 99.9% uptime for document processing workflows. Design and implement APIs for document upload, processing status, and results retrieval. Manage Kubernetes deployments with autoscaling based on document processing load. Implement monitoring and observability for complex multi-stage document workflows. Optimize database performance for high-volume document metadata and processing results. Build CI/CD pipelines for safe deployment of processing algorithms and business rules.
Technical Requirements
Must Have: 5+ years backend development (Python or Go). Strong experience with async processing (Celery, Temporal, or similar). Docker containerization and orchestration. Cloud platforms (AWS/GCP/Azure) with cost optimization experience. API design and development (REST/GraphQL). Database optimization (MongoDB, PostgreSQL). Production monitoring and debugging.
Nice to Have: Kubernetes experience. Experience with document processing or ML pipelines. Infrastructure as Code (Terraform/CloudFormation). Message queues (SQS, RabbitMQ, Kafka). Performance optimization for high-throughput systems.
Job Type: Full-time
Experience: Python: 5 years (Required); DevOps: 5 years (Preferred)
Location: Bangalore, Karnataka (Required)
Work Location: In person
Speak with the employer +91 9258692828
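A minimal Celery sketch of the kind of async classify, extract, and validate pipeline described above; the broker URL, task bodies, and document ID are illustrative only.

```python
# Async document pipeline as a Celery chain: each task runs on a worker
# and its return value is passed as the first argument of the next task.
from celery import Celery, chain

app = Celery("docpipe",
             broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/1")

@app.task
def classify(doc_id: str) -> dict:
    return {"doc_id": doc_id, "kind": "invoice"}   # stubbed classifier

@app.task
def extract(meta: dict) -> dict:
    meta["fields"] = {"total": "1299.00"}          # stubbed extraction
    return meta

@app.task
def validate(meta: dict) -> dict:
    meta["valid"] = "total" in meta["fields"]
    return meta

workflow = chain(classify.s("doc-42"), extract.s(), validate.s())
result = workflow.delay()   # returns an AsyncResult for status polling
```

An API layer would enqueue the chain on upload and expose the AsyncResult state through the processing-status endpoint the role describes.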
Posted 5 days ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Minimum qualifications:
Bachelor's degree or equivalent practical experience.
8 years of experience in software development and with data structures/algorithms.
5 years of experience testing and launching software products, and 3 years of experience with software design and architecture.
Experience working in the networking domain.

Preferred qualifications:
Master's degree or PhD in Engineering, Computer Science, or a related technical field.
3 years of experience in a technical leadership role leading project teams and setting technical direction.
3 years of experience working in an organization involving cross-functional or cross-business projects.
Experience in networking, specifically campus networks, data center, and WAN networking.

About the job
Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google's needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities and be enthusiastic to take on new problems across the full stack as we continue to push technology forward. With your technical expertise you will manage project priorities, deadlines, and deliverables. You will design, develop, test, deploy, maintain, and enhance software solutions.

Google's global network (GGN) infrastructure is already one of the largest in the world and it is growing very rapidly. GGN's mission is to "Enable the world's imagination with dependable, universal connectivity." Our team works on innovative technology for building, measuring, reporting on, and managing the health of Google's worldwide network, which includes the customer-facing B2 network as well as Google's private Wide Area Network (WAN), B4. We design distributed software systems to solve complex network problems related to Install/Turn-Up/Activation/Capacity-expansion/Capacity-reduction/Decommissioning of different elements of the Google network.

Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities
Write and test product or system development code.
Participate in, or lead design reviews with peers and stakeholders to decide on available technologies.
Review code developed by other developers and provide feedback to ensure best practices (e.g., style, guidelines, checking code in, accuracy, testability, and efficiency).
Contribute to existing documentation or educational content and adapt content based on product/program updates and user feedback.
Triage product or system issues and debug/track/resolve by analyzing the sources of issues and the impact on hardware, network, or service operations and quality.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Posted 5 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Location: Bengaluru (Hybrid)

Role Summary
We're seeking a skilled Data Scientist with deep expertise in recommender systems to design and deploy scalable personalization solutions. This role blends research, experimentation, and production-level implementation, with a focus on content-based and multi-modal recommendations using deep learning and cloud-native tools.

Responsibilities
Research, prototype, and implement recommendation models: two-tower, multi-tower, and cross-encoder architectures
Utilize text/image embeddings (CLIP, ViT, BERT) for content-based retrieval and matching
Conduct semantic similarity analysis and deploy vector-based retrieval systems (FAISS, Qdrant, ScaNN)
Perform large-scale data preparation and feature engineering with Spark/PySpark and Dataproc
Build ML pipelines using Vertex AI, Kubeflow, and orchestration on GKE
Evaluate models using recommender metrics (nDCG, Recall@K, HitRate, MAP) and offline evaluation frameworks
Drive model performance through A/B testing and real-time serving via Cloud Run or Vertex AI
Address cold-start challenges with metadata and multi-modal input
Collaborate with engineering on CI/CD, monitoring, and embedding lifecycle management
Stay current with trends in LLM-powered ranking, hybrid retrieval, and personalization

Required Skills
Python proficiency with pandas, polars, numpy, scikit-learn, TensorFlow, PyTorch, transformers
Hands-on experience with deep learning frameworks for recommender systems
Solid grounding in embedding retrieval strategies and approximate nearest neighbor search
GCP-native workflows: Vertex AI, Dataproc, Dataflow, Pub/Sub, Cloud Functions, Cloud Run
Strong foundation in semantic search, user modeling, and personalization techniques
Familiarity with MLOps best practices: CI/CD, infrastructure automation, monitoring
Experience deploying models in production using containerized environments and Kubernetes

Nice to Have
Ranking models knowledge: DLRM, XGBoost, LightGBM
Multi-modal retrieval experience (text + image + tabular features)
Exposure to LLM-powered personalization or hybrid recommendation systems
Understanding of real-time model updates and streaming ingestion
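To ground the retrieval responsibilities above, here is a minimal sketch of embedding-based candidate retrieval with FAISS. The random vectors stand in for embeddings a trained two-tower model would actually produce, and the dimension is an assumption.

```python
# Minimal sketch of candidate retrieval with FAISS.
# Assumptions: 128-dim embeddings from a trained two-tower model;
# random vectors here are placeholders for real learned embeddings.
import faiss
import numpy as np

d = 128
item_emb = np.random.rand(10_000, d).astype("float32")
faiss.normalize_L2(item_emb)        # so inner product == cosine similarity

index = faiss.IndexFlatIP(d)        # exact search; consider IndexHNSWFlat at scale
index.add(item_emb)

user_emb = np.random.rand(1, d).astype("float32")
faiss.normalize_L2(user_emb)
scores, ids = index.search(user_emb, 10)   # top-10 candidates for re-ranking
print(ids[0], scores[0])
```

Offline, the retrieved lists would then be scored with metrics like Recall@K and nDCG (scikit-learn's ndcg_score covers the latter) before promoting a model to an A/B test.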
Posted 5 days ago
4.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description:
As a Data Engineer, you will be responsible for designing, implementing, and maintaining our data infrastructure to support our rapidly growing business needs. The ideal candidate will have expertise in Apache Iceberg, Apache Hive, Apache Hadoop, SparkSQL, YARN, HDFS, MySQL, data modeling, data warehousing, Spark architecture, and SQL query optimization. Experience with Apache Flink, PySpark, automated data quality testing, and data migration is considered a plus. Knowledge of at least one cloud stack (AWS or Azure) for data engineering, to create data jobs and workflows and schedule them for automation, is mandatory.

Job Responsibilities & Requirements:
Bachelor's degree in Computer Science, Information Technology, or a related field; Master's degree preferred.
4-5 years of experience working as a Data Engineer.
Mandatory experience in PySpark development for big data processing.
Strong proficiency in Apache Iceberg, Apache Hive, Apache Hadoop, SparkSQL, YARN, HDFS, data modeling, and data warehousing.
Core PySpark development, SQL query optimization, and performance tuning to ensure optimal data retrieval and processing.
Experience with Apache Flink and automated data quality testing is a plus.
Knowledge of at least one cloud stack (AWS or Azure) for data engineering, to create data jobs and workflows and schedule them for automation, is mandatory.

Join Xiaomi India Technology and be part of a team that is shaping the future of technology innovation. Apply now and embark on an exciting journey with us!
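As a small taste of the PySpark work this role centers on, here is a minimal batch-aggregation sketch. The `warehouse.orders` source table, its columns, and the output table are hypothetical placeholders.

```python
# Minimal sketch of a PySpark batch job: filter, aggregate, write back.
# Assumptions: a Hive/Iceberg-backed catalog; table and column names
# are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-sales-rollup").getOrCreate()

orders = spark.read.table("warehouse.orders")   # hypothetical source table

daily = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .groupBy(F.to_date("created_at").alias("day"))
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)

# Partition by day so downstream queries can prune efficiently.
daily.write.mode("overwrite").partitionBy("day").saveAsTable("warehouse.daily_sales")
```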
Posted 5 days ago
0 years
0 Lacs
India
Remote
Join theprintspace — the world's leading fine art printing company — as our new HR Manager. We support artists and photographers globally through cutting-edge printing, framing, and dropshipping services, with 65+ team members across 4 countries. In this role, you'll take ownership of HR processes across the full employee lifecycle, ensure compliance in the UK, Germany, US, and India, and help shape a people-first culture in a dynamic, creative environment.

Key Responsibilities:
1. Employee Lifecycle Management:
Manage the onboarding and offboarding processes, ensuring smooth transitions for all employees across global branches.
Oversee holiday and sick leave monitoring, ensuring accurate record-keeping.
Conduct exit interviews and provide actionable feedback to management to improve employee experience.
2. Payroll, Benefits & Compliance:
Collaborate with management to ensure accurate and timely processing of payroll, bonuses, overtime, and sales commissions.
Ensure tax details are updated and correct across all regions, liaising with payroll providers to ensure compliance with local regulations.
Maintain and update commission structures and statements, providing clear communication to staff.
Ensure compliance with employment laws and practices in all operational regions (UK, Germany, US, India).
3. Training & Development:
Assist operational managers in the creation and maintenance of training materials, ensuring they are accessible in the correct formats.
Coordinate with operational managers to ensure that all staff receive the necessary training for their roles.
Implement and manage a structured annual review and probationary review process.
4. Recruitment & Freelance Resource Management:
Liaise with operational managers who have identified hiring needs to help create job descriptions, and manage job postings and candidate outreach.
Vet candidates, organise first-round interviews, and manage the recruitment process from start to finish.
Help identify and onboard freelance resources as required.
5. Employee Relations & Performance:
Be the main point of contact for employee issues, providing support and guidance as needed.
Manage annual and probationary reviews, ensuring that processes are structured and feedback is constructive.
Create and maintain organisational charts, including job titles and salary levels for all positions.
6. Administrative & Operational Support:
Maintain an up-to-date register of company equipment provided to staff, ensuring proper tracking and retrieval when employees leave.
Help bring structure to company-wide processes, such as performance reviews, training, and employee development programs.

Qualifications:
Proven experience in HR management, preferably within a global or multi-branch company.
Strong understanding of employment laws and payroll practices in the UK, Germany, US, and India.
Excellent communication and interpersonal skills.
Ability to handle sensitive information with discretion and maintain confidentiality.
Strong organisational skills with attention to detail.
Experience in recruitment, employee relations, and performance management.
Ability to work independently and remotely, managing multiple responsibilities across different time zones.
Posted 5 days ago
5.0 - 7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Python Developer
Experience Level: 5-7 Years
Location: Hyderabad

Job Description
We are seeking an experienced Lead Python Developer with a proven track record of building scalable and secure applications, specifically in the travel and tourism industry. The ideal candidate should possess in-depth knowledge of Python, modern development frameworks, and expertise in integrating third-party travel APIs. This role demands a leader who can foster innovation while adhering to industry standards for security, scalability, and performance.

Roles and Responsibilities
Application Development: Architect and develop robust, high-performance applications using Python frameworks such as Django, Flask, and FastAPI.
API Integration: Design and implement seamless integration with third-party APIs, including GDS, CRS, OTA, and airline-specific APIs, to enable real-time data retrieval for booking, pricing, and availability.
Data Management: Develop and optimize complex data pipelines to manage structured and unstructured data, utilizing ETL processes, data lakes, and distributed storage solutions.
Microservices Architecture: Build modular applications using microservices principles to ensure scalability, independent deployment, and high availability.
Performance Optimization: Enhance application performance through efficient resource management, load balancing, and faster query handling to deliver an exceptional user experience.
Security and Compliance: Implement secure coding practices, manage data encryption, and ensure compliance with industry standards such as PCI DSS and GDPR.
Automation and Deployment: Leverage CI/CD pipelines, containerization, and orchestration tools to automate testing, deployment, and monitoring processes.
Collaboration: Work closely with front-end developers, product managers, and stakeholders to deliver high-quality, user-centric solutions aligned with business goals.

Requirements
Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Technical Expertise:
At least 4 years of hands-on experience with Python frameworks like Django, Flask, and FastAPI.
Proficiency in RESTful APIs, GraphQL, and asynchronous programming.
Strong knowledge of SQL/NoSQL databases (PostgreSQL, MongoDB) and big data tools (e.g., Spark, Kafka).
Experience with cloud platforms (AWS, Azure, Google Cloud), containerization (Docker, Kubernetes), and CI/CD tools (e.g., Jenkins, GitLab CI).
Familiarity with testing tools such as PyTest, Selenium, and SonarQube.
Expertise in travel APIs, booking flows, and payment gateway integrations.
Soft Skills:
Excellent problem-solving and analytical abilities.
Strong communication, presentation, and teamwork skills.
A proactive attitude with a willingness to take ownership and perform under pressure.
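To illustrate the API-integration work above, here is a minimal FastAPI sketch of an endpoint that calls out to an upstream availability API asynchronously. The supplier URL, parameter names, and response shape are hypothetical placeholders, not a real GDS contract.

```python
# Minimal sketch of a FastAPI endpoint fronting a third-party
# availability API. Assumptions: the supplier URL and its query
# parameters are illustrative placeholders.
import httpx
from fastapi import FastAPI, HTTPException

app = FastAPI()
SUPPLIER_URL = "https://api.example-gds.com/v1/availability"  # placeholder

@app.get("/flights/{origin}/{destination}")
async def availability(origin: str, destination: str, date: str):
    """Fetch real-time seat availability from an upstream supplier."""
    async with httpx.AsyncClient(timeout=10.0) as client:
        resp = await client.get(
            SUPPLIER_URL,
            params={"from": origin, "to": destination, "date": date},
        )
    if resp.status_code != 200:
        raise HTTPException(status_code=502, detail="Supplier unavailable")
    return resp.json()
```

Using an async client keeps the worker free to serve other requests while waiting on the supplier, which is the main throughput lever when most request time is spent on upstream I/O.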
Posted 5 days ago
0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
About the Role
We are looking for a motivated entry-level engineer to join our team and help build machine learning and Large Language Model (LLM)-based applications. This is a fully on-site position, ideal for someone local who wants to kick-start their career in AI, learn fast, and contribute to real-world projects from day one. This role is challenging and fast-paced, requiring someone who can absorb knowledge quickly, adapt to complex requirements, and execute at high speed. You'll be expected to prototype ideas within a few days to test feasibility, and contribute meaningfully to ML pipelines and LLM-enabled features under the guidance of experienced engineers. If you are a recent graduate driven by curiosity, action-oriented thinking, and a strong work ethic, this is a rare opportunity to accelerate your growth, work hands-on with cutting-edge AI technologies, and make a real impact from day one.

What You Will Do
Build and test workflows and agentic AI applications.
Integrate open-source LLMs such as LLaMA 2, and commercial APIs such as OpenAI's, into backend services using LangChain and other frameworks.
Develop retrieval-augmented generation (RAG) pipelines, including embedding generation, vector store setup, and query orchestration.
Create and maintain prompt templates and toolchains for workflows like multi-turn conversations or auto-tagging of enterprise documents.
Write unit tests and debug issues in LLM-powered features such as semantic search and summarization.
Prototype new ideas rapidly in 1-2 days and iterate with feedback.
Document your work clearly and maintainably.

Required Skills
Strong Python fundamentals.
Prior experience shipping code to production.
Hands-on experience building applications with LLMs like GPT, Claude, and LLaMA.
Understanding of embeddings, tokenization, and vector search.
Exposure to LangChain, LlamaIndex, or similar frameworks.
Comfortable with APIs, JSON, and CLI tools.
Ability to build middleware in Node.js and connect it to a Python backend.
Ability to work fast, prototype quickly, and accept feedback.
Self-driven and persistent problem-solver.

What We Offer
A rare, hands-on opportunity to learn real-world AI/LLM development.
Daily exposure to LLM models and tools.
Direct mentorship from experienced engineers.
Fast-paced, execution-focused culture.
Performance-based growth for driven candidates.
Modest initial compensation with high learning ROI.

How to Apply
Send your resume and a brief note explaining your interest in AI/ML and why this opportunity excites you to careers@bitwiseacademy.com. If you have worked on any LLM projects, GitHub links, demos, or code samples are strongly encouraged. As part of the interview process, you will be asked to implement a small prototype to demonstrate your ability to learn quickly, work independently, and apply AI/LLM concepts in code.
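For a concrete feel of the RAG ingestion work described above, here is a minimal chunking sketch. The window and overlap sizes are illustrative assumptions; production pipelines often split on token or sentence boundaries instead of raw characters.

```python
# Minimal sketch of the chunking step in a RAG ingestion pipeline.
# Assumptions: fixed-size character windows with overlap; sizes are
# illustrative, not tuned values.
def chunk(text: str, size: int = 800, overlap: int = 100) -> list[str]:
    """Split text into overlapping windows so context isn't cut mid-thought."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "LLaMA is a family of open-weight language models. " * 50
pieces = chunk(doc)
print(len(pieces), len(pieces[0]))  # number of chunks, chunk length
```

Each chunk would then be embedded and written to a vector store, with the overlap ensuring that a sentence straddling a boundary is still retrievable from at least one chunk.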
Posted 5 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Sundial
Sundial is a top VC-backed early-stage startup headquartered in the San Francisco Bay Area, US, with a second office in Bengaluru, India. We have raised $23M to build the analytics platform for the AI era. Our founders are industry veterans Chandra Narayanan, previously Chief Data Scientist at Sequoia Capital, and Julie Zhuo, previously VP of Design and Research at Facebook and author of a bestselling management book. We are a small team of top-talent, high-caliber engineers, data scientists, designers, and PMs (currently 49 in India, 5 in the US) and rapidly growing. We are on a mission to help builders make meaningful use of data to fulfill their vision. Sundial automatically diagnoses a product's data to explain the "what" and the "why" to enable faster and better decision making.

Now pull up a chair! We're excited to tell you more about our vision for Sundial! By now, you've probably visited our website and maybe our LinkedIn page and browsed around the profiles of our team members. If you haven't yet, please take a moment to do so. We'll wait 🙂

Okay, now you know at a high level that we are focused on data storytelling. The data space has over $100B in market opportunity ahead of it, and modern Business Intelligence tools are growing at over 15% year over year. We've seen this evolution firsthand. Our co-founders Chandra Narayanan and Julie Zhuo cut their teeth scaling Facebook from a few million college students to billions of people. To make the best decisions possible, companies are investing more and more into understanding their data. And yet, demand far outpaces supply for Data Scientists and Data Platform Engineers who can construct useful narratives out of the growing firehose of raw data, tables, and charts. Currently most data-centric organizations have a large data platform and data science team that builds big data platforms and insights data stores to bring data into dashboards, and manually generates reports to communicate the product story broadly. But a large part of this process can easily be productised.

At Sundial, we're building the Sundial Insights Data Platform. This platform converts raw data in large data warehouses into a universe of deep product insights that product teams—including PMs, data scientists, executives, and engineers—consume easily. This involves a highly scalable, robust distributed data platform that can consistently and repeatably run complex data science and transformation algorithms at cloud scale.

We envision a future where every organization becomes a data-informed organization through our work of:
Productising the diagnostic analysis of yesterday so teams can focus on the strategic bets for tomorrow.
Making data understanding easy and accessible to everyone, not just data scientists.
Surfacing opportunities for growth improvements across segments.
We believe better usage of data leads to better products, and better products lead to better experiences for people.

Responsibilities:
Developing and enhancing the framework for Agents, Prompting, and Retrieval-Augmented Generation (RAG) to build generative AI agents, multi-agent frameworks, and tools for our product.
Integrating and optimising machine learning models into our Sundial Insights Platform.
Building out the infrastructure for using AI platforms or hosting and deploying tuned AI models, ensuring high availability and scalability.
Collaborating with data scientists and other engineers to understand data gaps and improve our generative AI use cases.
Designing and implementing efficient data pipelines to support ML operations and data processing needs.
Working with cutting-edge AI technologies and contributing to the development of new AI-driven features and capabilities.
Designing and building systems and features from scratch at a rapid pace and high quality.

Requirements:
Strong software engineering, computer science, and ML engineering fundamentals, with solid coding and design capabilities.
Experience in ML engineering, MLOps, and backend engineering for hosting and building tools using public AI platforms or open-source ML models.
Familiarity with generative AI techniques, frameworks for agents/prompting/RAG, and model deployment.
Familiarity with libraries like LangGraph, LangChain, LlamaIndex, AutoGen, etc.
Experience with data engineering as part of ML engineering, including building and managing data pipelines.
Familiarity with cloud services such as AWS, GCP, or Azure and tools for MLOps.
Proficiency in programming languages such as Python, Golang, and SQL, and experience with ML frameworks like TensorFlow, PyTorch, etc.
A Bachelor's or Master's in Computer Science or a related field, or equivalent work experience.
At least 3 years of experience working in some of the above areas.
Prior experience working in a fast-paced startup environment.

You will probably like working with us if:
You like the ownership, camaraderie and chaos of a start-up environment - Start-ups are not right for everyone. Things move quickly and change frequently. Start-ups haven't "made it" yet. We have to convince customers we are valuable enough to them. We must be scrappy and flexible. Everyone will wear lots of hats. But: if you have future aspirations of being an entrepreneur or leader, you'll find few better learning grounds. You'll learn by doing. You'll be given a ton of trust and responsibility. You'll see very transparently how we operate and make decisions. Your work will absolutely matter to the success of our company.
You value learning and have a growth mindset - Sundial is founded on the idea that slope is far more important than intercept. We are a learning environment, and all of us have something to teach and learn from each other. We invest heavily in learning sessions, sharing insights, and reflecting on our growth.
You're interested in understanding how companies grow, and how data plays a role - Unlocking the secrets of data is our bread and butter. How do successful companies grow? How do different types of businesses create value for users in an economically scalable way? If you find this area to be as fascinating as we do, that's awesome, because you're going to become an expert in this domain. :)

Benefits:
🤑 Competitive salary & options package - A rewarding compensation structure that includes competitive pay and equity, ensuring your contributions are valued.
🌏 Global culture - Collaborate with diverse teams across San Francisco and Bangalore, gaining exposure to international perspectives.
🏝️ Unlimited vacation days - A trust-based policy encouraging you to recharge and return at your best.
🍽️ Food in the Office - Enjoy daily lunch at the office, with the freedom to choose what you want to eat.

Interested? We'd love to hear from you - apply now!
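To make the agent-framework responsibility above concrete, here is a minimal, framework-free sketch of routing a query between two tools. The keyword heuristic and stub handlers are illustrative assumptions; a production system would typically route with an LLM call or a trained classifier.

```python
# Minimal sketch of tool routing in an agent loop.
# Assumptions: the keyword heuristic and the two stub handlers stand in
# for an LLM router and real retrieval/SQL agents.
def route(query: str) -> str:
    """Pick a tool for the query; real systems use an LLM or classifier here."""
    sql_hints = ("how many", "average", "count", "sum", "trend")
    return "sql" if any(h in query.lower() for h in sql_hints) else "docs"

def answer(query: str) -> str:
    tool = route(query)
    if tool == "sql":
        return f"[sql-agent] would generate and run SQL for: {query!r}"
    return f"[doc-agent] would retrieve and summarize documents for: {query!r}"

print(answer("How many weekly active users did we have in March?"))
print(answer("Summarize the onboarding funnel design doc"))
```

The same structure generalizes to multi-agent setups: each tool becomes a node, and the router's decision determines which node handles the turn.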
Posted 5 days ago
2.0 years
0 Lacs
Pendurthi, Andhra Pradesh, India
On-site
This job is with Pfizer, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly.

Use Your Power for Purpose
A career with us is about discovering breakthroughs that change patients' lives. You will be part of bringing those therapies to people all over the world, driving the industry forward, and making a positive difference. Whatever your role, you will discover that amazing things are possible. As a member of the Global Supply division, you will have a direct impact on improving patients' lives while working at Pfizer. Your dedication and commitment will be instrumental in helping Pfizer achieve new milestones and make a significant impact on patients worldwide.

What You Will Achieve
In this role, you will:
Represent your organizational unit on administrative matters, recommending, interpreting, and implementing internal policies and procedures.
Perform a variety of administrative tasks across different functional areas to enhance business efficiency.
Support services such as event planning, customer service, publications, and technical writing/editing.
Actively participate in team process improvements and collaborate by sharing experiences.
Manage your time and professional growth, taking accountability for results and prioritizing workflows.
Utilize skills and knowledge to complete tasks, understand their relation to other processes, and participate in process improvement teams.
Execute digital campaigns promptly, following content plans developed with Marketing and Medical teams, and support special projects and new digital promotion models.
Assist marketing teams during campaign execution, monitoring, optimizing, and managing reports, focusing on user experience.
Maintain local documentation as required by legislation, including archiving, tracking, and retrieval, and coordinate digital platform management with regional or local support.
Provide analytical insights to support functional decisions, monitor performance through KPIs, and ensure compliance with norms, policies, and procedures.

Here Is What You Need (Minimum Requirements)
High School Diploma or GED with at least 2 years of experience.
Experience in marketing, digital marketing, or commercial functions.
Strong interpersonal skills.
Keen eye for detail.
Ability to manage time and prioritize tasks effectively.
Experience with administrative tasks and process improvement.
Ability to work under moderate supervision and follow established procedures.

Bonus Points If You Have (Preferred Requirements)
Ability to solve routine problems and convey issues constructively.
Understanding of both pharma industry and scientific academic research environments.
Knowledge of commercial or business analytics processes.
Ability to make basic decisions with an understanding of the consequences.
Ability to work collaboratively in a team environment.
Proficiency in using digital platforms and tools.

Work Location Assignment: On Premise
Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates.
Support Services
Posted 5 days ago