5.0 years
0 Lacs
India
On-site
This role is for one of Weekday's clients. Min Experience: 5 years | Job Type: full-time

We are looking for a skilled and motivated Full Stack Engineer with strong expertise in ReactJS for front-end development and Django for back-end services. The ideal candidate should also have hands-on experience with AWS cloud services and CI/CD workflows. This role involves designing scalable REST APIs that integrate deep learning inference scripts and deploying applications in a cloud-native environment.

Key Responsibilities
- Develop and maintain user-centric, responsive web applications using ReactJS.
- Build and optimize scalable REST APIs using Django and Django REST Framework (DRF).
- Integrate Python-based deep learning model inference scripts into back-end services.
- Manage and deploy full-stack applications using AWS services such as EC2, S3, Lambda, and RDS.
- Optimize application performance and ensure cross-platform responsiveness.
- Write clean, modular, and maintainable code while adhering to industry best practices.
- Implement data protection and application-level security protocols.
- Conduct debugging, testing, and performance profiling to ensure application quality.
- Use Git for version control across the development lifecycle.

Requirements
- 3-5 years of professional experience in full-stack development.
- Proficient in ReactJS, with experience using hooks, state management, and component-based design.
- Strong command of Django and Django REST Framework for building APIs and back-end logic.
- Experience integrating Python scripts for ML/DL inference within scalable systems.
- Solid understanding and practical experience with AWS infrastructure, including EC2, S3, Lambda, and monitoring tools.
- Familiarity with CI/CD pipelines, Docker, and automated deployment processes.
- Strong grasp of REST principles, API design, and third-party API integration.
- Proficient in using Git and collaborative development workflows.
- Experience with MySQL or similar relational databases.
- Strong analytical and troubleshooting abilities with a collaborative mindset.

Key Skills
- Frontend: ReactJS, JavaScript, TypeScript
- Backend: Python, Django, Django REST Framework
- DevOps/Cloud: AWS (EC2, Lambda, S3), Docker, CI/CD pipelines
- Other: REST API design, Git, MySQL, Solution Architecture
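The core integration this role describes (a REST endpoint wrapping a deep learning inference script) can be sketched as below. All names and the toy "model" are illustrative stand-ins; a real implementation would put the view logic behind a Django REST Framework view and load an actual trained model.

```python
import json

# Hypothetical stand-in for a deep learning inference script; in practice
# this would load a trained PyTorch or TensorFlow model.
def run_inference(features):
    """Toy 'model': returns a score and label for a numeric feature vector."""
    score = sum(features) / max(len(features), 1)
    return {"label": "positive" if score > 0.5 else "negative",
            "score": round(score, 3)}

# The logic a DRF endpoint would delegate to: validate the request payload,
# call the inference script, and serialize the result as JSON.
def predict_view(request_body: str) -> str:
    payload = json.loads(request_body)
    features = payload.get("features", [])
    if not all(isinstance(x, (int, float)) for x in features):
        return json.dumps({"error": "features must be numeric"})
    return json.dumps(run_inference(features))

print(predict_view('{"features": [0.9, 0.8, 0.7]}'))
```

Keeping the inference call behind a plain function like this makes it easy to unit-test the model integration separately from the HTTP layer.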
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Job Title: Python GenAI Developer
Location: Remote

About the Role: We’re seeking a skilled Python GenAI Developer to build and scale Generative AI solutions using LLMs, diffusion models, and modern ML frameworks.

Key Responsibilities:
- Build Python apps leveraging GenAI models and integrate LLM APIs (OpenAI, Cohere, Anthropic).
- Fine-tune open-source models using Hugging Face.
- Collaborate with ML teams to create training/inference pipelines.
- Design prompt strategies and ensure engineering best practices (CI/CD, testing).
- Optimize models for latency/memory.
- Translate business needs into scalable solutions.
- Implement RAG pipelines (LangChain, LlamaIndex), LLM evals (Prompt Flow, Langfuse), and observability tools (Galileo, PromptLayer).
- Ensure app reliability, performance, and cost-efficiency.

Must-Have Qualifications:
- Proficient in Python, OOP, REST, Git, Docker, and CI/CD.
- Strong experience with LLMs, prompt engineering, LangChain, LlamaIndex, FastAPI.
- Familiarity with PyTorch, Transformers, Hugging Face, RAG strategies, and cloud deployment (AWS, Azure, GCP).
- Knowledge of LLMOps (monitoring, latency optimization).
- Skilled in building and evaluating ML/NLP models with human-in-the-loop feedback.

Good-to-Have:
- Vector DBs (FAISS, Pinecone), agentic workflows (LangGraph, AutoGen).
- Real-time streaming AI apps.
- Open-source GenAI contributions.
- Front-end knowledge: React, Streamlit, Gradio.

Soft Skills: Strong analytical, collaborative, and problem-solving skills. Proactive learner with excellent communication.
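The RAG pattern mentioned above reduces to "retrieve the most relevant context, then prepend it to the prompt." A minimal dependency-free sketch, using bag-of-words vectors as a stand-in for the sentence embeddings and vector DB (FAISS, Pinecone) a production pipeline would use:

```python
import math

# Toy embedding: bag-of-words term counts (a real pipeline would call an
# embedding model and store vectors in a vector database).
def embed(text):
    vec = {}
    for tok in text.lower().split():
        vec[tok] = vec.get(tok, 0) + 1
    return vec

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Rank documents by similarity to the query and keep the top k."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "invoices are processed within five business days",
    "the API rate limit is 100 requests per minute",
]
context = retrieve("what is the api rate limit", docs)[0]
# The retrieved context is then prepended to the LLM prompt:
prompt = f"Answer using this context: {context}\nQuestion: what is the api rate limit"
print(context)
```

Frameworks like LangChain and LlamaIndex package exactly this retrieve-then-prompt loop, plus chunking, caching, and evals.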
Posted 2 weeks ago
5.0 - 7.0 years
0 Lacs
Kochi, Kerala, India
On-site
We are seeking a highly skilled Senior Machine Learning Engineer with expertise in Deep Learning, Large Language Models (LLMs), and MLOps/LLMOps to design, optimize, and deploy cutting-edge AI solutions. The ideal candidate will have hands-on experience developing and scaling deep learning models, fine-tuning LLMs (e.g., GPT, Llama), and implementing robust deployment pipelines for production environments.

Responsibilities

Model Development & Fine-Tuning:
- Design, train, fine-tune, and optimize deep learning models (CNNs, RNNs, Transformers) for NLP, computer vision, or multimodal applications.
- Fine-tune and adapt Large Language Models (LLMs) for domain-specific tasks (e.g., text generation, summarization, semantic similarity).
- Experiment with RLHF (Reinforcement Learning from Human Feedback) and other alignment techniques.

Deployment & Scalability (MLOps/LLMOps):
- Build and maintain end-to-end ML pipelines for training, evaluation, and deployment.
- Deploy LLMs and deep learning models in production environments using frameworks like FastAPI, vLLM, or TensorRT.
- Optimize models for low-latency, high-throughput inference (e.g., quantization, distillation).
- Implement CI/CD workflows for ML systems using tools like MLflow and Kubeflow.

Monitoring & Optimization:
- Set up logging, monitoring, and alerting for model performance (drift, latency, accuracy).
- Work with DevOps teams to ensure scalability, security, and cost-efficiency of deployed models.

Required Skills & Qualifications:
- 5-7 years of hands-on experience in Deep Learning, NLP, and LLMs.
- Strong proficiency in Python, PyTorch, TensorFlow, Hugging Face Transformers, and LLM frameworks.
- Experience with model deployment tools (Docker, Kubernetes, FastAPI).
- Knowledge of MLOps/LLMOps best practices (model versioning, A/B testing, canary deployments).
- Familiarity with cloud platforms (AWS, GCP, Azure).

Preferred Qualifications:
- Contributions to open-source LLM projects.
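The quantization technique listed under inference optimization comes down to mapping floats onto a small integer range via a scale and zero-point. A minimal sketch of the affine INT8 arithmetic (the idea behind toolchain features such as PyTorch's post-training quantization; the weight values are illustrative):

```python
# Post-training affine quantization: map floats in [lo, hi] onto 0..255.
def quantize(values, num_bits=8):
    qmin, qmax = 0, 2 ** num_bits - 1          # e.g. 0..255 for 8 bits
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (qmax - qmin) or 1.0   # real units per integer step
    zero_point = round(qmin - lo / scale)      # integer that maps back to 0.0
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, s, zp = quantize(weights)
restored = dequantize(q, s, zp)
# Round-trip error stays within one quantization step (the scale).
print(max(abs(a - b) for a, b in zip(weights, restored)))
```

This is why quantization trades a small, bounded accuracy loss for 4x smaller weights and faster integer kernels at inference time.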
Posted 2 weeks ago
5.0 years
0 Lacs
India
Remote
Client Type: US Client
Location: Remote
The hourly rate is negotiable.

About the Role

We’re creating a new certification: Google AI Ecosystem Architect (Gemini & DeepMind) - Subject Matter Expert. This course is designed for technical learners who want to understand and apply the capabilities of Google’s Gemini models and DeepMind technologies to build powerful, multimodal AI applications.

We’re looking for a Subject Matter Expert (SME) who can help shape this course from the ground up. You’ll work closely with a team of learning experience designers, writers, and other collaborators to ensure the course is technically accurate, industry-relevant, and instructionally sound.

Responsibilities

As the SME, you’ll partner with learning experience designers and content developers to:
- Translate real-world Gemini and DeepMind applications into accessible, hands-on learning for technical professionals.
- Guide the creation of labs and projects that allow learners to build pipelines for image-text fusion, deploy Gemini APIs, and experiment with DeepMind’s reinforcement learning libraries.
- Contribute technical depth across activities, from high-level course structure down to example code, diagrams, voiceover scripts, and data pipelines.
- Ensure all content reflects current, accurate usage of Google’s multimodal tools and services.
- Be available during U.S. business hours to support project milestones, reviews, and content feedback.

This role is an excellent fit for professionals with deep experience in AI/ML, Google Cloud, and a strong familiarity with multimodal systems and the DeepMind ecosystem.
Essential Tools & Platforms

A successful SME in this role will demonstrate fluency and hands-on experience with the following:

Google Cloud Platform (GCP)
- Vertex AI (particularly Gemini integration, model tuning, and multimodal deployment)
- Cloud Functions, Cloud Run (for inference endpoints)
- BigQuery and Cloud Storage (for handling large image-text datasets)
- AI Platform Notebooks or Colab Pro

Google DeepMind Technologies
- JAX and Haiku (for neural network modeling and research-grade experimentation)
- DeepMind Control Suite or DeepMind Lab (for reinforcement learning demonstrations)
- RLax or TF-Agents (for building and modifying RL pipelines)

AI/ML & Multimodal Tooling
- Gemini APIs and SDKs (image-text fusion, prompt engineering, output formatting)
- TensorFlow 2.x and PyTorch (for model interoperability)
- Label Studio, Cloud Vision API (for annotation and image-text preprocessing)

Data Science & MLOps
- DVC or MLflow (for dataset and model versioning)
- Apache Beam or Dataflow (for processing multimodal input streams)
- TensorBoard or Weights & Biases (for visualization)

Content Authoring & Collaboration
- GitHub or Cloud Source Repositories
- Google Docs, Sheets, Slides
- Screen recording tools like Loom or OBS Studio

Required skills and experience:
- Demonstrated hands-on experience building, deploying, and maintaining sophisticated AI-powered applications using Gemini APIs/SDKs within the Google Cloud ecosystem, especially in Firebase Studio and VS Code.
- Proficiency in designing and implementing agent-like application patterns, including multi-turn conversational flows, state management, and complex prompting strategies (e.g., Chain-of-Thought, few-shot, zero-shot).
- Experience integrating Gemini with Google Cloud services (Firestore, Cloud Functions, App Hosting) and external APIs for robust, production-ready solutions.
- Proven ability to engineer applications that process, integrate, and generate content across multiple modalities (text, images, audio, video, code) using Gemini’s native multimodal capabilities.
- Skilled in building and orchestrating pipelines for multimodal data handling, synchronization, and complex interaction patterns within application logic.
- Experience designing and implementing production-grade RAG systems, including integration with vector databases (e.g., Pinecone, ChromaDB) and engineering data pipelines for indexing and retrieval.
- Ability to manage agent state, memory, and persistence for multi-turn and long-running interactions.
- Proficiency leveraging AI-assisted coding features in Firebase Studio (chat, inline code, command execution) and using App Prototyping agents or frameworks like Genkit for rapid prototyping and structuring agentic logic.
- Strong command of modern development workflows, including Git/GitHub, code reviews, and collaborative development practices.
- Experience designing scalable, fault-tolerant deployment architectures for multimodal and agentic AI applications using Firebase App Hosting, Cloud Run, or similar serverless/cloud platforms.
- Advanced MLOps skills, including monitoring, logging, alerting, and versioning for generative AI systems and agents.
- Deep understanding of security best practices: prompt injection mitigation (across modalities), secure API key management, authentication/authorization, and data privacy.
- Demonstrated ability to engineer for responsible AI, including bias detection, fairness, transparency, and implementation of safety mechanisms in agentic and multimodal applications.
- Experience addressing ethical challenges in the deployment and operation of advanced AI systems.
- Proven success designing, reviewing, and delivering advanced, project-based curriculum and hands-on labs for experienced software developers and engineers.
- Ability to translate complex engineering concepts (RAG, multimodal integration, agentic patterns, MLOps, security, responsible AI) into clear, actionable learning materials and real-world projects.
- 5+ years of professional experience in AI-powered application development, with a focus on generative and multimodal AI.
- Strong programming skills in Python and JavaScript/TypeScript; experience with modern frameworks and cloud-native development.
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, AI, or a related technical field.
- Ability to explain advanced technical concepts (e.g., fusion transformers, multimodal embeddings, RAG workflows) to learners in an accessible way.
- Strong programming experience in Python and experience deploying machine learning pipelines.
- Ability to work independently, take ownership of deliverables, and collaborate closely with designers and project managers.

Preferred:
- Experience with Google DeepMind tools (JAX, Haiku, RLax, DeepMind Control Suite/Lab) and reinforcement learning pipelines.
- Familiarity with open data formats (Delta, Parquet, Iceberg) and scalable data engineering practices.
- Prior contributions to open-source AI projects or technical community engagement.
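The agent state and memory management called for above often starts as something very simple: a rolling window of conversation turns kept within a context budget. A minimal sketch (the class name, window size, and prompt format are illustrative assumptions, not any particular SDK's API):

```python
# Multi-turn conversation state: keep only the most recent turns so the
# assembled prompt stays within the model's context budget.
class ConversationMemory:
    def __init__(self, max_turns=4):
        self.max_turns = max_turns
        self.turns = []          # list of (role, text) tuples

    def add(self, role, text):
        self.turns.append((role, text))
        # Evict the oldest turns once the window is exceeded.
        self.turns = self.turns[-self.max_turns:]

    def as_prompt(self, system="You are a helpful assistant."):
        lines = [f"system: {system}"]
        lines += [f"{role}: {text}" for role, text in self.turns]
        return "\n".join(lines)

memory = ConversationMemory(max_turns=2)
memory.add("user", "What is Gemini?")
memory.add("assistant", "A family of multimodal models.")
memory.add("user", "Does it handle images?")   # oldest turn is evicted
print(memory.as_prompt())
```

Production agents layer persistence (e.g. Firestore) and summarization on top of this eviction logic, but the core state-management decision, what to keep in the window, is the same.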
Posted 2 weeks ago
3.0 years
0 Lacs
Delhi, India
Remote
Position Overview: We are looking for a passionate and skilled AI/ML Engineer with strong MLOps expertise to join our product engineering team. You will be responsible for developing and deploying scalable machine learning solutions that power our content-commerce-collaboration platform used by creators, brands, and consumers.

Responsibilities:
- Design, train, and optimize ML/DL models for personalization, content understanding, search, recommendations, and fraud detection.
- Develop multimodal pipelines handling video, image, audio, and text inputs.
- Create embedding workflows and integrate with vector databases like Pinecone for real-time inference.
- Architect scalable, asynchronous inference systems using Docker, ECS Fargate, S3, Step Functions, SQS, and Lambda.
- Build CI/CD pipelines using GitHub Actions and AWS CodePipeline for ML lifecycle automation.
- Monitor model performance using Prometheus, Grafana, OpenTelemetry, and CloudWatch.
- Develop reusable infrastructure templates for logging, versioning, and evaluation.
- Secure and manage data using AWS services, including S3, EFS, ElastiCache, and RDS.
- Troubleshoot and resolve complex system issues.

Requirements:
- 3+ years of experience in ML engineering with proven MLOps exposure.
- Proficiency in Python with frameworks like TensorFlow, PyTorch, and Scikit-learn.
- Experience with Docker, AWS (ECS, Fargate, Lambda, S3), and CI/CD pipelines.
- Familiarity with gRPC microservices, REST APIs, and async job processing.
- Hands-on experience with vector databases such as Pinecone or FAISS.
- Strong problem-solving and debugging skills.
- Proactive communicator with the ability to work both independently and collaboratively.

Nice to have: Experience with NestJS or Node.js, streaming media embedding, and observability tools (OpenTelemetry, X-Ray, ELK stack).

What We Offer:
- Opportunity to work on cutting-edge tech across the media, commerce, and social stack.
- Flat hierarchy, fast-paced product innovation cycle.
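The asynchronous inference architecture described above (SQS queue feeding Lambda/Fargate workers) decouples request intake from model execution. A minimal in-process sketch of the same pattern using the standard library; the job payload and the fake embedding function are illustrative stand-ins:

```python
import queue
import threading

# Producers enqueue jobs; a worker drains the queue and writes results to a
# shared store, mirroring the SQS -> worker -> S3/RDS flow at small scale.
jobs = queue.Queue()
results = {}

def fake_embed(job_id, text):
    """Stand-in for a real embedding-model call."""
    results[job_id] = [len(tok) for tok in text.split()]

def worker():
    while True:
        job = jobs.get()
        if job is None:          # sentinel: shut the worker down
            break
        fake_embed(*job)
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()
jobs.put(("job-1", "hello creator economy"))
jobs.put(None)
t.join()
print(results["job-1"])
```

The payoff of the decoupling is the same at any scale: producers never block on inference latency, and workers can be scaled out independently of the API tier.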
- Wellness support, flexible hours, and a remote-first policy.

About Creator Bridge: Creato is a next-generation social commerce platform integrating content, collaboration, and e-commerce. Our mission is to empower creators, brands, and consumers by providing a seamless ecosystem where content meets commerce.
Posted 2 weeks ago
6.0 years
3 - 8 Lacs
Thiruvananthapuram
On-site
Job Requirements

Key Responsibilities

Design and Development:
- Design and implement AI/ML-based applications tailored for embedded and edge hardware.
- Develop end-to-end pipelines for model training, conversion, and deployment.
- Customize neural network architectures for edge-specific applications such as object detection, classification, and segmentation.

Model Optimization and Deployment:
- Port and optimize AI models to meet performance and memory constraints on edge platforms.
- Apply quantization (e.g., INT8), pruning, and layer fusion techniques to improve model efficiency.
- Convert models between various formats such as ONNX, TensorRT, TVM, TFLite, and DRP-AI.

Performance Tuning and Profiling:
- Analyze model bottlenecks and tune for latency, throughput, and power efficiency.
- Run inference performance profiling on hardware targets and iterate for improvements.

Testing and Debugging:
- Validate model accuracy and performance post-optimization.
- Debug and troubleshoot model behavior discrepancies across frameworks and devices.

Documentation and Research:
- Maintain documentation for all model lifecycle stages and optimization steps.
- Stay updated with the latest AI compiler advancements and deployment trends in edge AI.

Work Experience

Must Have:
- Bachelor's/Master’s degree in Computer Science, Electronics, or an AI-related field.
- 6+ years in AI/ML model development with experience in real-world applications.
- Proficient in Python, C++, and deep learning libraries (TensorFlow, PyTorch, Keras).
- Solid understanding of CNNs, FCNs, and their applications in computer vision.
- Practical knowledge of model optimization workflows (quantization, pruning, etc.).
- Experience with ONNX, TVM, TensorRT, DRP-AI, TFLite, OpenCV, etc.
- Experience with deployment on edge devices like Jetson, RZ/V2H, or STM32.
- Strong understanding of constraints (compute, memory, power) in edge environments.

Good to Have:
- Exposure to embedded Linux or RTOS environments.
- Familiarity with low-level model debugging, calibration tools, and inference engines.
- Experience with continuous integration tools such as Git, Jenkins, or similar.
- Understanding of hardware accelerators (GPU, NPU, DRP-AI, etc.).
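Of the compression techniques listed above, magnitude-based pruning is the simplest to illustrate: zero out the weights with the smallest absolute values so edge runtimes can exploit the resulting sparsity. A minimal sketch with illustrative weight values (real toolchains prune per-layer tensors and usually fine-tune afterwards to recover accuracy):

```python
# Magnitude pruning: zero the fraction `sparsity` of weights whose absolute
# values are smallest (ties at the threshold may zero a few extra weights).
def prune(weights, sparsity=0.5):
    k = int(len(weights) * sparsity)           # how many weights to zero
    threshold = sorted(abs(w) for w in weights)[k - 1] if k else 0.0
    return [0.0 if abs(w) <= threshold else w for w in weights]

layer = [0.9, -0.05, 0.4, 0.01, -0.7, 0.02]
print(prune(layer, sparsity=0.5))
```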
Posted 2 weeks ago
0 years
5 - 11 Lacs
Thiruvananthapuram
On-site
Required Skills

We are looking for an experienced AI Engineer to join our team. The ideal candidate will have a strong background in designing, deploying, and maintaining advanced AI/ML models with expertise in Natural Language Processing (NLP), Computer Vision, and architectures like Transformers and Diffusion Models. You will play a key role in developing AI-powered solutions, optimizing performance, and deploying and managing models in production environments.

Key Responsibilities

1. AI Model Development and Optimization:
- Design, train, and fine-tune AI models for NLP, Computer Vision, and other domains using frameworks like TensorFlow and PyTorch.
- Work on advanced architectures, including Transformer-based models (e.g., BERT, GPT, T5) for NLP tasks and CNN-based models (e.g., YOLO, VGG, ResNet) for Computer Vision applications.
- Utilize techniques like PEFT (Parameter-Efficient Fine-Tuning) and SFT (Supervised Fine-Tuning) to optimize models for specific tasks.
- Build and train RLHF (Reinforcement Learning with Human Feedback) and RL-based models to align AI behavior with real-world objectives.
- Explore multimodal AI solutions combining text, vision, and audio using generative deep learning architectures.

2. Natural Language Processing (NLP):
- Develop and deploy NLP solutions, including language models, text generation, sentiment analysis, and text-to-speech systems.
- Leverage advanced Transformer architectures (e.g., BERT, GPT, T5) for NLP tasks.

3. AI Model Deployment and Frameworks:
- Deploy AI models using frameworks like vLLM, Docker, and MLflow in production-grade environments.
- Create robust data pipelines for training, testing, and inference workflows.
- Implement CI/CD pipelines for seamless integration and deployment of AI solutions.

4. Production Environment Management:
- Deploy, monitor, and manage AI models in production, ensuring performance, reliability, and scalability.
- Set up monitoring systems using Prometheus to track metrics like latency, throughput, and model drift.

5. Data Engineering and Pipelines:
- Design and implement efficient data pipelines for preprocessing, cleaning, and transformation of large datasets.
- Integrate with cloud-based data storage and retrieval systems for seamless AI workflows.

6. Performance Monitoring and Optimization:
- Optimize AI model performance through hyperparameter tuning and algorithmic improvements.
- Monitor performance using tools like Prometheus, tracking key metrics (e.g., latency, accuracy, model drift, and error rates).

7. Solution Design and Architecture:
- Collaborate with cross-functional teams to understand business requirements and translate them into scalable, efficient AI/ML solutions.
- Design end-to-end AI systems, including data pipelines, model training workflows, and deployment architectures, ensuring alignment with business objectives and technical constraints.
- Conduct feasibility studies and proof-of-concepts (PoCs) for emerging technologies to evaluate their applicability to specific use cases.

8. Stakeholder Engagement:
- Act as the technical point of contact for AI/ML projects, managing expectations and aligning deliverables with timelines.
- Participate in workshops, demos, and client discussions to showcase AI capabilities and align solutions with client needs.

Technical Skills
- Proficient in Python, with strong knowledge of libraries like NumPy, Pandas, SciPy, and Matplotlib for data manipulation and visualization.
- Expertise in TensorFlow, PyTorch, Scikit-learn, and Keras for building, training, and optimizing machine learning and deep learning models.
- Hands-on experience with Transformer libraries like Hugging Face Transformers, OpenAI APIs, and LangChain for NLP tasks.
- Practical knowledge of CNN architectures (e.g., YOLO, ResNet, VGG) and Vision Transformers (ViT) for Computer Vision applications.
- Proficiency in developing and deploying Diffusion Models like Stable Diffusion, SDXL, and other generative AI frameworks.
- Experience with RLHF (Reinforcement Learning with Human Feedback) and reinforcement learning algorithms for optimizing AI behaviors.
- Proficiency with Docker and Kubernetes for containerization and orchestration of AI workflows.
- Hands-on experience with MLOps tools such as MLflow for model tracking and CI/CD integration in AI pipelines.
- Expertise in setting up monitoring tools like Prometheus and Grafana to track model performance, latency, throughput, and drift.
- Knowledge of performance optimization techniques, such as quantization, pruning, and knowledge distillation, to improve model efficiency.
- Experience in building data pipelines for preprocessing, cleaning, and transforming large datasets using tools like Apache Airflow and Luigi.
- Familiarity with cloud-based storage systems (e.g., AWS S3, Google BigQuery) for efficient data handling in AI workflows.
- Strong understanding of cloud platforms (AWS, GCP, Azure) for deploying and scaling AI solutions.
- Knowledge of advanced search technologies such as Elasticsearch for indexing and querying large datasets.
- Familiarity with edge deployment frameworks and optimization for resource-constrained environments.

Qualifications
- Bachelor's or Master's degree in Data Science, Statistics, Mathematics, Computer Science, or a related field.
- Experience: 2.5 to 5 yrs

Location: Trivandrum
Job Type: Full-time
Pay: ₹500,000.00 - ₹1,100,000.00 per year
Benefits: Health insurance, Provident Fund
Location Type: In-person
Schedule: Day shift
Work Location: In person
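The latency monitoring that a Prometheus + Grafana setup would chart can be reduced to a rolling window with a percentile check against an SLO. A minimal sketch; the window size, SLO threshold, and sample values are illustrative assumptions:

```python
import math
from collections import deque

# Rolling p95 latency with a simple SLO-breach check, using the
# nearest-rank percentile definition.
class LatencyMonitor:
    def __init__(self, window=100, p95_slo_ms=250.0):
        self.samples = deque(maxlen=window)
        self.p95_slo_ms = p95_slo_ms

    def observe(self, latency_ms):
        self.samples.append(latency_ms)

    def p95(self):
        ordered = sorted(self.samples)
        rank = max(math.ceil(0.95 * len(ordered)) - 1, 0)  # nearest-rank index
        return ordered[rank]

    def breached(self):
        return self.p95() > self.p95_slo_ms

mon = LatencyMonitor()
for ms in [120, 110, 400, 130, 115, 125, 118, 122, 119, 117]:
    mon.observe(ms)
print(mon.p95(), mon.breached())
```

Tracking the 95th percentile rather than the mean is the standard choice here because a single slow request (the 400 ms sample) is exactly what tail-latency SLOs are meant to surface.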
Posted 2 weeks ago
3.0 - 5.0 years
6 - 8 Lacs
Thiruvananthapuram
On-site
Experience Required: 3-5 years of hands-on experience in full-stack development, system design, and supporting AI/ML data-driven solutions in a production environment.

Key Responsibilities
- Implementing Technical Designs: Collaborate with architects and senior stakeholders to understand high-level designs and break them down into detailed engineering tasks. Implement system modules and ensure alignment with architectural direction.
- Cross-Functional Collaboration: Work closely with software developers, data scientists, and UI/UX teams to translate system requirements into working code. Clearly communicate technical concepts and implementation plans to internal teams.
- Stakeholder Support: Participate in discussions with product and client teams to gather requirements. Provide regular updates on development progress and raise flags early to manage expectations.
- System Development & Integration: Develop, integrate, and maintain components of AI/ML platforms and data-driven applications. Contribute to scalable, secure, and efficient system components based on guidance from architectural leads.
- Issue Resolution: Identify and debug system-level issues, including deployment and performance challenges. Proactively collaborate with DevOps and QA to ensure resolution.
- Quality Assurance & Security Compliance: Ensure that implementations meet coding standards, performance benchmarks, and security requirements. Perform unit and integration testing to uphold quality standards.
- Agile Execution: Break features into technical tasks, estimate efforts, and deliver components in sprints. Participate in sprint planning, reviews, and retrospectives with a focus on delivering value.
- Tool & Framework Proficiency: Use modern tools and frameworks in your daily workflow, including AI/ML libraries, backend APIs, front-end frameworks, databases, and cloud services, contributing to robust, maintainable, and scalable systems.
- Continuous Learning & Contribution: Keep up with evolving tech stacks and suggest optimizations or refactoring opportunities. Bring learnings from the industry into internal knowledge-sharing sessions.
- Proficiency in Using AI Copilots for Coding: Adapt to emerging tools and apply prompt engineering to use AI effectively for day-to-day coding needs.

Technical Skills
- Hands-on experience with Python-based AI/ML development using libraries such as TensorFlow, PyTorch, scikit-learn, or Keras.
- Hands-on exposure to self-hosted or managed LLMs, supporting integration and fine-tuning workflows as per system needs while following architectural blueprints.
- Practical implementation of NLP/CV modules using tools like SpaCy, NLTK, Hugging Face Transformers, and OpenCV, contributing to feature extraction, preprocessing, and inference pipelines.
- Strong backend experience using Django, Flask, or Node.js, and API development (REST or GraphQL).
- Front-end development experience with React, Angular, or Vue.js, with a working understanding of responsive design and state management.
- Development and optimization of data storage solutions using SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra), with hands-on experience configuring indexes, optimizing queries, and using caching tools like Redis and Memcached.
- Working knowledge of microservices and serverless patterns, participating in building modular services, integrating event-driven systems, and following best practices shared by architectural leads.
- Application of design patterns (e.g., Factory, Singleton, Observer) during implementation to ensure code reusability, scalability, and alignment with architectural standards.
- Exposure to big data tools like Apache Spark and Kafka for processing datasets.
- Familiarity with ETL workflows and cloud data warehouses, using tools such as Airflow, dbt, BigQuery, or Snowflake.
- Understanding of CI/CD, containerization (Docker), IaC (Terraform), and cloud platforms (AWS, GCP, or Azure).
- Implementation of cloud security guidelines, including setting up IAM roles, configuring TLS/SSL, and working within secure VPC setups, with support from cloud architects.
- Exposure to MLOps practices, model versioning, and deployment pipelines using MLflow, FastAPI, or AWS SageMaker.
- Configuration and management of cloud services such as AWS EC2, RDS, S3, Load Balancers, and WAF, supporting scalable infrastructure deployment and reliability engineering efforts.

Personal Attributes
- Proactive Execution and Communication: Able to take architectural direction and implement it independently with minimal rework, communicating regularly with stakeholders.
- Collaboration: Comfortable working across disciplines with designers, data engineers, and QA teams.
- Responsibility: Owns code quality and reliability, especially in production systems.
- Problem Solver: Demonstrated ability to debug complex systems and contribute to solutioning.

Key Skills: Python, Django, Django ORM, HTML, CSS, Bootstrap, JavaScript, jQuery, Multi-threading, Multi-processing, Database Design, Database Administration, Cloud Infrastructure, Data Science, self-hosted LLMs

Qualifications: Bachelor’s or Master’s degree in Computer Science, Information Technology, Data Science, or a related field. Relevant certifications in cloud or machine learning are a plus.

Package: 6-11 LPA
Job Types: Full-time, Permanent
Pay: ₹600,000.00 - ₹800,000.00 per year
Benefits: Health insurance, Life insurance, Provident Fund
Schedule: Day shift, Monday to Friday
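Of the design patterns this listing names (Factory, Singleton, Observer), the Observer pattern is the one most directly tied to the event-driven systems also mentioned. A minimal sketch; the event-bus name and the model-lifecycle event are illustrative assumptions:

```python
# Observer pattern: subscribers register callbacks with a subject and are
# notified of every published event, decoupling publishers from consumers.
class EventBus:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, event):
        for callback in self._subscribers:
            callback(event)

log = []
bus = EventBus()
bus.subscribe(lambda e: log.append(f"metrics: {e}"))
bus.subscribe(lambda e: log.append(f"alerting: {e}"))
bus.publish("model_deployed")
print(log)
```

The publisher never needs to know who is listening, which is why the same shape reappears at larger scale as message queues and pub/sub topics.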
Posted 2 weeks ago
0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Viziverse is creating a whole new genre of mobile video games! We're an exciting new startup founded by an experienced tech entrepreneur with multiple successful exits and degrees from MIT and Harvard Business School.

As a Computer Vision Engineer, you’ll have the chance to develop and implement computer vision capabilities such as gesture recognition, pose estimation, segmentation, and multimodal AI for a novel approach to video-recognition-based video games that are fun, cool, and showcase and build upon our technology’s innovative capabilities. Moreover, you’ll have the opportunity to work directly with the founder on things that will have a direct impact on the trajectory of both the technology and the company.

What You’ll Do:
- Improve and innovate upon existing approaches for segmentation, pose estimation, gesture recognition, SLAM, etc. for novel types of video games
- Explore application of the latest cutting-edge technologies such as multimodal AI and VLMs (Vision Language Models)
- If interested and capable, implement (as well as propose) games or other demos in C#/Unity to utilize new video recognition capabilities

Must-Haves:
- Experience with C# development
- Experience implementing and utilizing AI/ML models in C#
- Experience with scripting languages (Python)
- Ability to write clean, readable code
- Ability to work independently and efficiently manage one’s time
- Ability to communicate effectively and work well with others

Nice-to-Haves:
- Experience with Unity development
- Experience with inference in Unity
- Experience with Human Body Segmentation, Pose Estimation, and Gesture Recognition
- Experience with Multimodal AI and VLMs (Vision Language Models)
- Experience with Computational Geometry, Mesh Generation, 3D Reconstruction, NeRF, Gaussian Splats, etc.
- Experience with Monocular Visual SLAM

(As relevant for any of the above, please let us know what libraries you’ve used and/or whether you’ve developed your own algorithms.)

Why join the Viziverse team?
- Build a novel platform that is the first to implement video-recognition-based gaming
- Work with an accomplished startup founder to directly impact the industry
- Join a startup and get an early seat on a rocket ship to something potentially huge!

Ready? If working on something fun, entrepreneurial, innovative, and industry-changing sounds appealing to you, then let’s talk. (And we’ll of course keep it confidential.)

(Keywords: Computer Vision, CV, Machine Vision, Multimodal AI, VLM, Human Computer Interaction, HCI, Extended Reality, XR, Virtual Reality, VR, Augmented Reality, AR, Mixed Reality, MR, Metaverse)
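To make the gesture-recognition work concrete: once a pose-estimation model supplies 2D keypoints, simple game gestures can be classified with geometric rules before reaching for a learned classifier. A toy sketch (keypoint names, coordinates, and the "arm raised" threshold are illustrative assumptions; a Unity implementation would express the same math in C#):

```python
import math

# Rule-based gesture recognition from 2D pose keypoints (image y grows
# downward, so raising the arm decreases the elbow's y coordinate).
def arm_angle(shoulder, elbow):
    """Angle of the upper arm above horizontal, in degrees."""
    dx = elbow[0] - shoulder[0]
    dy = shoulder[1] - elbow[1]
    return math.degrees(math.atan2(dy, dx))

def classify(keypoints):
    angle = arm_angle(keypoints["r_shoulder"], keypoints["r_elbow"])
    return "arm_raised" if angle > 45 else "arm_down"

pose = {"r_shoulder": (100, 200), "r_elbow": (110, 120)}
print(classify(pose))
```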
Posted 2 weeks ago
4.0 - 6.0 years
0 Lacs
Hyderābād
On-site
Req ID: 328025

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Systems Integration Specialist to join our team in Bangalore, Karnātaka (IN-KA), India (IN).

Experience: 4-6 years

Key Responsibilities:
- Build machine learning models to predict asset health, process anomalies, and optimize operations.
- Work on sensor data pre-processing, model training, and inference deployment.
- Collaborate with simulation engineers for closed-loop systems.

Skills Required:
- Experience in time-series modeling, regression/classification, and anomaly detection.
- Familiarity with Python, scikit-learn, TensorFlow/PyTorch.
- Experience with MLOps on Azure ML, Databricks, or similar.
- Understanding of manufacturing KPIs (e.g., OEE, MTBF, cycle time).

About NTT DATA

NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users.
If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
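The sensor anomaly detection this role describes can be illustrated with a minimal sketch: a rolling z-score detector in plain Python. This is an assumption-laden toy (the window size, threshold, and function name are illustrative, not from the posting), and a production system would use richer time-series models, but it shows the basic pre-processing-plus-detection loop:

```python
import statistics

def rolling_zscore_anomalies(readings, window=10, threshold=3.0):
    """Flag indices whose value deviates more than `threshold`
    standard deviations from the trailing window's mean."""
    anomalies = []
    for i in range(window, len(readings)):
        win = readings[i - window:i]
        mean = statistics.fmean(win)
        stdev = statistics.pstdev(win)
        if stdev == 0:
            continue  # flat window: no scale to measure deviation against
        if abs(readings[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# Steady cyclic sensor signal with one injected spike at index 25.
signal = [20.0 + 0.1 * (i % 5) for i in range(50)]
signal[25] = 35.0
print(rolling_zscore_anomalies(signal))  # -> [25]
```

The trailing window keeps the detector causal (it never looks ahead), which matters when the same logic runs at inference time on streaming sensor data.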
Posted 2 weeks ago
9.0 years
0 Lacs
India
On-site
Location: IN - Hyderabad Telangana Goodyear Talent Acquisition Representative: Kim Tarcelo Sponsorship Available: No Relocation Assistance Available: No Job Responsibilities: You analyze business and technical needs, translating processes into practice using the current Information Technology toolsets. You implement new solutions through configuration and the creation of functional specifications, facilitating solution realization. You conduct full integration testing of new or changed business/technical processes, and develop business/technical cycle tests for use by stakeholders. You document all process changes and share relevant knowledge with stakeholders. You troubleshoot, investigate, and persist in finding solutions to problems with unknown causes, where precedents do not exist, by applying logic, inference, creativity, and initiative. You provide cross-functional support and maintenance for the responsible business/technical areas. You conduct cost/benefit analyses by evaluating alternative design approaches to determine the best-balanced solution. The best-balanced solution satisfies immediate stakeholder needs, meets system requirements, and facilitates subsequent change. You assume a leadership role in small initiatives, playing the key contributor, facilitator, or group lead. Qualifications: You hold a Bachelor's degree in MIS, Computer Science, Engineering, Technology, Business Administration, or, in lieu of a degree, have 9 years of IT experience. You have at least 4 years of experience in IT, with a minimum of 2 years of experience in SAP. You possess techno-functional knowledge of SAP IT applications in the relevant business area. You have the ability to understand business processes and needs, delivering prompt, efficient, and high-quality service to the business. You have strong analytical and problem-solving skills, with excellent written and verbal communication skills and a strong command of English. 
You have solid solution design capabilities across functions and applications. You are able to work flexible hours as required for special occasions. Goodyear is an Equal Employment Opportunity and Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to that individual's race, color, religion or creed, national origin or ancestry, sex (including pregnancy), sexual orientation, gender identity, age, physical or mental disability, ethnicity, citizenship, or any other characteristic protected by law. Goodyear is one of the world’s largest tire companies. It employs about 74,000 people and manufactures its products in 57 facilities in 23 countries around the world. Its two Innovation Centers in Akron, Ohio and Colmar-Berg, Luxembourg strive to develop state-of-the-art products and services that set the technology and performance standard for the industry. For more information about Goodyear and its products, go to www.goodyear.com/corporate
Posted 2 weeks ago
1.0 years
3 - 9 Lacs
Hyderābād
On-site
At OneNote, we are driven by a bold vision: "To help activate a second brain for everyone to realize their full potential." We are embarking on the next chapter of our evolution via Copilot Notebooks: notebooks designed for an AI-powered future. We're building solutions that make capturing ideas seamless, understanding complex information intuitive, and taking informed action easy. Whether it’s brainstorming the next big idea, organizing life’s intricate details, or simply finding clarity amid complexity, OneNote Copilot Notebooks is here. Join us as we reshape the future of AI by turning possibilities into realities — and help millions of users across the globe activate their second brain. We are looking for a Data Scientist II to join our team and help us shape the future of OneNote. In this role, you will partner with product, design, and engineering teams to deliver actionable insights, build experimentation frameworks, and identify growth opportunities. Your work will directly influence product development and user engagement strategies across millions of users. Our culture thrives on innovation, inclusion, growth mindset, and a strong sense of purpose. If you’re passionate about using data to drive decisions and want to work on a high-impact product at the cutting edge of productivity and AI, we’d love to hear from you. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. Responsibilities You will understand each customer’s business goals and learn best practices for identifying growth opportunities. You’ll also examine projects through a customer-oriented focus and manage customer expectations regarding project progress. 
Collaborate with cross-functional teams to define metrics, design experiments, and uncover user behaviors that influence product adoption and growth. Use statistical analysis, data mining, and machine learning techniques to generate insights from large-scale structured and unstructured data. Acquire the data necessary for your project plan and use querying, visualization, and reporting techniques to describe that data. You’ll also explore data for key attributes and collaborate with others to perform data science experiments using established methodologies. Apply modeling techniques, selecting the correct tool and approach to complete objectives, and evaluate the output for statistical and business significance. You’ll also analyze model performance and incorporate customer feedback into its evaluation. Build dashboards and reports that enable product teams to track key performance indicators and make data-informed decisions. Develop and iterate on models to identify high-value scenarios, recommend features, and improve user retention and engagement. Communicate insights clearly and effectively to both technical and non-technical stakeholders, influencing product strategy and priorities. Contribute to a culture of data excellence by championing best practices in experimentation, measurement, and data governance. Understand the current state of the industry, including current trends, so that you can contribute to thought-leadership best practices. 
Qualifications Required Qualifications: Doctorate in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field OR Master's Degree in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field AND 1+ year(s) data-science experience (e.g., managing structured and unstructured data, applying statistical techniques) OR Bachelor's Degree in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field AND 2+ years data-science experience (e.g., managing structured and unstructured data, applying statistical techniques) OR equivalent experience. 1+ year(s) customer-facing, project-delivery experience, professional services, and/or consulting experience. Proficiency in SQL and at least one programming language such as Python or R. Experience with business intelligence tools (e.g., Power BI, Tableau). Strong statistical knowledge and experience with A/B testing, causal inference, or other experimentation methodologies. Experience working with large datasets and big data technologies (e.g., Azure Data Lake, Synapse, Databricks, Spark, or equivalent). Ability to work independently and collaboratively in a fast-paced, ambiguous environment. Candidate must be comfortable manipulating and analyzing complex, high-dimensional data from varying sources to solve difficult problems. Candidate must be able to communicate complex ideas and concepts to leadership and deliver results. Other Requirements: Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include but are not limited to the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter. 
Preferred Qualifications: Experience in product analytics, growth strategy, or user engagement optimization. Familiarity with Microsoft Office ecosystem or productivity tools is a plus. Experience with Copilot/LLM-related user scenarios or AI-driven products. Strong storytelling and communication skills, with the ability to turn complex data into clear, actionable narratives for executives and product teams. Passion for building delightful and impactful user experiences with measurable outcomes. Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
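The A/B testing experience the qualifications call for typically boils down to comparing two conversion rates and deciding whether the difference is statistically significant. A minimal stdlib-only sketch of a two-sided two-proportion z-test (the conversion counts below are made up purely for illustration):

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: control converts 4.8%, treatment 5.6%.
z, p = two_proportion_ztest(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
print(f"z={z:.2f}, p={p:.4f}")
```

With these made-up numbers the lift is significant at the usual 5% level; real experimentation platforms layer power analysis, sequential testing, and guardrail metrics on top of this basic building block.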
Posted 2 weeks ago
0 years
0 Lacs
Hyderābād
On-site
Apple is where individual imaginations gather together, committing to the values that lead to great work. Every new product we build, service we create, or Apple Store experience we deliver is the result of us making each other’s ideas stronger. That happens because every one of us shares a belief that we can make something wonderful and share it with the world, changing lives for the better! It’s the diversity of our people and their thinking that inspires the innovation that runs through everything we do. When we bring everybody in, we can do the best work of our lives. Join Apple, and help us leave the world better than we found it. At Apple, we build products that enrich the lives of millions of customers. Our SAP team is part of the global Information Systems & Technology (IS&T) organization. We build systems that support customer ordering, fulfilment and service of Apple products. We work with multiple business functions. Our teams develop a collaborative environment with creative, expert & fun people using brand-new technologies. Engineering perfection is encouraged. Come join us for this once-in-a-lifetime opportunity to build solutions for Apple that have worldwide impact. Description The successful candidate will work closely across business and technical teams delivering sophisticated projects in Apple’s fast-paced environment. The role includes responsibility for all aspects of a project, including: Proposing appropriate designs to meet the business requirements. Presenting the solution for design reviews and business playbacks. Building and reviewing detailed technical documents for new or existing enhancements. Collaborating on deliverables, dependencies and risks. Status reporting within the team and to management throughout the project lifecycle. Co-ordinating cutover and implementation activities across teams. 
Preferred Qualifications At least one full-cycle project experience implementing AI or ML projects. Excellent problem-solving and analytical skills. Proven record of leading projects with timely delivery and working experience with global teams. Familiarity with ML libraries and frameworks, including TensorFlow, PyTorch, Pandas, and Scikit-learn. Ability to fine-tune an LLM is a plus. Understanding of RAG and graph-based RAG methodologies. Hands-on experience with LangChain and LangGraph is a plus. Skills in rapid prototyping and experimentation with generative models. Experience in performance optimisation, particularly in enhancing model performance and inference speed for real-time applications, is a plus. Submit CV
Posted 2 weeks ago
0 years
1 - 9 Lacs
India
On-site
Job Title: AI/ML Cloud Engineer – Realistic Video Generation Location: Zirakpur, Mohali, Punjab, India Type: Full-Time Industry: AI & Cloud Solutions Experience Level: Mid to Senior Salary is no bar for the ideal candidate. About Us We are a forward-thinking company leveraging cutting-edge AI and cloud technologies to deliver next-generation solutions for our clients. Our focus is on creating hyper-realistic, AI-generated videos for a wide range of industries including marketing, entertainment, training, and more. We're seeking a talented AI/ML Cloud Engineer who can design, build, and optimize scalable pipelines to bring these videos to life. Your Role As our AI/ML Cloud Engineer, you'll be responsible for building end-to-end systems that enable the generation of realistic videos using AI models (e.g., diffusion, generative adversarial networks, neural rendering). You will work at the intersection of machine learning, cloud infrastructure, and video processing. Responsibilities Develop and deploy AI/ML models for realistic video generation Design cloud-based pipelines for scalable media processing Integrate video synthesis tools with client-facing applications Optimize inference performance using GPU acceleration and distributed computing Collaborate with designers, data scientists, and engineers to fine-tune model outputs Ensure secure, efficient, and maintainable cloud infrastructure (AWS, GCP, or Azure) Stay updated with the latest in generative AI and video synthesis research Requirements Proven experience in AI/ML, particularly in generative models (GANs, VAEs, Diffusion Models) Solid understanding of video processing and computer vision techniques Hands-on experience with frameworks like PyTorch, TensorFlow, or similar Proficiency with cloud platforms (AWS, GCP, Azure) Familiarity with media encoding/decoding and streaming protocols Strong programming skills (Python, C++, etc.) 
Ability to work independently and manage multiple projects Preferred Qualifications Experience with any tools like Google Veo 3, RunwayML, Sora, DeepMotion, or similar video generation platforms Knowledge of 3D rendering engines or motion capture workflows Background in synthetic media ethics and responsible AI development What We Offer Competitive salary and equity options Work from office position with flexible working hours Access to powerful compute resources Opportunity to work on cutting-edge AI applications Supportive, innovative team environment How to Apply Send your resume, portfolio (if applicable), and a short note on your experience with AI-generated video to info@estina.in Job Types: Full-time, Permanent Pay: ₹15,000.00 - ₹75,000.00 per month Schedule: Day shift Morning shift Work Location: In person
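One common pattern behind "optimize inference performance using GPU acceleration" is batching: grouping frames so the model runs one forward pass per batch instead of per frame, which amortizes per-call overhead. A framework-free sketch of the control flow (the `fake_model` stand-in and all names here are illustrative assumptions, not the company's pipeline):

```python
def batched(frames, batch_size):
    """Yield fixed-size batches of frames."""
    for i in range(0, len(frames), batch_size):
        yield frames[i:i + batch_size]

def run_inference(frames, model, batch_size=8):
    """Run the model once per batch and flatten the results,
    preserving the original frame order."""
    outputs = []
    for batch in batched(frames, batch_size):
        outputs.extend(model(batch))  # in practice: one GPU call per batch
    return outputs

# Stand-in "model" that tags each frame descriptor; a real pipeline
# would call a diffusion or super-resolution network here.
def fake_model(batch):
    return [f"upscaled:{frame}" for frame in batch]

frames = [f"frame{i}" for i in range(20)]
out = run_inference(frames, fake_model)
print(len(out), out[0])  # 20 upscaled:frame0
```

In a real deployment the batch size is tuned against GPU memory, and the same structure extends naturally to distributing batches across workers.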
Posted 2 weeks ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Hello, Truecaller is calling you from Bangalore, India! Ready to pick up? Our goal is to make communication smarter, safer, and more efficient, all while building trust everywhere. We're all about bringing you smart services with a big social impact, keeping you safe from fraud, harassment, scam calls or messages, so you can focus on the conversations that matter. Truecaller is among the top 20 most downloaded apps globally and is the world’s #1 caller ID and spam-blocking service for Android and iOS, with extensive AI capabilities and more than 450 million active users per month. Founded in 2009, the company is listed on Nasdaq OMX Stockholm and categorized as a Large Cap. Our focus on innovation, operational excellence, sustainable growth, and collaboration has resulted in consistently high profitability and strong EBITDA margins. We are a team of 400 people from ~35 different nationalities, spread across our headquarters in Stockholm and offices in Bangalore, Mumbai, Gurgaon and Tel Aviv, with high ambitions. We in the Insights Team are responsible for SMS Categorization, Fraud detection and other Smart SMS features within the Truecaller app. The OTP & bank notifications, bill & travel reminder alerts are some examples of the Smart SMS features. The team has developed a patented offline text parser that powers all these features, and the team is also exploring cutting-edge technologies like LLMs to enhance the Smart SMS features. The team’s mission is to become the world’s most loved and trusted SMS app, which is aligned with Truecaller’s vision to make communication safe and efficient. Smart SMS is used by over 90M users every day. As an ML Engineer, you will be responsible for collecting, organizing, analyzing, and interpreting Truecaller data with a focus on NLP. In this role, you will be working hands-on to optimize the training and deployment of ML models to be quick and cost-efficient. Also, you will be pivotal in advancing our work with large language models and on-device models across diverse regions. 
Your expertise will enhance our natural language processing, machine learning, and predictive analytics capabilities. What You Bring In 3+ years in machine learning engineering, with hands-on involvement in feature engineering, model development, and deployment. Experience in Natural Language Processing (NLP), with a deep understanding of text processing, model development, and deployment challenges in the domain. Proven ability to develop, deploy, and maintain machine learning models in production environments, ensuring scalability, reliability, and performance. Strong familiarity with ML frameworks like TensorFlow, PyTorch, and ONNX, and experience in a tech stack such as Kubernetes, Docker, APIs, Vertex AI, GCP. Experience deploying models across backend and mobile platforms. Experience fine-tuning and optimizing LLM prompts for domain-specific applications. Ability to optimize feature engineering, model training, and deployment strategies for performance and efficiency. Strong SQL and statistical skills. Programming knowledge in at least one language, such as Python or R (preferably Python). Knowledge of machine learning algorithms. Excellent teamwork and communication skills, with the ability to work cross-functionally with product, engineering, and data science teams. Knowledge of retrieval-based pipelines to enhance LLM performance is good to have. The Impact You Will Create Collaborate with Product and Engineering to scope, design, and implement systems that solve complex business problems, ensuring they are delivered on time and within scope. Design, develop, and deploy state-of-the-art NLP models, contributing directly to message classification and fraud detection at scale for millions of users. Leverage cutting-edge NLP techniques to enhance message understanding, spam filtering, and fraud detection, ensuring a safer and more efficient messaging experience. 
Build and optimize ML models that can efficiently handle large-scale data processing while maintaining accuracy and performance. Work closely with data scientists and data engineers to enable rapid experimentation, development, and productionization of models in a cost-effective manner. Streamline the ML lifecycle, from training to deployment, by implementing automated workflows, CI/CD pipelines, and monitoring tools for model health and performance. Stay ahead of advancements in ML and NLP, proactively identifying opportunities to enhance model performance, reduce latency, and improve user experience. Your work will directly impact millions of users, improving message classification, fraud detection, and the overall security of messaging platforms. It Would Be Great If You Also Have Understanding of Conversational AI Deploying NLP models in production Working knowledge of GCP components Cloud-based LLM inference with Ray, Kubernetes, and serverless architectures. Life at Truecaller - Behind the code: https://www.instagram.com/lifeattruecaller/ Sounds like your dream job? We will fill the position as soon as we find the right candidate, so please send your application as soon as possible. As part of the recruitment process, we will conduct a background check. This position is based in Bangalore , India. We only accept applications in English. What We Offer A smart, talented and agile team: An international team where ~35 nationalities are working together in several locations and time zones with a learning, sharing and fun environment. A great compensation package: Competitive salary, 30 days of paid vacation, flexible working hours, private health insurance, parental leave, telephone bill reimbursement, Udemy membership to keep learning and improving and Wellness allowance. Great tech tools: Pick the computer and phone that you fancy the most within our budget ranges. 
Office life: We strongly believe in in-person collaboration and follow an office-first approach while offering some flexibility. Enjoy your days with great colleagues with loads of good stuff to learn from, daily lunch and breakfast and a wide range of healthy snacks and beverages. In addition, every now and then check out the playroom for a fun break or join our exciting parties and/or team activities such as Lab days, sports meetups etc. There’s something for everyone! Come as you are: Truecaller is diverse, equal and inclusive. We need a wide variety of backgrounds, perspectives, beliefs and experiences in order to keep building our great products. No matter where you are based, which language you speak, your accent, race, religion, color, nationality, gender, sexual orientation, age, marital status, etc. All those things make you who you are, and that’s why we would love to meet you. Job info Location Bengaluru, Karnataka, India Category Data Science Team Insights Posted today
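The SMS categorization task this team works on (OTP, bank, travel alerts) is powered by Truecaller's patented offline parser; as a loose illustration of the problem shape only (emphatically not their implementation), here is a toy keyword-based categorizer. The category table is entirely hypothetical:

```python
# Hypothetical category-to-keyword table, for illustration only.
CATEGORIES = {
    "otp": ["otp", "verification code", "one-time"],
    "bank": ["debited", "credited", "a/c", "balance"],
    "travel": ["pnr", "flight", "departure", "boarding"],
}

def categorize_sms(text):
    """Return the first category whose keywords appear in the message,
    or 'other' when nothing matches."""
    lowered = text.lower()
    for category, keywords in CATEGORIES.items():
        if any(kw in lowered for kw in keywords):
            return category
    return "other"

print(categorize_sms("Your OTP is 482913. Do not share it."))  # -> otp
print(categorize_sms("INR 2,500 debited from A/C XX1234"))     # -> bank
```

A production system replaces the keyword table with trained NLP models (and, per the posting, on-device models and LLMs), but the interface — message text in, category out — stays the same.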
Posted 2 weeks ago
4.0 - 6.0 years
0 Lacs
Noida
On-site
Req ID: 328025

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Systems Integration Specialist to join our team in Bangalore, Karnātaka (IN-KA), India (IN). Experience: 4-6 years

Key Responsibilities: Build machine learning models to predict asset health, process anomalies, and optimize operations. Work on sensor data pre-processing, model training, and inference deployment. Collaborate with simulation engineers for closed-loop systems.

Skills Required: Experience in time-series modeling, regression/classification, and anomaly detection. Familiarity with Python, scikit-learn, TensorFlow/PyTorch. Experience with MLOps on Azure ML, Databricks, or similar. Understanding of manufacturing KPIs (e.g., OEE, MTBF, cycle time).

About NTT DATA: NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. 
If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
Posted 2 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are seeking an AI Architect to lead the design, development, and deployment of advanced AI systems, with a strong emphasis on Large Language Models (LLMs), fine-tuning, and customer experience (CX) technologies. This role blends deep technical expertise with leadership, research, and infrastructure planning to deliver intelligent, scalable, and customer-centric AI solutions across cloud and on-premise environments. Key Responsibilities 1. LLM Development & Fine-Tuning Architect and implement scalable solutions for training and fine-tuning LLMs. Apply prompt engineering, transfer learning, and optimization techniques to enhance model performance. Integrate LLMs into customer-facing applications such as chatbots, voice AI agents, expert AI agents, and recommendation engines. 2. Customer Experience (CX) Technology Integration Collaborate with CX and product teams to embed AI into customer journeys, improving personalization, automation, and engagement. Design AI-driven solutions for omnichannel support, sentiment analysis, and real-time feedback loops. Ensure AI systems align with customer satisfaction goals and ethical AI principles. 3. Technical Leadership & Team Management Lead and mentor a multidisciplinary team of AI engineers, data scientists, and MLOps professionals. Drive agile development practices and foster a culture of innovation and accountability. 4. Research & Innovation Conduct and apply research in NLP, LLMs, and AI infrastructure to solve real-world customer experience problems. Contribute to publications, patents, or open-source initiatives as appropriate. Guide the team and maintain detailed architecture diagrams, design documents, and technical specifications. 5. Product Roadmap & Delivery Define and execute the AI product roadmap in collaboration with engineering and business stakeholders. Manage timelines, deliverables, and cross-functional dependencies. Plan and manage GPU infrastructure for training and inference (e.g., A100, H100, L40S). 6. Deployment & MLOps Deploy AI models on Azure, AWS, and on-premise GPU clusters using containerized and scalable architectures. Integrate with CI/CD pipelines and ensure robust monitoring, logging, and rollback mechanisms. Qualifications Master’s or Ph.D. in Computer Science, AI, Machine Learning, or a related field. 5+ years of experience in AI/ML, with 1.5+ years in a leadership or GenAI architect role. Proven experience with LLMs, Transformers, and fine-tuning frameworks (e.g., Hugging Face). Strong understanding of customer experience platforms and how AI can enhance them. Proficiency in Python, PyTorch, TensorFlow, and MLOps tools. Experience with cloud platforms (Azure, AWS) and on-premise GPU infrastructure. Why Join Us? Shape the future of AI-powered customer experiences, with visibility into global customer AI deployments. Lead a high-impact team working on state-of-the-art technologies. Competitive compensation and continuous learning opportunities
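The GPU infrastructure planning this role mentions often starts from a back-of-the-envelope memory estimate: parameter count times bytes per parameter, plus headroom for activations and KV cache. A rough sketch (the 1.2x overhead factor and the example model size are assumptions for illustration, not a fixed rule):

```python
def model_memory_gb(n_params_billion, bytes_per_param=2, overhead=1.2):
    """Rough GPU memory estimate for serving a model:
    parameters x precision (fp16 = 2 bytes), padded by an
    illustrative overhead factor for activations and KV cache."""
    return n_params_billion * 1e9 * bytes_per_param * overhead / 1024**3

# A hypothetical 70B-parameter model served in fp16:
print(f"{model_memory_gb(70):.0f} GB")  # -> 156 GB: needs multiple 80 GB GPUs
```

Estimates like this are only a starting point for hardware sizing (A100/H100-class planning); actual requirements depend on sequence length, batch size, and quantization, so they should be validated with profiling.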
Posted 2 weeks ago
8.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Description Are You Ready to Make It Happen at Mondelēz International? Join our Mission to Lead the Future of Snacking. Make It With Pride. You will be crucial in supporting our business by creating valuable, actionable insights about the data, and communicating your findings to the business. You will work with various stakeholders to determine how to use business data for business solutions/insights. How You Will Contribute You will: Analyze and derive value from data through the application of methods such as mathematics, statistics, computer science, machine learning and data visualization. In this role you will also formulate hypotheses and test them using math, statistics, visualization and predictive modeling. Understand business challenges, create valuable actionable insights about the data, and communicate your findings to the business. After that you will work with stakeholders to determine how to use business data for business solutions/insights. Enable data-driven decision making by creating custom models or prototypes from trends or patterns discerned and by underscoring implications. Coordinate with other technical/functional teams to implement models and monitor results. Apply mathematical, statistical, predictive modelling or machine-learning techniques, with sensitivity to the limitations of the techniques. Select, acquire and integrate data for analysis. Develop data hypotheses and methods, train and evaluate analytics models, share insights and findings and continue to iterate with additional data. Develop processes, techniques, and tools to analyze and monitor model performance while ensuring data accuracy. Evaluate the need for analytics, assess the problems to be solved and what internal or external data sources to use or acquire. 
Specify and apply appropriate mathematical, statistical, predictive modelling or machine-learning techniques to analyze data, generate insights, create value and support decision making Contribute to exploration and experimentation in data visualization and you will manage reviews of the benefits and value of analytics techniques and tools and recommend improvements What You Will Bring A desire to drive your future and accelerate your career and the following experience and knowledge: Strong quantitative skillset with experience in statistics and linear algebra. A natural inclination toward solving complex problems Knowledge/experience with statistical programming languages including R, Python, SQL, etc., to process data and gain insights from it Knowledge of machine learning techniques including decision-tree learning, clustering, artificial neural networks, etc., and their pros and cons Knowledge and experience in advanced statistical techniques and concepts including, regression, distribution properties, statistical testing, etc. Good communication skills to promote cross-team collaboration Multilingual coding knowledge/experience: Java, JavaScript, C, C++, etc. Experience/knowledge in statistics and data mining techniques including random forest, GLM/regression, social network analysis, text mining, etc. Ability to use data visualization tools to showcase data for stakeholders We're looking for a Senior Data Scientist to lead data science engagements within Mondelēz International. This role involves owning the full lifecycle of data science application projects, from concept to deployment and optimization. You'll also be a strategic advisor, shaping our data science capabilities, and defining internal standards for application development. Your responsibilities also include the enterprise-level data science applications design and recommending the right tools and technologies. A good balance between traditional AI/ML and GenAI experience is preferred. 
You'll play a critical role in driving innovation, maximizing value, and fostering responsible AI adoption in D&A function. Key Responsibilities: Lead Data Science Application Development/Deployment: Lead the full lifecycle of data science projects, from ideation and design to deployment and optimization, whether developed in-house, through partner collaboration or buying off-the-shelf. Advisory AI/GenAI: Provide strategic advice on the evolution of our AI/GenAI capabilities to match company goals. Keeping up with the latest GenAI trend. Standards and Governance: Help establish/refresh and enforce programmatic approaches, governance frameworks and best practices for effective data science application building. Technology Evaluation and Recommendation: Evaluate and recommend the most appropriate AI/GenAI tools, technologies. Knowledge Sharing and Mentoring: Share knowledge and expertise with other team members, mentor junior data scientists, and spearhead the development of a strong AI community within Mondelēz. Skills and Experiences: Deep understanding of data science methodologies and implications: proficiency in machine learning, deep learning, statistical modelling, optimization, causal inference etc. Hands-on experience in cloud environment (8 years): Cloud platform, cloud-based data storage, processing, AI/ML model building, model life cycle, process orchestration, cost optimization etc. LLM Application architecture & Integration (3 years): Hands-on experience building RAG applications, clear understanding of the underline technologies. Cloud Based Deployment & Scaling: Practical experience deploying and scaling data science (including GenAI) applications in cloud environment. Familiar with scaling strategies for LLMs, and integration with other application. Collaboration with cross-functional teams/stakeholder management (5 years): Proven ability to collaborate effectively with cross-functional teams. 
Excellent communication skills, both written and verbal to articulate technical concepts. A big emphasis on the ability to listen and capture key information. Qualifications: Master’s degree in a Quantitative Discipline, PhD preferred. Minimum 8 years of experience in data science/AI. Minimum 2 years of GenAI experience. Within Country Relocation support available and for candidates voluntarily moving internationally some minimal support is offered through our Volunteer International Transfer Policy Business Unit Summary At Mondelēz International, our purpose is to empower people to snack right by offering the right snack, for the right moment, made the right way. That means delivering a broad range of delicious, high-quality snacks that nourish life's moments, made with sustainable ingredients and packaging that consumers can feel good about. We have a rich portfolio of strong brands globally and locally including many household names such as Oreo , belVita and LU biscuits; Cadbury Dairy Milk , Milka and Toblerone chocolate; Sour Patch Kids candy and Trident gum. We are proud to hold the top position globally in biscuits, chocolate and candy and the second top position in gum. Our 80,000 makers and bakers are located in more than 80 countries and we sell our products in over 150 countries around the world. Our people are energized for growth and critical to us living our purpose and values. We are a diverse community that can make things happen—and happen fast. Mondelēz International is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law. Job Type Regular Data Science Analytics & Data Science
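As a toy illustration of the baseline behind the regression and GLM techniques this posting lists, the sketch below fits ordinary least squares on synthetic data with NumPy; all shapes, coefficients, and noise levels are invented for the example.

```python
import numpy as np

# Synthetic regression problem: 200 observations, 3 features, known
# coefficients, small Gaussian noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_beta = np.array([2.0, -1.0, 0.5])   # coefficients we hope to recover
y = X @ true_beta + rng.normal(scale=0.1, size=200)

# Solve min ||X beta - y||^2 with NumPy's least-squares routine.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta_hat
```

With this much data and little noise, `beta_hat` lands very close to `true_beta`; inspecting the residuals is the first step of the "statistical testing" the posting mentions.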
Posted 2 weeks ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Designation: ML/MLOps Engineer
Location: Noida (Sector 132)

Key Responsibilities:
• Model Development & Algorithm Optimization: Design, implement, and optimize ML models and algorithms using libraries and frameworks such as TensorFlow, PyTorch, and scikit-learn to solve complex business problems.
• Training & Evaluation: Train and evaluate models using historical data, ensuring accuracy, scalability, and efficiency while fine-tuning hyperparameters.
• Data Preprocessing & Cleaning: Clean, preprocess, and transform raw data into a suitable format for model training and evaluation, applying industry best practices to ensure data quality.
• Feature Engineering: Conduct feature engineering to extract meaningful features from data that enhance model performance and improve predictive capabilities.
• Model Deployment & Pipelines: Build end-to-end pipelines and workflows for deploying machine learning models into production environments, leveraging Azure Machine Learning and containerization technologies like Docker and Kubernetes.
• Production Deployment: Develop and deploy machine learning models to production environments, ensuring scalability and reliability using tools such as Azure Kubernetes Service (AKS).
• End-to-End ML Lifecycle Automation: Automate the end-to-end machine learning lifecycle, including data ingestion, model training, deployment, and monitoring, ensuring seamless operations and faster model iteration.
• Performance Optimization: Monitor and improve inference speed and latency to meet real-time processing requirements, ensuring efficient and scalable solutions.
• NLP, CV, GenAI Programming: Work on machine learning projects involving Natural Language Processing (NLP), Computer Vision (CV), and Generative AI (GenAI), applying state-of-the-art techniques and frameworks to improve model performance.
• Collaboration & CI/CD Integration: Collaborate with data scientists and engineers to integrate ML models into production workflows, building and maintaining continuous integration/continuous deployment (CI/CD) pipelines using tools like Azure DevOps, Git, and Jenkins.
• Monitoring & Optimization: Continuously monitor the performance of deployed models, adjusting parameters and optimizing algorithms to improve accuracy and efficiency.
• Security & Compliance: Ensure all machine learning models and processes adhere to industry security standards and compliance protocols, such as GDPR and HIPAA.
• Documentation & Reporting: Document machine learning processes, models, and results to ensure reproducibility and effective communication with stakeholders.

Required Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
• 3+ years of experience in machine learning operations (MLOps), cloud engineering, or similar roles.
• Proficiency in Python, with hands-on experience using libraries such as TensorFlow, PyTorch, scikit-learn, Pandas, and NumPy.
• Strong experience with Azure Machine Learning services, including Azure ML Studio, Azure Databricks, and Azure Kubernetes Service (AKS).
• Knowledge and experience in building end-to-end ML pipelines, deploying models, and automating the machine learning lifecycle.
• Expertise in Docker, Kubernetes, and container orchestration for deploying machine learning models at scale.
• Experience in data engineering practices and familiarity with cloud storage solutions like Azure Blob Storage and Azure Data Lake.
• Strong understanding of NLP, CV, or GenAI programming, along with the ability to apply these techniques to real-world business problems.
• Experience with Git, Azure DevOps, or similar tools to manage version control and CI/CD pipelines.
• Solid experience in machine learning algorithms, model training, evaluation, and hyperparameter tuning.
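A minimal sketch of the train, evaluate, and serialize steps that precede any AKS or Azure ML deployment, using scikit-learn on synthetic data. The registry, serving layer, and monitoring are deliberately out of scope, and the file path is illustrative.

```python
import os
import tempfile

import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic classification data standing in for historical training data.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Bundle preprocessing and model in one Pipeline so serving-time
# preprocessing cannot drift from training-time preprocessing.
pipe = Pipeline([("scale", StandardScaler()),
                 ("clf", LogisticRegression(max_iter=500))])
pipe.fit(X_train, y_train)
accuracy = pipe.score(X_test, y_test)

# Persist the whole pipeline as a single artifact and reload it, as a
# stand-in for pushing to a model registry before deployment.
path = os.path.join(tempfile.gettempdir(), "model.joblib")
joblib.dump(pipe, path)
restored = joblib.load(path)
```

Serializing the pipeline rather than the bare model is the design choice that keeps the deployed container's behavior identical to what was evaluated.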
Posted 2 weeks ago
1.0 years
0 Lacs
Salem, Tamil Nadu, India
On-site
Job Title: AI Vision Systems Engineer – Entry Level (SkySight Platform)
Location: Thoppur, Salem
Department: AI & Perception Systems
Experience: 0–1 year
Employment Type: Full-time

🛰 About SkySight
SkySight is a cutting-edge drone data management and computer vision platform developed by Raptor Aero Systems. It processes visual and geospatial data from UAV missions to deliver real-time insights, advanced object detection, and intuitive visualizations for industries including agriculture, infrastructure, and defence.

🎯 Role Overview
As a Vision Systems Engineer at SkySight, you will assist in developing and deploying computer vision modules, contributing to image and video processing pipelines, model integration, and visualization tools. This is a hands-on role focused on real-world perception challenges using aerial imagery.

🛠 Key Responsibilities
• Assist in integrating computer vision models (e.g., object detection, segmentation) into the SkySight pipeline.
• Preprocess and annotate image/video datasets for model training and evaluation.
• Support development of image analysis features such as heatmaps, annotated overlays, and anomaly detection.
• Contribute to testing and optimizing inference pipelines for GPU/CPU.
• Collaborate with AI/ML and backend teams to ensure seamless integration with cloud and dashboard services.
• Participate in regular code reviews, design discussions, and innovation brainstorming.

🧠 Required Skills
• Bachelor of Engineering in Computer Science, Electronics, Electrical, Mechatronics, or a related field.
• Basic understanding of image processing, OpenCV, and/or deep learning frameworks like PyTorch or TensorFlow.
• Familiarity with Python programming (C++ is a plus).
• Exposure to working with image/video datasets.
• Curiosity and willingness to learn about drone imagery, GIS, and embedded vision.

🌟 Preferred (Good to Have)
• Hands-on project or internship involving computer vision or AI.
• Experience working with YOLO, Mask R-CNN, or similar models.
• Exposure to tools like LabelImg, Roboflow, or CVAT.
• Understanding of REST APIs or basic Docker usage.

💡 What You'll Gain
• Mentorship from a strong AI and drone systems team.
• Experience with real-world drone imagery pipelines.
• Opportunity to grow into a full-stack vision engineer or data scientist.
• Exposure to cutting-edge technologies across edge AI, geospatial analysis, and UAV autonomy.

🦅 About Raptor Aero Systems
We're a deep-tech drone company building India's most advanced UAV platforms and autonomy stacks. SkySight is part of our vision to deliver end-to-end, production-ready solutions in aerial intelligence. Join us to shape the future of drones and visual AI.
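A toy NumPy sketch of one of the image-analysis features named above, blending a detection heatmap onto a frame. The frame and confidence map are synthetic stand-ins; a real pipeline would get them from drone footage and a detector, then colorize the result.

```python
import numpy as np

# Stand-in 64x64 grayscale frame and a binary "detector fired here" map.
rng = np.random.default_rng(1)
frame = rng.uniform(0.0, 1.0, size=(64, 64))
heat = np.zeros((64, 64))
heat[20:40, 20:40] = 1.0   # pretend activation region from a detector

# Alpha-blend the heatmap onto the frame, the core of an annotated overlay:
# pixels inside the hot region brighten, pixels outside stay near the frame.
alpha = 0.4
overlay = np.clip((1 - alpha) * frame + alpha * heat, 0.0, 1.0)
```

The same blend, applied per channel, is what tools like OpenCV's `addWeighted` perform when rendering overlays.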
Posted 2 weeks ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We're looking for a highly skilled and experienced Cloud AI Engineer to join our dynamic team. In this role, you'll be instrumental in designing, developing, and deploying cutting-edge artificial intelligence and machine learning solutions leveraging the full suite of Google Cloud Platform (GCP) services.

## Objectives of this role
- Lead the end-to-end development cycle of AI applications, from conceptualization and prototyping to deployment and optimization, with a core focus on LLM-driven solutions.
- Architect and implement highly performant and scalable AI services, effectively integrating with GCP's comprehensive AI/ML ecosystem.
- Collaborate closely with product managers, data scientists, and MLOps engineers to translate complex business requirements into tangible, AI-powered features.
- Continuously research and apply the latest advancements in LLM technology, prompt engineering, and AI frameworks to enhance application capabilities and performance.

## Responsibilities
- Develop and deploy production-grade AI applications and microservices primarily using Python and FastAPI, ensuring robust API design, security, and scalability.
- Design and implement end-to-end LLM pipelines, encompassing data ingestion, processing, model inference, and output generation.
- Utilize Google Cloud Platform (GCP) services extensively, including Vertex AI (Generative AI, Model Garden, Workbench), Cloud Functions, Cloud Run, Cloud Storage, and BigQuery, to build, train, and deploy LLMs and AI models.
- Expertly apply prompt engineering techniques and strategies to optimize LLM responses, manage context windows, and reduce hallucinations.
- Implement and manage embeddings and vector stores for efficient information retrieval and Retrieval-Augmented Generation (RAG) patterns.
- Work with advanced LLM orchestration frameworks such as LangChain, LangGraph, Google ADK, and CrewAI to build sophisticated multi-agent systems and complex AI workflows.
- Integrate AI solutions with other enterprise systems and databases, ensuring seamless data flow and interoperability.
- Participate in code reviews, establish best practices for AI application development, and contribute to a culture of technical excellence.
- Keep abreast of the latest advancements in GCP AI/ML services and broader AI/ML technologies, evaluating and recommending new tools and approaches.

## Required skills and qualifications
- Two or more years of hands-on experience as an AI Engineer with a focus on building and deploying AI applications, particularly those involving Large Language Models (LLMs).
- Strong programming proficiency in Python, with significant experience developing web APIs using FastAPI.
- Demonstrable expertise with Google Cloud Platform (GCP), specifically with services like Vertex AI (Generative AI, AI Platform), Cloud Run/Functions, and Cloud Storage.
- Proven experience in prompt engineering, including advanced techniques like few-shot learning, chain-of-thought prompting, and instruction tuning.
- Practical knowledge and application of embeddings and vector stores for semantic search and RAG architectures.
- Hands-on experience with at least one major LLM orchestration framework (e.g., LangChain, LangGraph, CrewAI).
- Solid understanding of software engineering principles, including API design, data structures, algorithms, and testing methodologies.
- Experience with version control systems (Git) and CI/CD pipelines.

## Preferred skills and qualifications
- Bachelor's or Master's degree in Computer Science.
- Experience with MLOps practices for deploying, monitoring, and maintaining AI models in production.
- Understanding of distributed computing and data processing technologies.
- Contributions to open-source AI projects or a strong portfolio showcasing relevant AI/LLM applications.
- Excellent analytical and problem-solving skills with keen attention to detail.
- Strong communication and interpersonal skills, with the ability to explain complex technical concepts to non-technical stakeholders.
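The retrieval step at the core of the RAG patterns this role mentions reduces to a nearest-neighbor search over embeddings. The sketch below uses random vectors as stand-ins; in a real system they would come from an embedding model (for example, one hosted on Vertex AI), and the chunk texts are invented for illustration.

```python
import numpy as np

# Stand-in document chunks and their (random) embedding vectors.
rng = np.random.default_rng(0)
chunks = ["refund policy text", "shipping times text", "warranty terms text"]
store = rng.normal(size=(3, 8))
store /= np.linalg.norm(store, axis=1, keepdims=True)   # unit-normalize rows

# Simulate a query whose embedding lies close to the "shipping times" chunk.
query = store[1] + 0.01 * rng.normal(size=8)
query /= np.linalg.norm(query)

# With unit vectors, the dot product is cosine similarity; the best-scoring
# chunk is what would be stuffed into the LLM prompt as grounding context.
scores = store @ query
best = chunks[int(np.argmax(scores))]
```

Production vector stores replace the brute-force `store @ query` with approximate nearest-neighbor indexes, but the similarity computation is the same.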
Posted 2 weeks ago
5.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Experience Level: 2–5 Years

About GrowthZ:
GrowthZ is poised for significant growth, focusing on AI-enabled growth products for borderless expansion. We are building innovative digital products and experiences to deliver growth automation for our clients in small and medium segments across the globe. As we scale, we are seeking passionate engineers for whom execution is a key strength.

We are looking for a skilled Backend/Data Science Engineer to join our dynamic team. You will be instrumental in building scalable APIs and integrating our services, while also leveraging your data science expertise to operationalize machine learning models and build real-time optimization systems.

Cultural Fitment
We're looking for individuals who embody our core values and thrive in a dynamic, impact-driven environment. If you resonate with the following, you'll be a great fit:
* A "solve it" mindset: You approach challenges with the belief that "I may not know it, but I will solve it for my customer." We're less concerned with what you already know and more with your ability to learn, adapt, and innovate.
* Impact over hours: We prioritize the impact you create over the number of hours you put in. Your contributions and the solutions you build are what truly matter.
* Embracing challenges: You're excited by the prospect of tackling complex problems and are committed to solving them to their core. We believe in taking on difficult tasks and seeing them through.
* Positive and empathetic collaboration: We foster a culture of smiling faces and mutual empathy. We believe a supportive and understanding team environment is key to our success and well-being.
* Flexibility and commitment: We trust our team members. If you need a day to pursue a passion, spend time with family, or simply recharge, we encourage it; we call that a weekend, regardless of the day of the week. In return, we expect mutual agreement on timelines and unwavering commitment to delivering on those agreements, as we're dedicated to solving the world's biggest problem: driving business growth for our clients.
* Passion for the journey: We're not just building disruptive technology; we're having fun while doing it! We believe in the joy of creation and the excitement of continuous learning.
* Continuous learners: We believe life is all about learning every day. If you have the passion, we're committed to building skills together.

Does this sound like a culture you'd thrive in? We're excited to see how your passion and drive can contribute to our team!

What You'll Do:
* Design, build, and maintain robust and scalable APIs using NestJS (Fastify adapter) or FastAPI.
* Integrate backend services with various AWS components (S3, SQS, SES, RDS) and internal microservices.
* Apply your data science skills to operationalize machine learning models and develop real-time optimization systems.

What You'll Bring:
* Strong programming experience in either NestJS/TypeScript or Python (with libraries like Pandas, NumPy, and scikit-learn).
* Solid SQL skills and a good understanding of PostgreSQL, TypeORM, and REST API principles.
* Practical experience with the AWS ecosystem (S3, Lambda, SQS, SES, Route 53).
* Understanding of rule engines, condition-based logic, and decision trees in machine learning.
* Knowledge of A/B testing, causal inference, and optimization techniques is a plus.
* Exposure to real-time systems or streaming data (e.g., Kafka, Pub/Sub) is a bonus.
* A collaborative approach, seeking feedback from internal and other team members.
* A go-getter attitude.
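The rule engines and condition-based logic this posting mentions can be sketched in a few lines: each rule pairs a predicate with an action, evaluated in priority order with first match winning. Rule names, thresholds, and the `fraud_score` field are all invented for illustration.

```python
from typing import Callable

# Rules in priority order; the last rule is an always-true catch-all.
RULES: list[tuple[str, Callable[[dict], bool]]] = [
    ("block",   lambda u: u["fraud_score"] > 0.9),
    ("review",  lambda u: u["fraud_score"] > 0.5),
    ("approve", lambda u: True),
]

def decide(user: dict) -> str:
    """Return the action of the first rule whose predicate matches."""
    for action, predicate in RULES:
        if predicate(user):
            return action
    return "approve"
```

Keeping rules as data rather than nested `if` statements is what lets such engines be edited, reordered, or loaded from configuration without code changes.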
Posted 2 weeks ago
5.0 years
0 Lacs
India
On-site
We're hiring a Founding AI Engineer to help build and scale the backbone of our AI system. You'll lead development across agent orchestration, tool execution, Model Context Protocol (MCP), API integration, and browser-based research workflows. You'll work closely with the founder on hands-on roadmap development, rapid prototyping, and fast iteration cycles to evolve the product quickly based on real user needs.

Responsibilities
- Build multi-agent systems capable of reasoning, tool use, and autonomous action
- Implement Model Context Protocol (MCP) strategies to manage complex, multi-source context
- Integrate third-party APIs (e.g., Crunchbase, PitchBook, CB Insights), scraping APIs, and data aggregators
- Develop browser-based agents enhanced with computer vision for dynamic research, scraping, and web interaction
- Optimize inference pipelines, task planning, and system performance
- Collaborate on architecture, prototyping, and iterative development
- Experiment with prompt chaining, tool calling, embeddings, and vector search

Requirements
- 5+ years of experience in software engineering or AI/ML development
- Strong Python skills and experience with LangChain, LlamaIndex, or agentic frameworks
- Proven experience with multi-agent systems, tool calling, or task-planning agents
- Familiarity with Model Context Protocol (MCP), Retrieval-Augmented Generation (RAG), and multi-modal context handling
- Experience with browser automation frameworks (e.g., Playwright, Puppeteer, Selenium)
- Cloud deployment and systems engineering experience (GCP, AWS, etc.)
- Self-starter attitude with strong product sense and iteration speed

Bonus Points
- Experience with AutoGen, CrewAI, OpenAgents, or ReAct-style frameworks
- Background in building AI systems that blend structured and unstructured data
- Experience working in a fast-paced startup environment
- Previous startup or technical founding-team experience

This is a unique opportunity to work directly with an industry leader in AI to build a cutting-edge, next-generation AI system from the ground up.
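At its simplest, the tool calling this role centers on is a dispatch loop: the model emits structured tool calls, and a runtime executes them. In the toy sketch below the plan is hard-coded where a real agent would get each step from an LLM, and the tool names are illustrative.

```python
# Registry of callable tools the agent is allowed to use.
TOOLS = {
    "add":   lambda a, b: a + b,
    "upper": lambda s: s.upper(),
}

def run_agent(plan):
    """Execute a list of {"tool": ..., "args": {...}} steps in order."""
    results = []
    for step in plan:
        fn = TOOLS[step["tool"]]            # dispatch to the named tool
        results.append(fn(**step["args"]))  # unpack the structured arguments
    return results

out = run_agent([
    {"tool": "add",   "args": {"a": 2, "b": 3}},
    {"tool": "upper", "args": {"s": "done"}},
])
```

Frameworks like LangChain and MCP servers elaborate this same loop with schemas, validation, and model-driven planning, but the registry-plus-dispatch core is unchanged.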
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Sony Research India is driving cutting-edge research and development in various locations around the globe, including laboratories in Japan, the United States, Europe, and Asia. We endeavor to create new technology, products, and services while sustaining Sony Group's diverse businesses in electronics, entertainment, and financial fields. For our research centre to blaze a trail in the latest technologies, we seek to foster the growth of a diverse pool of research and engineering talent and create a technology talent bank to drive research excellence worldwide. Sony Research India offers outstanding career opportunities around frontline technologies such as AI and data analytics.

What we are looking for:
A highly motivated intern who can assist in our research and development efforts on multilingual translation, domain-specific evaluation, and LLM-based modeling. The ideal candidate should have prior project-driven experience (self-directed or from previous internships) in deep learning, natural language processing, or reinforcement learning.

Key Responsibilities:
Collaborate closely with research scientists and other team members to explore and advance Neural Machine Translation (NMT) research within the scope of the project, such as translation of entertainment-domain content in various Indian and foreign languages. Interest or prior exposure to at least one of the following is desirable:
- Multilingual LLM probing and representation analysis
- Mixture of Experts (MoE) architectures
- Unsupervised or low-resource NMT

Work Location: Remote/Bengaluru/Mumbai
Duration of the paid internship: 6 months, starting August 2025. The working hours are 9:00 to 18:00 (Monday to Friday), full-time.

Essential Education: A Ph.D. candidate is preferred for this internship, with a thesis aligned toward causal inference or data science topics. We also welcome candidates with Bachelor's and Master's degrees who have sufficient project or prior internship experience in the causal area.

Skills required:
Essential Skills:
- Good understanding of deep learning and reinforcement learning
- Natural language processing
- Excellent coding skills, especially in Python and PyTorch
- Practical knowledge of the state of the art in LLMs, foundation models, and neural machine translation

Good to have Skills:
- Experience in Indian-language machine translation
- Strong mathematical aptitude
- Papers in top-tier conferences such as ICML, NeurIPS, AAAI, and ACL

Our Values:
- Dreams & Curiosity: Pioneer the future with dreams and curiosity.
- Diversity: Pursue the creation of the very best by harnessing diversity and varying viewpoints.
- Integrity & Sincerity: Earn trust for the Sony brand through ethical and responsible conduct.
- Sustainability: Fulfil our stakeholder responsibilities through disciplined business practices.

Sony Research India is committed to equal opportunity in all its employment practices, policies, and procedures, and to ensuring that no worker or potential worker receives less favourable treatment due to any characteristic protected under applicable local laws.
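The Mixture of Experts (MoE) architectures listed among the desirable topics boil down to a gate network that weights the outputs of several expert networks per input. The NumPy sketch below uses toy dimensions and random weights, and shows the dense variant; sparse MoE keeps only the top-k gate values.

```python
import numpy as np

# Toy dimensions and random parameters: a 4-d input, 3 linear "experts".
rng = np.random.default_rng(0)
d, n_experts = 4, 3
x = rng.normal(size=d)

W_gate = rng.normal(size=(n_experts, d))              # gating network
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]

# Softmax over gate logits gives a probability weight per expert.
logits = W_gate @ x
gates = np.exp(logits - logits.max())
gates /= gates.sum()

# Dense MoE forward pass: the output is the gate-weighted sum of expert outputs.
y = sum(g * (E @ x) for g, E in zip(gates, experts))
```

In transformer MoE layers the experts are feed-forward blocks and the gate runs per token, but the weighted-sum structure is the same.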
Posted 3 weeks ago
0.0 years
0 - 0 Lacs
Zirakpur, Punjab
On-site
Job Title: AI/ML Cloud Engineer – Realistic Video Generation
Location: Zirakpur, Mohali, Punjab, India
Type: Full-Time
Industry: AI & Cloud Solutions
Experience Level: Mid to Senior
Salary is no bar for the ideal candidate.

About Us
We are a forward-thinking company leveraging cutting-edge AI and cloud technologies to deliver next-generation solutions for our clients. Our focus is on creating hyper-realistic, AI-generated videos for a wide range of industries, including marketing, entertainment, and training. We're seeking a talented AI/ML Cloud Engineer who can design, build, and optimize scalable pipelines to bring these videos to life.

Your Role
As our AI/ML Cloud Engineer, you'll be responsible for building end-to-end systems that enable the generation of realistic videos using AI models (e.g., diffusion models, generative adversarial networks, neural rendering). You will work at the intersection of machine learning, cloud infrastructure, and video processing.

Responsibilities
- Develop and deploy AI/ML models for realistic video generation
- Design cloud-based pipelines for scalable media processing
- Integrate video synthesis tools with client-facing applications
- Optimize inference performance using GPU acceleration and distributed computing
- Collaborate with designers, data scientists, and engineers to fine-tune model outputs
- Ensure secure, efficient, and maintainable cloud infrastructure (AWS, GCP, or Azure)
- Stay up to date with the latest generative AI and video synthesis research

Requirements
- Proven experience in AI/ML, particularly with generative models (GANs, VAEs, diffusion models)
- Solid understanding of video processing and computer vision techniques
- Hands-on experience with frameworks like PyTorch or TensorFlow
- Proficiency with cloud platforms (AWS, GCP, Azure)
- Familiarity with media encoding/decoding and streaming protocols
- Strong programming skills (Python, C++, etc.)
- Ability to work independently and manage multiple projects

Preferred Qualifications
- Experience with video generation platforms such as Google Veo 3, RunwayML, Sora, or DeepMotion
- Knowledge of 3D rendering engines or motion-capture workflows
- Background in synthetic-media ethics and responsible AI development

What We Offer
- Competitive salary and equity options
- Work-from-office position with flexible working hours
- Access to powerful compute resources
- Opportunity to work on cutting-edge AI applications
- Supportive, innovative team environment

How to Apply
Send your resume, portfolio (if applicable), and a short note on your experience with AI-generated video to info@estina.in.

Job Types: Full-time, Permanent
Pay: ₹15,000.00 - ₹75,000.00 per month
Schedule: Day shift, Morning shift
Work Location: In person
Posted 3 weeks ago