AI/ML Engineer - Core Algorithm and Model Expert

Experience: 0 years

Salary: 0 Lacs

Posted: 2 days ago | Platform: LinkedIn

Work Mode: On-site

Job Type: Full Time

Job Description


1. Role Objective:

The engineer will be responsible for designing, developing, and optimizing advanced AI/ML models for computer vision, generative AI, audio processing, predictive analytics, and NLP applications. The role demands deep expertise in algorithm development and in deploying models as production-ready products for naval applications, and includes ensuring that models are modular, reusable, and deployable in resource-constrained environments.

2. Key Responsibilities:


2.1. Design and train models using Navy-specific data and deliver them in the form of end products.


2.2. Fine-tune open-source LLMs and speech models (e.g., LLaMA, Qwen, Mistral, Whisper, Wav2Vec, Conformer) for Navy-specific tasks.


2.3. Preprocess, label, and augment datasets.


2.4. Implement quantization, pruning, and compression for deployment-ready AI applications. 


2.5. Develop, train, fine-tune, and optimize Large Language Models (LLMs) and translation models for mission-critical AI applications of the Indian Navy. The candidate must possess a strong foundation in transformer-based architectures (e.g., BERT, GPT, LLaMA, mT5, NLLB) and hands-on experience with pretraining and fine-tuning methodologies such as Supervised Fine-Tuning (SFT), Instruction Tuning, Reinforcement Learning from Human Feedback (RLHF), and Parameter-Efficient Fine-Tuning (LoRA, QLoRA, Adapters).


2.6. Proficiency in building multilingual and domain-specific translation systems using techniques such as back-translation, domain adaptation, and knowledge distillation is essential.


2.7. The engineer should demonstrate practical expertise with libraries such as Hugging Face Transformers, PEFT, Fairseq, and OpenNMT. Knowledge of model compression, quantization, and deployment on GPU-enabled servers is highly desirable. Familiarity with MLOps, version control using Git, and cross-team integration practices is expected to ensure seamless interoperability with other AI modules.


2.8. Collaborate with backend engineers on integration via standard formats (ONNX, TorchScript).


2.9. Generate reusable inference modules that can be plugged into microservices or edge devices.


2.10. Maintain reproducible pipelines (e.g., with MLflow, DVC, Weights & Biases).
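The parameter-efficient fine-tuning named in 2.5 (LoRA) replaces a full weight update with a trainable low-rank delta on top of frozen weights. A minimal NumPy sketch of the idea, with illustrative sizes only and no tie to any particular model or library:

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r, alpha = 512, 512, 8, 16  # illustrative dimensions and LoRA rank/scale

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight (not trained)
A = rng.standard_normal((r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                    # B starts at zero, so the delta starts at zero

def lora_forward(x):
    # y = (W + (alpha / r) * B @ A) x, without materialising the merged matrix
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = lora_forward(x)

full_params = W.size           # parameters a full fine-tune would update
lora_params = A.size + B.size  # parameters LoRA actually trains
print(full_params, lora_params)  # 262144 vs 8192 -> 32x fewer trainable parameters
```

In practice this is what libraries such as Hugging Face PEFT manage per attention/projection layer; after training, the delta can be merged back into W so inference cost is unchanged.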


3. Educational Qualifications


Essential Requirements:


3.1. B.Tech/M.Tech in Computer Science, AI/ML, Data Science, Statistics, or a related field with an exceptional academic record.


3.2. Minimum 75% marks or 8.0 CGPA in relevant engineering disciplines.


Desired Specialized Certifications:


3.3. Professional ML certifications from Google, AWS, Microsoft, or NVIDIA


3.4. Deep Learning Specialization.


3.5. Computer Vision or NLP specialization certificates.


3.6. TensorFlow/PyTorch Professional Certification.


4. Core Skills & Tools:


4.1. Languages: Python (must), C++/Rust.


4.2. Frameworks: PyTorch, TensorFlow, Hugging Face Transformers.


4.3. ML Concepts: Transfer learning, RAG, XAI (SHAP/LIME), reinforcement learning, LLM fine-tuning, SFT, RLHF, LoRA, QLoRA, and PEFT.


4.4. Optimized Inference: ONNX Runtime, TensorRT, TorchScript.


4.5. Data Tooling: Pandas, NumPy, Scikit-learn, OpenCV.


4.6. Security Awareness: Data sanitization, adversarial robustness, model watermarking.


5. Core AI/ML Competencies:


5.1. Deep Learning Architectures: CNNs, RNNs, LSTMs, GRUs, Transformers, GANs, VAEs, Diffusion Models.


5.2. Computer Vision: Object detection (YOLO, R-CNN), semantic segmentation, image classification, optical character recognition, facial recognition, anomaly detection.


5.3. Natural Language Processing: BERT, GPT models, sentiment analysis, named entity recognition, machine translation, text summarization, chatbot development.


5.4. Generative AI: Large Language Models (LLMs), prompt engineering, fine-tuning, Quantization, RAG systems, multimodal AI, stable diffusion models.


5.5. Advanced Algorithms: Reinforcement learning, federated learning, transfer learning, few-shot learning, meta-learning.


6. Programming & Frameworks:


6.1. Languages: Python (expert level), R, Julia, C++ for performance optimization.


6.2. ML Frameworks: TensorFlow, PyTorch, JAX, Hugging Face Transformers, OpenCV, NLTK, spaCy.


6.3. Scientific Computing: NumPy, SciPy, Pandas, Matplotlib, Seaborn, Plotly.


6.4. Distributed Training: Horovod, DeepSpeed, FairScale, PyTorch Lightning.


7. Model Development & Optimization:


7.1. Hyperparameter tuning using Optuna, Ray Tune, or Weights & Biases.


7.2. Model compression techniques (quantization, pruning, distillation).


7.3. ONNX model conversion and optimization.
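The compression techniques in 7.2 can be illustrated with symmetric post-training int8 quantization of a weight tensor. This is a toy NumPy sketch of the arithmetic, not a production quantization pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal((256, 256)).astype(np.float32)  # fp32 weights

# Symmetric per-tensor quantization: map [-max|w|, +max|w|] onto int8 [-127, 127]
scale = np.abs(w).max() / 127.0
w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

# Dequantize to compare against the original weights
w_deq = w_int8.astype(np.float32) * scale

# Storage drops 4x (fp32 -> int8); rounding error is bounded by half a quantization step
max_err = float(np.abs(w - w_deq).max())
print(w_int8.dtype, max_err <= scale / 2 + 1e-6)
```

Frameworks such as ONNX Runtime and TensorRT apply the same idea per tensor or per channel, with calibration data choosing the scales.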


8. Generative AI & NLP Applications:


8.1. Intelligence report analysis and summarization.


8.2. Multilingual radio communication translation.


8.3. Voice command systems for naval equipment.


8.4. Automated documentation and report generation.


8.5. Synthetic data generation for training simulations.


8.6. Scenario generation for naval training exercises.


8.7. Maritime intelligence synthesis and briefing generation.


9. Experience Requirements


9.1. Hands-on experience with at least 2 major AI domains.


9.2. Experience deploying models in production environments.


9.3. Contribution to open-source AI projects.


9.4. Led development of multiple end-to-end AI products.


9.5. Experience scaling AI solutions for large user bases.


9.6. Track record of optimizing models for real-time applications.


9.7. Experience mentoring technical teams.


10. Product Development Skills


10.1. End-to-end ML pipeline development (data ingestion to model serving).


10.2. User feedback integration for model improvement.


10.3. Cross-platform model deployment (cloud, edge, mobile).


10.4. API design for ML model integration.
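The end-to-end pipeline in 10.1 (data ingestion to model serving) can be sketched with scikit-learn, which the posting lists under data tooling. Synthetic data stands in for real ingestion, so treat this as the shape of the solution rather than a naval workload:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Ingest (synthetic stand-in) and split
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One serialisable artefact: preprocessing travels with the model to serving
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipe.fit(X_train, y_train)

acc = pipe.score(X_test, y_test)
print(round(acc, 3))
```

Bundling preprocessing and model in one `Pipeline` object is what keeps training-time and serving-time transformations identical, which is the usual failure point when the two are coded separately.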


11. Cross-Compatibility Requirements:


11.1. Define model interfaces (input/output schema) for frontend/backend use.


11.2. Build CLI and REST-compatible inference tools.


11.3. Maintain shared code libraries (Git) that backend/frontend teams can directly call.


11.4. Joint debugging and model-in-the-loop testing with UI and backend teams.
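Items 11.1 and 11.2 (a shared input/output schema plus CLI/REST-compatible inference) can be sketched with the standard library alone. The `predict` function, its schema keys, and the keyword-matching "model" are all hypothetical stand-ins; a real model and an agreed schema would replace them:

```python
import argparse
import json
import sys

# Hypothetical I/O schema agreed with frontend/backend teams (11.1)
INPUT_KEYS = {"text"}
OUTPUT_KEYS = {"label", "score"}

def predict(payload: dict) -> dict:
    """Single inference entry point: same dict in/out for CLI, REST, or direct import."""
    if set(payload) != INPUT_KEYS:
        raise ValueError(f"expected keys {INPUT_KEYS}, got {set(payload)}")
    # Dummy model: flag messages mentioning 'urgent'; a real model plugs in here
    score = 0.9 if "urgent" in payload["text"].lower() else 0.1
    return {"label": "priority" if score > 0.5 else "routine", "score": score}

def main() -> None:
    # CLI wrapper (11.2); a REST route would call predict() with the parsed request body
    parser = argparse.ArgumentParser(description="toy inference CLI")
    parser.add_argument("json_payload", help='e.g. \'{"text": "urgent: engine fault"}\'')
    args = parser.parse_args()
    print(json.dumps(predict(json.loads(args.json_payload))))

if __name__ == "__main__":
    if len(sys.argv) > 1:
        main()  # CLI mode: python tool.py '{"text": "..."}'
    else:
        print(json.dumps(predict({"text": "urgent: engine fault"})))  # demo call
```

Keeping `predict` free of transport concerns is what lets the same module back a CLI, a microservice route, and an edge deployment (2.9) without duplication.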
