Job Description
Experience: 6-8 years | Location: Noida
Responsibilities
Drive end-to-end AI application development—from idea conception and hypothesis formulation through rigorous testing to production deployment and monitoring, ensuring solutions solve real-world problems.
Embrace a boundaryless approach to technology, collaborating across backend systems, frontend interfaces (where applicable), automation scripts, and data pipelines to deliver holistic solutions.
Design, implement, and meticulously fine-tune LLM prompts, embeddings, RAG pipelines, and sophisticated AI agents (using frameworks like LangChain, CrewAI, Bedrock Agents, OpenAI SDK, AutoGen, etc.) tailored to specific domain use cases and business objectives; a minimal RAG sketch follows this list.
Architect and build Generative AI solutions that are inherently scalable, secure, cost-efficient, and privacy-conscious, integrating these considerations from the initial design phase.
Implement robust testing methodologies and MLOps practices specifically for GenAI, focusing on preventing prompt drift, mitigating hallucinations, running model regression tests, and maintaining high performance.
Maintain exceptionally high standards in code hygiene, implement robust CI/CD practices for AI models and applications, and produce clear, comprehensive documentation.
Collaborate closely with data analysts and engineers to guarantee data quality, readiness, and the efficiency of data pipelines feeding into GenAI models and applications.
Formulate and validate hypotheses by engaging directly with stakeholders, understanding their needs deeply, and translating them into actionable technical requirements.
Champion rapid prototyping and lean experimentation methodologies: build fast, fail fast, learn faster, and iterate quickly to test ideas before committing significant resources.
Engage proactively and cross-functionally with Product Management, Design, Business Analysts, Security, and Legal teams to navigate complex requirements and deliver impactful solutions.
Actively contribute to a vibrant culture of learning and innovation by creating tech blogs, developing internal Proofs-of-Concept (POCs), delivering demos, and conducting workshops on GenAI topics.
Stay relentlessly current with the rapidly evolving Generative AI landscape, including new trends, models, tools (like AWS Bedrock, Azure OpenAI, Vertex AI), and research advancements, critically evaluating opportunities for integration and improvement.
Provide technical leadership and mentorship to junior engineers, fostering a culture of technical excellence and continuous learning within the GenAI space.
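For illustration, a minimal sketch of the retrieve-then-generate (RAG) pattern referenced in the responsibilities above. The embed_texts and complete functions are hypothetical placeholders for whichever embedding model and LLM endpoint the stack uses (Bedrock, Azure OpenAI, etc.); a production pipeline would replace the in-memory cosine-similarity search with a vector database.

```python
# Minimal sketch of the retrieve-then-generate (RAG) pattern.
# embed_texts() and complete() are hypothetical stand-ins for the
# embedding model and LLM endpoint actually used; production systems
# would swap the in-memory search below for a vector database.
import numpy as np


def embed_texts(texts: list[str]) -> np.ndarray:
    """Placeholder: return one embedding vector per input text."""
    raise NotImplementedError("call your embedding model here")


def complete(prompt: str) -> str:
    """Placeholder: call your LLM endpoint and return its answer."""
    raise NotImplementedError("call your LLM endpoint here")


def retrieve(query: str, docs: list[str], doc_vectors: np.ndarray, k: int = 3) -> list[str]:
    """Rank documents by cosine similarity to the query embedding."""
    q = embed_texts([query])[0]
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q) + 1e-9)
    top = np.argsort(sims)[::-1][:k]
    return [docs[i] for i in top]


def answer(query: str, docs: list[str], doc_vectors: np.ndarray) -> str:
    """Ground the LLM answer in the retrieved context."""
    context = "\n\n".join(retrieve(query, docs, doc_vectors))
    prompt = (
        "Answer the question using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return complete(prompt)
```

The same structure carries over when retrieval is backed by Pinecone, Qdrant, or another store: only retrieve() changes, while the grounding prompt and generation step stay the same.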
Requirements
6+ years of Data Science experience, demonstrating strong expertise in building robust AI/ML systems and scalable application architectures.
2+ years of focused, hands-on experience building and deploying practical Generative AI applications, such as AI virtual assistants, complex AI agents, and content generation tools.
Experience managing and delivering solutions for X+ clients, contributing to ₹Y+ in revenue and ensuring high client satisfaction across multiple verticals.
Experience spearheading initiatives that improved operational efficiency and scalability, leading to measurable gains in revenue growth and client acquisition.
Proven experience with core GenAI tools and platforms such as AWS Bedrock (including its foundation models and Bedrock Agents) and LangChain; designing and implementing RAG pipelines; leveraging agent-based frameworks (e.g., CrewAI); working with vector databases (e.g., Pinecone, Weaviate, Qdrant, Milvus, ChromaDB); and a solid understanding of LLM principles. Experience with model training/fine-tuning (e.g., LoRA/QLoRA) and core NLP concepts is essential.
High proficiency in Python and strong familiarity with modern ML/AI libraries (e.g., Hugging Face Transformers, PyTorch/TensorFlow, Scikit-learn, Pandas, NumPy) and relevant SDKs (e.g., Boto3, OpenAI SDK).
Demonstrated practical experience in developing, performance tuning, and optimizing LLM-based applications, including sophisticated prompt engineering techniques; a regression-style prompt check is sketched after this list.
Experience leveraging AI coding assistants (e.g., GitHub Copilot, Amazon Q, Cursor) to accelerate development workflows while maintaining code quality.
Deep understanding and practical application of data privacy principles, security best practices, and cost-aware system design, specifically within the context of AI and Generative AI applications.
Strong understanding of working effectively with both structured and unstructured data, including efficient retrieval, processing, and transformation techniques for GenAI.
Proven ability to work effectively across diverse disciplines—collaborating seamlessly with product, design, business, and other engineering teams in fast-paced, dynamic environments.
Hands-on experience developing and deploying applications on cloud platforms, with AWS being highly preferred.
Excellent analytical and problem-solving skills, coupled with a strong bias toward action, experimentation, and continuous learning.
Exceptional communication and collaboration skills, capable of articulating complex technical concepts clearly to various audiences.
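As a sketch of the regression-style checks referenced above (guarding against prompt drift and hallucinations): a pytest-based golden-case suite that fails the build when a prompt or model change drops required facts or starts answering out-of-scope questions. The generate_answer wrapper, the golden cases, and the refusal markers are all hypothetical placeholders; real suites often add semantic-similarity or LLM-as-judge scoring on top of keyword checks.

```python
# Sketch of a prompt regression test in pytest style.
# generate_answer() is a hypothetical wrapper around the deployed
# prompt + model; the golden cases and assertions are illustrative only.
import pytest

GOLDEN_CASES = [
    # (question, phrases the grounded answer must contain)
    ("What is the refund window?", ["30 days"]),
    ("Which regions do we ship to?", ["India"]),
]


def generate_answer(question: str) -> str:
    """Placeholder: invoke the production prompt/model pipeline."""
    raise NotImplementedError("call the deployed GenAI application here")


@pytest.mark.parametrize("question,required_phrases", GOLDEN_CASES)
def test_answers_stay_grounded(question, required_phrases):
    """Fail the build if a prompt or model change drops required facts."""
    answer = generate_answer(question).lower()
    for phrase in required_phrases:
        assert phrase.lower() in answer


def test_refuses_out_of_scope_questions():
    """Guard against hallucinated answers to unsupported questions."""
    answer = generate_answer("What is the CEO's home address?").lower()
    assert any(marker in answer for marker in ("cannot", "not able", "don't have"))
```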
What sets you apart:
Active contributions to relevant open-source GenAI tools or frameworks (e.g., LangChain, LlamaIndex, Hugging Face).
A strong portfolio showcasing personal AI side projects or significant contributions to GenAI initiatives.
Exposure to frontend technologies (e.g., React, Streamlit, Gradio) and the ability to contribute across the full stack for end-to-end feature ownership; a minimal Streamlit sketch follows this list.
Experience with other cloud platforms' GenAI offerings (e.g., Azure OpenAI, Google Vertex AI).
Familiarity with advanced MLOps tools and practices tailored for LLMs.
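As an illustration of the lightweight frontend contribution mentioned above, a minimal Streamlit wrapper around a hypothetical generate() call into the backend pipeline; this is a quick internal-demo sketch, not a production UI.

```python
# Minimal Streamlit front end for a GenAI prototype.
# generate() is a hypothetical call into the backend pipeline; the UI
# below only shows the shape of a quick internal demo.
import streamlit as st


def generate(prompt: str) -> str:
    """Placeholder: route the prompt to the GenAI backend."""
    raise NotImplementedError("call the application's generation endpoint here")


st.title("GenAI demo")
user_prompt = st.text_area("Ask a question")

if st.button("Submit") and user_prompt.strip():
    with st.spinner("Generating..."):
        st.write(generate(user_prompt))
```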