Job Description
We empower our people to stay resilient and relevant in a constantly changing world. We are looking for individuals who are always seeking creative ways to grow and learn, individuals who aspire to make a real impact, both now and in the future. If this resonates with you, then you would be a valuable addition to our dynamic international team.

As a Graduate Trainee Engineer, you will have the opportunity to contribute significantly by:
- Designing, developing, and optimizing NLP-driven AI solutions using cutting-edge models and techniques such as NER, embeddings, and summarization.
- Building and operationalizing RAG pipelines and agentic workflows to facilitate intelligent, context-aware applications.
- Fine-tuning, prompt-engineering, and deploying LLMs (such as OpenAI, Anthropic, Falcon, LLaMA, etc.) for specific domain use cases.
- Collaborating with data scientists, backend developers, and cloud architects to construct scalable AI-first systems.
- Evaluating and integrating third-party models/APIs and open-source libraries for generative use cases.
- Continuously monitoring and enhancing model performance, latency, and accuracy in production environments.
- Implementing observability, performance monitoring, and explainability features in deployed models.
- Ensuring that solutions meet enterprise-level criteria for reliability, traceability, and maintainability.

To excel in this role, you should possess:
- A Master's or Bachelor's degree in Computer Science, Machine Learning, AI, or a related field.
- Exposure to AI/ML, with expertise in NLP and Generative AI.
- A solid understanding of LLM architectures, fine-tuning methods (such as LoRA and PEFT), embeddings, and vector search.
- Previous experience in designing and deploying RAG pipelines and working with multi-step agent architectures.
- Proficiency in Python and frameworks such as LangChain, Transformers (Hugging Face), LlamaIndex, SmolAgents, etc.
- Familiarity with ML observability and explainability tools (e.g., TruEra, Arize, WhyLabs).
- Knowledge of cloud-based ML services such as AWS SageMaker, AWS Bedrock, Azure OpenAI Service, Azure ML Studio, and Azure AI Foundry.
- Hands-on experience integrating LLM-based agents in production settings.
- An understanding of real-time NLP challenges (streaming, latency optimization, multi-turn dialogues).
- Familiarity with LangGraph, function calling, and orchestration tools for agent-based systems.
- Exposure to infrastructure-as-code (Terraform/CDK) and DevOps for AI pipelines.
- Domain knowledge in Electrification, Energy, or Industrial AI would be advantageous.

Join us in Bangalore and be part of a team that is shaping the future of entire cities, countries, and beyond. At Siemens, we are a diverse community of over 312,000 minds working together to build a better tomorrow. We value equality and encourage applications from individuals who reflect the diversity of the communities we serve. Our employment decisions are based on qualifications, merit, and business requirements. Bring your curiosity and creativity to Siemens and be a part of shaping tomorrow with us. Explore more about Siemens careers at www.siemens.com/careers and discover the digital world of Siemens at www.siemens.com/careers/digitalminds.