India
Not disclosed
Remote
Part Time
Tech Intervu is on the hunt for a Java Content Curator who's passionate about backend technologies and deeply familiar with technical interview processes at top tech companies. In this remote, part-time role, you'll be responsible for creating, curating, and structuring high-quality Java interview prep content, ranging from core Java concepts to system design and real-world problem-solving. Your insights will shape the learning experience for thousands of aspiring software engineers preparing for top-tier tech interviews.

What You'll Do:
- Curate and develop Java interview questions, coding challenges, and topic-based learning modules.
- Structure content into 30-60-90 day prep plans, cheat sheets, and hands-on practice sets.
- Collaborate with the Tech Intervu team to align content with real-world hiring expectations.
- Stay updated on the latest trends in Java, backend interviews, and tech hiring.

Ideal Candidate:
- 2+ years of experience with Java development (industry or academic).
- Deep understanding of core Java concepts, data structures, OOP, multithreading, memory management, and performance tuning.
- Strong communication skills and a knack for breaking down complex topics clearly.
- Bonus: experience in mentoring, teaching, or creating tech interview content.

Role Type: Remote | Part-Time | Flexible Hours

Perks:
- Work from anywhere, anytime.
- Contribute to a fast-growing platform shaping the future of technical interviews.
- Get recognized for your expertise with published content and community exposure.

Join Tech Intervu and help engineers ace their Java interviews with confidence. Apply now and be part of something impactful.
Karnataka
INR Not disclosed
On-site
Full Time
As a talented and driven Backend Engineer with a solid understanding of data engineering workflows, you will play a crucial role in our team. Your expertise in Python, preferably with FastAPI, along with knowledge of SQL and NoSQL databases, will be instrumental in designing robust backend services and contributing to the development of high-performance data pipelines. You will have the unique opportunity to work at the intersection of API development and data systems, where you will help build the infrastructure supporting our data-driven applications.

Your responsibilities will include designing, developing, and maintaining backend services, building and consuming RESTful APIs, and working with both SQL and NoSQL databases for efficient data storage and modeling. You will also develop and manage ETL/ELT data pipelines, collaborate with cross-functional teams to integrate third-party APIs and data sources, and ensure the scalability, performance, and reliability of backend systems. Your participation in code reviews, architectural discussions, and technical design will be invaluable to the team.

To excel in this role, you should possess proficiency in Python, experience with FastAPI or similar frameworks, a strong understanding of REST API design and best practices, and hands-on experience with relational and non-relational databases. Familiarity with data engineering concepts, software engineering principles, and version control is also essential. Preferred qualifications include exposure to cloud platforms like AWS, GCP, or Azure, familiarity with containerization tools such as Docker and Kubernetes, and experience working in agile teams and CI/CD environments.

If you are passionate about building scalable systems and enabling data-driven applications, we are excited to hear from you.
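For illustration, a minimal sketch of the kind of FastAPI service this role describes: one POST and one GET endpoint backed by a relational store. The items table, its fields, and the SQLite database file are hypothetical; SQLite merely stands in for the SQL databases the posting mentions.

```python
# Minimal FastAPI sketch (illustrative assumptions: "items" table, demo.db).
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
import sqlite3

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

@app.post("/items")
def create_item(item: Item):
    # Context manager commits on success; per-request connection keeps it simple.
    with sqlite3.connect("demo.db") as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS items (name TEXT, price REAL)")
        conn.execute("INSERT INTO items VALUES (?, ?)", (item.name, item.price))
    return {"status": "created", "item": item}

@app.get("/items/{name}")
def read_item(name: str):
    with sqlite3.connect("demo.db") as conn:
        row = conn.execute(
            "SELECT name, price FROM items WHERE name = ?", (name,)
        ).fetchone()
    if row is None:
        raise HTTPException(status_code=404, detail="item not found")
    return {"name": row[0], "price": row[1]}
```

Run with `uvicorn app:app` (assuming the file is app.py); a production service would swap SQLite for PostgreSQL or MySQL behind the same endpoints.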
Karnataka
INR Not disclosed
On-site
Full Time
As a Data Scientist specializing in Natural Language Processing (NLP) and Large Language Models (LLMs), you will play a crucial role in designing, fine-tuning, and deploying cutting-edge open-source and API-based LLMs to address real-world challenges. Your primary focus will be on creating robust GenAI pipelines, innovative internal tools, and engaging client-facing applications. You will have the exciting opportunity to work at the forefront of AI technology, contributing to the advancement of intelligent systems through the use of Retrieval-Augmented Generation (RAG) frameworks, vector databases, and real-time inference APIs.

Your responsibilities will include fine-tuning and optimizing open-source LLMs for specific business domains, constructing and managing RAG pipelines using tools like LangChain, FAISS, and ChromaDB, and developing LLM-powered APIs for diverse applications such as chat, Q&A, summarization, and classification. You will also design effective prompt templates and implement chaining strategies to enhance LLM performance across various contexts.

To excel in this role, you should possess a strong foundation in NLP fundamentals and deep learning techniques for text data, hands-on experience with LLM frameworks like Hugging Face Transformers or OpenAI APIs, and familiarity with tools such as LangChain, FAISS, and ChromaDB. Proficiency in developing REST APIs to serve ML models, expertise in Python with libraries like PyTorch or TensorFlow, and a solid grasp of data structures, embedding techniques, and vector search systems are also essential.

Preferred qualifications include prior experience in LLM fine-tuning and evaluation, exposure to cloud-based ML deployment (AWS, GCP, Azure), and a background in information retrieval, question answering, or semantic search. If you are passionate about generative AI and eager to contribute to the latest developments in NLP and LLMs, we are excited to connect with you.
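As a hedged illustration of the RAG pattern this posting describes, here is a toy retrieval loop: embed documents, find the nearest one to a query by cosine similarity, and prepend it to the prompt. The hash-seeded embed() is a stand-in for a real embedding model, the brute-force numpy scan stands in for a vector store such as FAISS or ChromaDB, and the returned prompt would be sent to an LLM API in a real pipeline.

```python
# Toy Retrieval-Augmented Generation loop (illustrative only).
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Pseudo-embedding, deterministic within one process run; a real system
    # would call an embedding model (e.g. a sentence encoder) here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

docs = [
    "FAISS performs fast similarity search over dense vectors.",
    "ChromaDB is an embedding database for LLM applications.",
    "LangChain helps compose LLM calls into pipelines.",
]
index = np.stack([embed(d) for d in docs])  # one row per document

def retrieve(query: str) -> str:
    scores = index @ embed(query)  # dot product = cosine sim on unit vectors
    return docs[int(np.argmax(scores))]

def answer(query: str) -> str:
    context = retrieve(query)
    prompt = f"Context: {context}\nQuestion: {query}\nAnswer:"
    return prompt  # a real pipeline would send this prompt to an LLM API

print(answer("What does FAISS do?"))
```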
Karnataka
INR Not disclosed
On-site
Full Time
As a talented and driven Backend Engineer with a solid understanding of data engineering workflows, you will be responsible for designing robust backend services and contributing to the development and maintenance of high-performance data pipelines. Your expertise in Python (preferably with FastAPI) and working knowledge of both SQL and NoSQL databases will be crucial in this role. You will have the unique opportunity to work at the intersection of API development and data systems, helping to build the infrastructure that powers our data-driven applications.

Key Responsibilities:
- Design, develop, and maintain backend services using Python and FastAPI
- Build and consume RESTful APIs for internal tools and external integrations
- Work with SQL and NoSQL databases for efficient data storage and modeling
- Develop and manage ETL/ELT data pipelines to handle structured and unstructured data (see the sketch after this posting)
- Collaborate with cross-functional teams to integrate with third-party APIs and data sources
- Ensure the scalability, performance, and reliability of backend systems
- Participate in code reviews, architectural discussions, and technical design

Required Skills:
- Proficiency in Python, with experience in FastAPI or similar frameworks
- Strong understanding of REST API design and best practices
- Experience working with relational (PostgreSQL, MySQL) and non-relational (MongoDB, Redis) databases
- Hands-on experience in designing and managing ETL/ELT pipelines
- Familiarity with data engineering concepts such as data modeling, transformations, and data integration
- Solid understanding of software engineering principles and version control (Git)

Preferred Qualifications:
- Exposure to cloud platforms like AWS, GCP, or Azure
- Familiarity with containerization tools (Docker, Kubernetes)
- Experience working in agile teams and CI/CD environments

If you are passionate about building scalable systems and enabling data-driven applications, we would love to hear from you.
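A minimal sketch of the ETL pattern listed in the responsibilities above: extract rows from a CSV, normalize them, and load them into SQLite. The file name, column names, and sales table are illustrative assumptions, not the employer's actual schema or pipeline.

```python
# Minimal extract-transform-load sketch; sales.csv, its columns, and the
# "sales" table are hypothetical examples.
import csv
import sqlite3

def extract(path: str):
    # Stream rows as dicts keyed by the CSV header.
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(rows):
    # Normalize text fields and coerce amounts to floats.
    for row in rows:
        yield {"name": row["name"].strip().lower(),
               "amount": float(row["amount"])}

def load(rows, db: str = "warehouse.db"):
    with sqlite3.connect(db) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS sales (name TEXT, amount REAL)")
        conn.executemany("INSERT INTO sales VALUES (:name, :amount)", rows)

if __name__ == "__main__":
    load(transform(extract("sales.csv")))
```

Because each stage is a generator, rows stream through the pipeline without loading the whole file into memory, which is the usual shape of a larger ETL/ELT job as well.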
Karnataka
INR Not disclosed
On-site
Full Time
We are seeking a dedicated Data Scientist with a strong background in Natural Language Processing (NLP) and expertise in Large Language Models (LLMs). As part of our team, you will play a crucial role in the development, optimization, and implementation of open-source and API-based LLMs to address real-world challenges. Your primary responsibilities will revolve around building resilient GenAI pipelines, innovative internal tools, and customer-centric applications. This position offers a remarkable chance to be at the forefront of Artificial Intelligence advancements and make significant contributions to the evolution of intelligent systems through Retrieval-Augmented Generation (RAG) frameworks, vector databases, and real-time inference APIs.

Your responsibilities will include fine-tuning and enhancing open-source LLMs tailored to specific business sectors, building and managing RAG pipelines with tools like LangChain, FAISS, and ChromaDB, and creating LLM-powered APIs for applications such as chatbots, Q&A systems, summarization, and classification. You will also design effective prompt templates and implement chaining strategies to improve LLM performance across diverse contexts.

To excel in this role, you must possess a deep understanding of NLP principles and advanced deep learning techniques for text data, hands-on experience with LLM frameworks like Hugging Face Transformers or OpenAI APIs, familiarity with tools such as LangChain, FAISS, and ChromaDB, proficiency in developing REST APIs for machine learning models, strong Python skills with libraries such as PyTorch or TensorFlow, and a solid grasp of data structures, embedding techniques, and vector search systems.

Desirable qualifications include prior experience in LLM fine-tuning and evaluation, exposure to cloud-based ML deployment on platforms like AWS, GCP, or Azure, and a background in information retrieval, question answering, or semantic search. If you are passionate about generative AI and eager to work with cutting-edge NLP and LLM technologies, we are excited to connect with you.
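To make the prompt-template and chaining idea concrete, a small sketch: two templates chained so the output of a summarization call feeds a classification call. call_llm() is a hypothetical placeholder; a real deployment would call an inference API such as OpenAI's or a self-hosted Hugging Face model at that point.

```python
# Prompt templates chained in two steps: summarize, then classify.
# call_llm() is a placeholder for a real LLM inference API.
SUMMARIZE = "Summarize in one sentence:\n{text}"
CLASSIFY = "Classify the sentiment of this summary as positive or negative:\n{summary}"

def call_llm(prompt: str) -> str:
    # Placeholder: echoes a truncated prompt; swap in a real API call here.
    return f"[LLM output for: {prompt[:40]}...]"

def chain(text: str) -> str:
    summary = call_llm(SUMMARIZE.format(text=text))
    return call_llm(CLASSIFY.format(summary=summary))

print(chain("The product launch exceeded every internal forecast."))
```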