
5906 Retrieval Jobs - Page 36

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Hyderābād

On-site

Minimum qualifications:
- Bachelor's degree, or equivalent practical experience.
- 5 years of experience with software development in one or more programming languages (e.g., Python, C, C++, Java, JavaScript).
- 3 years of experience in a technical leadership role overseeing strategic projects, with 2 years of experience in a people-management or team-leadership role.

Preferred qualifications:
- Master's degree or PhD in Computer Science or a related technical field.
- 3 years of experience working in a complex, matrixed organization.

About the job

Like Google's own ambitions, the work of a Software Engineer (SWE) goes far beyond Search. SWE Managers not only have the technical expertise to take on and provide technical leadership for major projects, but also manage a team of engineers. You not only optimize your own code but make sure engineers are able to optimize theirs. As a SWE Manager you manage your project goals, contribute to product strategy, and help develop your team. SWE teams work all across the company, in areas such as information retrieval, artificial intelligence, natural language processing, distributed computing, large-scale system design, networking, security, data compression, and user interface design; the list goes on and is growing every day. Operating with scale and speed, our exceptional software engineers are just getting started, and as a manager, you guide the way.

At Corp Eng, we build world-leading business solutions that scale a more helpful Google for everyone. As Google's IT organization, we provide end-to-end solutions for organizations across Google. We deliver the right tools, platforms, and experiences for all Googlers as they create more helpful products and services for everyone. In the simplest terms, we are Google for Googlers.

Responsibilities:
- Set and communicate team priorities that support the broader organization's goals. Align strategy, processes, and decision-making across teams.
- Set clear expectations with individuals based on their level and role, aligned to the broader organization's goals. Meet regularly with individuals to discuss performance and development, and provide feedback and coaching.
- Develop the mid-term technical vision and roadmap within the scope of your (often multiple) team(s). Evolve the roadmap to meet anticipated future requirements and infrastructure needs.
- Design, guide, and vet systems designs within the scope of the broader area, and write product or system development code to solve ambiguous problems.
- Review code developed by other engineers and provide feedback to ensure best practices (e.g., style guidelines, checking code in, accuracy, testability, and efficiency).

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 1 week ago

Apply

5.0 years

3 - 7 Lacs

Hyderābād

On-site

Project description

Information and Document Systems is a global technology change and delivery organization of nearly 150 people located in Switzerland, Poland, Singapore, the United Kingdom, and the United States. We provide archiving and retrieval solutions to all business divisions, focusing on supporting Legal, Regulatory, and Operational functions. The platform has a complex architecture based on C-Mod, Unix, Oracle, OpenText, and SAM-FS.

Responsibilities:
- Develop and improve application infrastructure, interacting with developers and production support.
- Configure and improve existing infrastructure; simplify the release process.
- Carry out investigations, research, and activities related to programming and coding.
- Take part in planning and risk assessment.
- Participate actively in a distributed agile process.

Skills

Must have:
- 5+ years of working experience
- Strong Unix experience
- Extensive production support and maintenance experience
- Advanced scripting (Perl, PowerShell, Shell)
- Scripting experience in Java
- Excellent communication, coordination, and troubleshooting skills

Good to have:
- Basic mainframe knowledge
- CMOD/IBM Content Manager OnDemand Server and Client
- Agile experience

Other
Languages: English B2 (Upper Intermediate)
Seniority: Senior
Location: Hyderabad, IN, India
Req. VR-115859 | System Administration | BCM Industry | 11/07/2025
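The "simplifying release process" and maintenance-scripting duties above are usually done in Perl, PowerShell, or Shell on the archive hosts; as a minimal, hedged sketch of the same idea in Python (directory and file names here are purely illustrative), a script that flags files old enough to be archived might look like:

```python
import os
import time
import tempfile
import pathlib

def stale_files(root: str, max_age_days: float) -> list[str]:
    # List files not modified within max_age_days -- candidates for archiving.
    cutoff = time.time() - max_age_days * 86400
    return [str(p) for p in pathlib.Path(root).rglob("*")
            if p.is_file() and p.stat().st_mtime < cutoff]

# Demo against a throwaway directory (paths are hypothetical).
with tempfile.TemporaryDirectory() as d:
    fresh = pathlib.Path(d, "fresh.log")
    fresh.write_text("new")
    old = pathlib.Path(d, "old.log")
    old.write_text("old")
    os.utime(old, (time.time() - 90 * 86400,) * 2)  # backdate by 90 days
    result = stale_files(d, 30)

print(result)  # only old.log is past the 30-day cutoff
```

A real archiving job would then move or compress the flagged files rather than just listing them.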

Posted 1 week ago

Apply

6.0 years

20 - 30 Lacs

Hyderābād

On-site

Role: Gen AI Engineer
Location: Hyderabad
Mode of Work: Hybrid
Notice Period: 0-25 days

Job Description:

Key Responsibilities:
- Design and develop AI models: create architectures, algorithms, and frameworks for generative AI.
- Implement AI models: build and integrate AI models into existing systems and applications.
- Work with LLMs and other AI technologies: use tools and techniques such as LangChain, Haystack, and prompt engineering.
- Preprocess and analyze data: prepare data for use in AI models.
- Collaborate with other teams: work with data scientists, product managers, and other stakeholders.
- Test and deploy AI models: evaluate model performance and deploy models to production environments.
- Monitor and optimize AI models: track model performance, identify issues, and optimize models for better results.
- Stay up to date with the latest advancements in Gen AI: learn about new techniques, models, and frameworks.

Required Skills:
- Strong programming skills in Python: Python is the preferred language for AI development.
- Knowledge of Generative AI, NLP, and LLMs: understand the principles behind these technologies and how to use them effectively.
- Experience with RAG pipelines and vector databases: understand how to build and use retrieval-augmented generation pipelines.
- Familiarity with AI frameworks and libraries such as LangChain, Haystack, and other open-source libraries.
- Understanding of prompt engineering and tokenization: know how to optimize prompts and manage tokenization.
- Experience integrating and fine-tuning AI models: know how to deploy and maintain AI models in production environments.
- Excellent communication and problem-solving skills: able to communicate complex technical concepts to non-technical stakeholders.

Optional Skills:
- Experience with cloud computing platforms (GCP, AWS, Azure): helpful for deploying and managing AI models.
- Familiarity with MLOps practices: helps with building and deploying AI models in a scalable and reliable manner.
- Experience with DevOps practices: helps with automating the development and deployment of AI models.

Job Type: Permanent
Pay: ₹2,000,000.00 - ₹3,000,000.00 per year
Schedule: Day shift
Experience:
- Total: 6 years (Required)
- GenAI: 3 years (Required)
- Python: 3 years (Required)
- OpenAI, Claude, Gemini: 4 years (Required)
- Azure: 3 years (Required)
Work Location: In person
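The RAG-pipeline requirement above can be illustrated with a minimal, dependency-free sketch. The bag-of-words "embedding" and in-memory retriever below are toy stand-ins, not the production approach; a real pipeline would use a framework like LangChain or Haystack with a vector database such as ChromaDB or Pinecone.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term-frequency vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # "Augmented generation": prepend retrieved context to the user question
    # before sending the prompt to an LLM.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Tokenization splits text into model-readable units.",
    "Vector databases store embeddings for similarity search.",
    "Day shift schedules run during daytime hours.",
]
print(build_prompt("How are embeddings stored?", docs))
```

The same three stages (embed, retrieve, assemble prompt) are what the frameworks named above implement at scale.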

Posted 1 week ago

Apply

4.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

Remote

Experience: 4.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position
(Note: This is a requirement for one of Uplers' clients - Forbes Advisor.)

What do you need for this opportunity?
Must-have skills required: TensorFlow, PyTorch, RAG, LangChain

Forbes Advisor is looking for:
Location - Remote (hybrid for candidates from Chennai or Mumbai)

Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news, and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most.

At Marketplace, our mission is to help readers turn their aspirations into reality. We arm people with trusted advice and guidance, so they can make informed decisions they feel confident in and get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology, and Sales. The team brings rich industry knowledge to Marketplace's global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate, and travel.

The Data Extraction Team is a brand-new team that plays a crucial role in our organization by designing, implementing, and overseeing advanced web scraping frameworks. Its core function is creating and refining tools and methodologies to efficiently gather precise and meaningful data from a diverse range of digital platforms. The team also builds robust data pipelines and implements Extract, Transform, Load (ETL) processes, which are essential for seamlessly transferring the harvested data into our data storage systems and ensuring its ready availability for analysis and utilization.

A typical day in the life of a Data Research Engineer involves coming up with ideas for how the company and team can best harness the power of AI/LLMs, using them not only to simplify operations within the team but also to streamline the research team's work in gathering and retrieving large data sets. The role is that of a leader who sets a vision for the future use of AI/LLMs within the team and the company: someone who thinks outside the box, proactively engages with new technologies, and develops new ideas for moving the team forward in the AI/LLM field. The candidate should also be willing to acquire at least basic skills in scraping and data pipelining.

Responsibilities:
- Develop methods to leverage the potential of LLMs and AI within the team.
- Proactively find new solutions that engage the team with AI/LLMs and streamline team processes.
- Be a visionary with AI/LLM tools: anticipate how emerging technologies could be harnessed so that the team is ahead of the game when they arrive.
- Assist in acquiring and integrating data from various sources, including web crawling and API integration.
- Stay updated with emerging technologies and industry trends.
- Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines.
- Contribute to cross-functional teams in understanding data requirements.
- Assume accountability for achieving development milestones.
- Prioritize tasks to ensure timely delivery in a fast-paced environment with rapidly changing priorities.
- Collaborate with and assist fellow members of the Data Research Engineering Team as required.
- Leverage online resources such as StackOverflow, ChatGPT, and Bard effectively, while considering their capabilities and limitations.

Skills and Experience:
- Bachelor's degree in Computer Science, Data Science, or a related field; higher qualifications are a plus.
- Proactive, creative thinking about upcoming AI/LLM technologies and how to use them to the team's and company's benefit; a "think outside the box" mentality.
- Experience prompting LLMs in a streamlined way, accounting for how an LLM can "hallucinate" and return wrong information.
- Experience building agentic AI platforms with modular capabilities and autonomous task execution (CrewAI, LangChain, etc.).
- Proficiency in implementing Retrieval-Augmented Generation (RAG) pipelines for dynamic knowledge integration (ChromaDB, Pinecone, etc.).
- Experience managing a team of AI/LLM experts is a plus, including setting goals and objectives for the team and fine-tuning complex models.
- Strong proficiency in Python programming.
- Proficiency in SQL and data querying is a plus.
- Familiarity with web crawling techniques and API integration is a plus but not a must.
- Experience in AI/ML engineering and data extraction.
- Experience with LLMs and NLP frameworks (spaCy, NLTK, Hugging Face, etc.).
- Strong understanding of machine learning frameworks (TensorFlow, PyTorch).
- Ability to design and build AI models using LLMs, integrate LLM solutions with existing systems via APIs, collaborate with the team to implement and optimize AI solutions, and monitor and improve model performance and accuracy.
- Familiarity with Agile development methodologies is a plus.
- Strong problem-solving and analytical skills with attention to detail.
- Creative and critical thinking.
- Ability to work collaboratively in a team environment.
- Good, effective communication skills.
- Experience with version control systems, such as Git, for collaborative development.
- Ability to thrive in a fast-paced environment with rapidly changing priorities.
- Comfortable with autonomy and able to work independently.

Perks:
- Day off on the 3rd Friday of every month (one long weekend each month)
- Monthly Wellness Reimbursement Program to promote health and well-being
- Monthly Office Commutation Reimbursement Program
- Paid paternity and maternity leaves

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of being shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
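The Extract, Transform, Load flow the team description centers on can be sketched in a few lines of standard-library Python. The record fields and the serialized "load" step are hypothetical placeholders; a real pipeline would pull scraped pages and write into a data warehouse.

```python
import csv
import io
import json

# Hypothetical scraped data, as a CSV string standing in for crawler output.
RAW = """product,price,currency
Gold Card,95.0,USD
Travel Card,,USD
Cash Card,0.0,usd
"""

def extract(raw: str) -> list[dict]:
    # Extract: parse the scraped CSV into records.
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[dict]:
    # Transform: drop incomplete rows, normalize types and casing.
    out = []
    for r in rows:
        if not r["price"]:
            continue  # skip records the scraper failed to fill
        out.append({"product": r["product"],
                    "price": float(r["price"]),
                    "currency": r["currency"].upper()})
    return out

def load(rows: list[dict]) -> str:
    # Load: serialized here for illustration; a real pipeline would
    # insert the rows into a warehouse table instead.
    return json.dumps(rows)

print(load(transform(extract(RAW))))
```

Each stage stays a pure function of its input, which is what makes pipelines like this easy to test and to re-run when a source changes.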

Posted 1 week ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Experience: 4.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position
(Note: This is a requirement for one of Uplers' clients - Forbes Advisor.)

What do you need for this opportunity?
Must-have skills required: TensorFlow, PyTorch, RAG, LangChain

Forbes Advisor is looking for:
Location - Remote (hybrid for candidates from Chennai or Mumbai)

Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news, and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most.

At Marketplace, our mission is to help readers turn their aspirations into reality. We arm people with trusted advice and guidance, so they can make informed decisions they feel confident in and get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology, and Sales. The team brings rich industry knowledge to Marketplace's global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate, and travel.

The Data Extraction Team is a brand-new team that plays a crucial role in our organization by designing, implementing, and overseeing advanced web scraping frameworks. Its core function is creating and refining tools and methodologies to efficiently gather precise and meaningful data from a diverse range of digital platforms. The team also builds robust data pipelines and implements Extract, Transform, Load (ETL) processes, which are essential for seamlessly transferring the harvested data into our data storage systems and ensuring its ready availability for analysis and utilization.

A typical day in the life of a Data Research Engineer involves coming up with ideas for how the company and team can best harness the power of AI/LLMs, using them not only to simplify operations within the team but also to streamline the research team's work in gathering and retrieving large data sets. The role is that of a leader who sets a vision for the future use of AI/LLMs within the team and the company: someone who thinks outside the box, proactively engages with new technologies, and develops new ideas for moving the team forward in the AI/LLM field. The candidate should also be willing to acquire at least basic skills in scraping and data pipelining.

Responsibilities:
- Develop methods to leverage the potential of LLMs and AI within the team.
- Proactively find new solutions that engage the team with AI/LLMs and streamline team processes.
- Be a visionary with AI/LLM tools: anticipate how emerging technologies could be harnessed so that the team is ahead of the game when they arrive.
- Assist in acquiring and integrating data from various sources, including web crawling and API integration.
- Stay updated with emerging technologies and industry trends.
- Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines.
- Contribute to cross-functional teams in understanding data requirements.
- Assume accountability for achieving development milestones.
- Prioritize tasks to ensure timely delivery in a fast-paced environment with rapidly changing priorities.
- Collaborate with and assist fellow members of the Data Research Engineering Team as required.
- Leverage online resources such as StackOverflow, ChatGPT, and Bard effectively, while considering their capabilities and limitations.

Skills and Experience:
- Bachelor's degree in Computer Science, Data Science, or a related field; higher qualifications are a plus.
- Proactive, creative thinking about upcoming AI/LLM technologies and how to use them to the team's and company's benefit; a "think outside the box" mentality.
- Experience prompting LLMs in a streamlined way, accounting for how an LLM can "hallucinate" and return wrong information.
- Experience building agentic AI platforms with modular capabilities and autonomous task execution (CrewAI, LangChain, etc.).
- Proficiency in implementing Retrieval-Augmented Generation (RAG) pipelines for dynamic knowledge integration (ChromaDB, Pinecone, etc.).
- Experience managing a team of AI/LLM experts is a plus, including setting goals and objectives for the team and fine-tuning complex models.
- Strong proficiency in Python programming.
- Proficiency in SQL and data querying is a plus.
- Familiarity with web crawling techniques and API integration is a plus but not a must.
- Experience in AI/ML engineering and data extraction.
- Experience with LLMs and NLP frameworks (spaCy, NLTK, Hugging Face, etc.).
- Strong understanding of machine learning frameworks (TensorFlow, PyTorch).
- Ability to design and build AI models using LLMs, integrate LLM solutions with existing systems via APIs, collaborate with the team to implement and optimize AI solutions, and monitor and improve model performance and accuracy.
- Familiarity with Agile development methodologies is a plus.
- Strong problem-solving and analytical skills with attention to detail.
- Creative and critical thinking.
- Ability to work collaboratively in a team environment.
- Good, effective communication skills.
- Experience with version control systems, such as Git, for collaborative development.
- Ability to thrive in a fast-paced environment with rapidly changing priorities.
- Comfortable with autonomy and able to work independently.

Perks:
- Day off on the 3rd Friday of every month (one long weekend each month)
- Monthly Wellness Reimbursement Program to promote health and well-being
- Monthly Office Commutation Reimbursement Program
- Paid paternity and maternity leaves

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of being shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

5.0 - 7.0 years

8 - 10 Lacs

Thiruvananthapuram

On-site

5 - 7 Years | 1 Opening | Trivandrum

Role description

Role Proficiency: This role requires proficiency in data pipeline development, including coding and testing data pipelines for ingesting, wrangling, transforming, and joining data from various sources. Must be skilled in ETL tools such as Informatica, Glue, Databricks, and DataProc, with coding expertise in Python, PySpark, and SQL. Works independently and has a deep understanding of data warehousing solutions, including Snowflake, BigQuery, Lakehouse, and Delta Lake. Capable of calculating costs and understanding performance issues related to data solutions.

Outcomes:
- Act creatively to develop pipelines and applications by selecting appropriate technical options, optimizing application development, maintenance, and performance using design patterns and reusing proven solutions.
- Interpret requirements to create optimal architecture and design, developing solutions in accordance with specifications.
- Document and communicate milestones/stages for end-to-end delivery.
- Code adhering to best coding standards; debug and test solutions to deliver best-in-class quality.
- Perform performance tuning of code and align it with the appropriate infrastructure to optimize efficiency.
- Validate results with user representatives, integrating the overall solution seamlessly.
- Develop and manage data storage solutions, including relational databases, NoSQL databases, and data lakes.
- Stay updated on the latest trends and best practices in data engineering, cloud technologies, and big data tools.
- Influence and improve customer satisfaction through effective data solutions.

Measures of Outcomes:
- Adherence to engineering processes and standards
- Adherence to schedule/timelines
- Adherence to SLAs where applicable
- Number of defects post delivery
- Number of non-compliance issues
- Reduction of recurrence of known defects
- Quick turnaround of production bugs
- Completion of applicable technical/domain certifications
- Completion of all mandatory training requirements
- Efficiency improvements in data pipelines (e.g., reduced resource consumption, faster run times)
- Average time to detect, respond to, and resolve pipeline failures or data issues
- Number of data security incidents or compliance breaches

Outputs Expected:

Code Development: Develop data processing code independently, ensuring it meets performance and scalability requirements. Define coding standards, templates, and checklists. Review code for team members and peers.

Documentation: Create and review templates, checklists, guidelines, and standards for design processes and development. Create and review deliverable documents, including design documents, architecture documents, infrastructure costing, business requirements, source-target mappings, and test cases and results.

Configuration: Define and govern the configuration management plan. Ensure compliance within the team.

Testing: Review and create unit test cases, scenarios, and execution plans. Review the test plan and test strategy developed by the testing team. Provide clarifications and support to the testing team as needed.

Domain Relevance: Advise data engineers on the design and development of features and components, demonstrating a deeper understanding of business needs. Learn about customer domains to identify opportunities for value addition. Complete relevant domain certifications to enhance expertise.

Project Management: Manage the delivery of modules effectively.

Defect Management: Perform root cause analysis (RCA) and mitigation of defects. Identify defect trends and take proactive measures to improve quality.

Estimation: Create and provide input for effort and size estimation for projects.

Knowledge Management: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team.

Release Management: Execute and monitor the release process to ensure smooth transitions.

Design Contribution: Contribute to the creation of high-level design (HLD), low-level design (LLD), and system architecture for applications, business components, and data models.

Customer Interface: Clarify requirements and provide guidance to the development team. Present design options to customers and conduct product demonstrations.

Team Management: Set FAST goals and provide constructive feedback. Understand team members' aspirations and provide guidance and opportunities for growth. Ensure team engagement in projects and initiatives.

Certifications: Obtain relevant domain and technology certifications to stay competitive and informed.

Skill Examples:
- Proficiency in SQL, Python, or other programming languages used for data manipulation.
- Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery).
- Ability to conduct tests on data pipelines and evaluate results against data quality and performance specifications.
- Experience in performance tuning of data processes.
- Expertise in designing and optimizing data warehouses for cost efficiency.
- Ability to apply and optimize data models for efficient storage, retrieval, and processing of large datasets.
- Capacity to clearly explain and communicate design and development aspects to customers.
- Ability to estimate time and resource requirements for developing and debugging features or components.

Knowledge Examples:
- Knowledge of various ETL services offered by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/DataFlow, Azure ADF, and ADLF.
- Proficiency in SQL for analytics, including windowing functions.
- Understanding of data schemas and models relevant to various business contexts.
- Familiarity with domain-related data and its implications.
- Expertise in data warehousing optimization techniques.
- Knowledge of data security concepts and best practices.
- Familiarity with design patterns and frameworks in data engineering.

Additional Comments: UST is seeking a highly skilled and motivated Lead Data Engineer to join our Telecommunications vertical, leading impactful data engineering initiatives for US-based Telco clients. The ideal candidate will have 6-8 years of experience in designing and developing scalable data pipelines using Snowflake, Azure Data Factory, and Azure Databricks. Proficiency in Python, PySpark, and advanced SQL is essential, with a strong focus on query optimization, performance tuning, and cost-effective architecture. A solid understanding of data integration, real-time and batch processing, and metadata management is required, along with experience in building robust ETL/ELT workflows. Candidates should demonstrate a strong commitment to data quality, validation, and consistency; working knowledge of data governance, RBAC, encryption, and compliance frameworks is considered a plus. Familiarity with Power BI or similar BI tools is also advantageous, enabling effective data visualization and storytelling. The role demands the ability to work in a dynamic, fast-paced environment, collaborating closely with stakeholders and cross-functional teams while also being capable of working independently. Strong communication skills and the ability to coordinate across multiple teams and stakeholders are critical for success.

In addition to technical expertise, the candidate should bring experience in solution design and architecture planning, contributing to scalable and future-ready data platforms. A proactive mindset, eagerness to learn, and adaptability to the rapidly evolving data engineering landscape, including AI integration into data workflows, are highly valued. This is a leadership role that involves mentoring junior engineers, fostering innovation, and driving continuous improvement in data engineering practices.

Skills: Azure Databricks, Snowflake, Python, Data Engineering

About UST: UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.
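The "SQL for analytics including windowing functions" skill mentioned in this listing can be illustrated with a short, self-contained example. The runs table and its values are hypothetical, and this assumes an SQLite build with window-function support (3.25+, bundled with all recent Python releases).

```python
import sqlite3

# Hypothetical pipeline-run timings to rank with a window function.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE runs (pipeline TEXT, run_date TEXT, seconds REAL)")
con.executemany("INSERT INTO runs VALUES (?, ?, ?)", [
    ("ingest", "2025-07-01", 120.0),
    ("ingest", "2025-07-02", 95.0),
    ("ingest", "2025-07-03", 140.0),
    ("transform", "2025-07-01", 300.0),
    ("transform", "2025-07-02", 280.0),
])

# ROW_NUMBER() ranks runs within each pipeline, longest first.
rows = con.execute("""
    SELECT pipeline, run_date, seconds,
           ROW_NUMBER() OVER (PARTITION BY pipeline ORDER BY seconds DESC) AS rnk
    FROM runs
""").fetchall()

slowest = [r for r in rows if r[3] == 1]  # slowest run per pipeline
print(slowest)
```

The same PARTITION BY / ORDER BY pattern carries over directly to Snowflake, BigQuery, and Spark SQL, which is why it shows up in data-engineering screens.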

Posted 1 week ago

Apply

4.0 years

0 Lacs

Vellore, Tamil Nadu, India

Remote

Experience : 4.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Forbes Advisor) What do you need for this opportunity? Must have skills required: TensorFlow, PyTorch, rag, LangChain Forbes Advisor is Looking for: Location - Remote (For candidate's from Chennai or Mumbai it's hybrid) Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most. At Marketplace, our mission is to help readers turn their aspirations into reality. We arm people with trusted advice and guidance, so they can make informed decisions they feel confident in and get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. The team brings rich industry knowledge to Marketplace’s global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel. The Data Extraction Team is a brand-new team who plays a crucial role in our organization by designing, implementing, and overseeing advanced web scraping frameworks. 
Their core function involves creating and refining tools and methodologies to efficiently gather precise and meaningful data from a diverse range of digital platforms. Additionally, this team is tasked with constructing robust data pipelines and implementing Extract, Transform, Load (ETL) processes. These processes are essential for seamlessly transferring the harvested data into our data storage systems, ensuring its ready availability for analysis and utilization. A typical day in the life of a Data Research Engineer will involve coming up with ideas on how the company and team can best harness the power of AI/LLMs, using them not only to simplify operations within the team but also to streamline the work of the research team in gathering and retrieving large sets of data. The role is that of a leader who sets a vision for the future of AI/LLM use within the team and the company. They think outside the box and are proactive in engaging with new technologies and developing new ideas for the team to move forward in the AI/LLM field. The candidate should also be willing to acquire at least basic skills in scraping and data pipelining. Responsibilities: Develop methods to leverage the potential of LLMs and AI within the team. Proactively find new solutions to engage the team with AI/LLMs and streamline team processes. Be a visionary with AI/LLM tools: anticipate how future technologies could be harnessed, so that when they arrive the team is already ahead of the game. Assist in acquiring and integrating data from various sources, including web crawling and API integration. Stay updated with emerging technologies and industry trends. Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines. Contribute to cross-functional teams in understanding data requirements. Assume accountability for achieving development milestones.
Prioritize tasks to ensure timely delivery in a fast-paced environment with rapidly changing priorities. Collaborate with and assist fellow members of the Data Research Engineering Team as required. Leverage online resources effectively, like StackOverflow, ChatGPT, Bard, etc., while considering their capabilities and limitations. Skills And Experience Bachelor's degree in Computer Science, Data Science, or a related field. A higher qualification is a plus. Think proactively and creatively about upcoming AI/LLM technologies and how to use them to the team's and company's benefit; a "think outside the box" mentality. Experience prompting LLMs in a streamlined way, taking into account how the LLM can potentially "hallucinate" and return wrong information. Experience building agentic AI platforms with modular capabilities and autonomous task execution (CrewAI, LangChain, etc.). Proficient in implementing Retrieval-Augmented Generation (RAG) pipelines for dynamic knowledge integration (ChromaDB, Pinecone, etc.). Experience managing a team of AI/LLM experts is a plus: this includes setting up goals and objectives for the team and fine-tuning complex models. Strong proficiency in Python programming. Proficiency in SQL and data querying is a plus. Familiarity with web crawling techniques and API integration is a plus but not a must. Experience in AI/ML engineering and data extraction. Experience with LLMs and NLP frameworks (spaCy, NLTK, Hugging Face, etc.). Strong understanding of machine learning frameworks (TensorFlow, PyTorch). Design and build AI models using LLMs. Integrate LLM solutions with existing systems via APIs. Collaborate with the team to implement and optimize AI solutions. Monitor and improve model performance and accuracy. Familiarity with Agile development methodologies is a plus. Strong problem-solving and analytical skills with attention to detail. Creative and critical thinking. Ability to work collaboratively in a team environment.
Good and effective communication skills. Experience with version control systems, such as Git, for collaborative development. Ability to thrive in a fast-paced environment with rapidly changing priorities. Comfortable with autonomy and ability to work independently. Perks: Day off on the 3rd Friday of every month (one long weekend each month) Monthly Wellness Reimbursement Program to promote health and well-being Monthly Office Commutation Reimbursement Program Paid paternity and maternity leaves How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
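For candidates unfamiliar with the Retrieval-Augmented Generation pipelines this posting asks about, the following is a minimal, illustrative sketch of the retrieve-then-prompt shape. A toy bag-of-words similarity stands in for real model embeddings, and the documents and query are invented examples; a production pipeline would store embeddings in a vector database such as ChromaDB or Pinecone, as the posting notes.

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real pipelines use model embeddings.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Mortgage rates rose this week across most lenders.",
    "The best travel credit cards offer lounge access.",
    "Health insurance premiums vary by state and age.",
]
context = retrieve("what are current mortgage rates", docs)
# The retrieved context is then injected into the LLM prompt:
prompt = f"Answer using this context:\n{context[0]}\n\nQuestion: ..."
```

The "augmentation" step is simply that last string: the model answers from retrieved context rather than from parametric memory alone, which is what mitigates the hallucination risk the posting mentions.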

Posted 1 week ago

Apply

25.0 years

2 - 4 Lacs

Cochin

On-site

Company Overview Milestone Technologies is a global IT managed services firm that partners with organizations to scale their technology, infrastructure and services to drive specific business outcomes such as digital transformation, innovation, and operational agility. Milestone is focused on building an employee-first, performance-based culture, and for over 25 years we have a demonstrated history of supporting category-defining enterprise clients that are growing ahead of the market. The company specializes in providing solutions across Application Services and Consulting, Digital Product Engineering, Digital Workplace Services, Private Cloud Services, AI/Automation, and ServiceNow. Milestone's culture is built to provide a collaborative, inclusive environment that supports employees and empowers them to reach their full potential. Our seasoned professionals deliver services based on Milestone's best practices and service delivery framework. By leveraging our vast knowledge base to execute initiatives, we deliver both short-term and long-term value to our clients and apply continuous service improvement to deliver transformational benefits to IT. With Intelligent Automation, Milestone helps businesses further accelerate their IT transformation. The result is a sharper focus on business objectives and a dramatic improvement in employee productivity. Through our key technology partnerships and our people-first approach, Milestone continues to deliver industry-leading innovation to our clients. With more than 3,000 employees serving over 200 companies worldwide, we are following our mission of revolutionizing the way IT is deployed. Job Overview We are seeking a Full Stack Developer with a minimum of 5 years of experience in Python, React, and AI/ML, who also has hands-on experience with application hosting on cloud platforms (VMs, App Services, Containers).
This is a lead role where you will guide a team of 5 developers and work on building and deploying modern, intelligent web applications using AWS, Azure, and scalable backend/frontend architecture. Responsibilities: Lead a team of 5 engineers across backend, frontend, and AI/ML components. Design and develop scalable full stack solutions using Python (FastAPI/Django/Flask) and React.js. Deploy and host applications using VMs (EC2, Azure VMs), App Services, and Containers (Docker/K8s). Integrate and operationalize ML/LLM models into production systems. Own infrastructure setup for CI/CD, application monitoring, and secure deployments. Collaborate cross-functionally with data scientists, DevOps engineers, and business stakeholders. Conduct code reviews, lead sprint planning, and ensure delivery velocity. Tech Stack & Tools: Frontend: React, Redux, Tailwind CSS / Material UI. Backend: Python (FastAPI/Django/Flask), REST APIs. AI/ML: scikit-learn, TensorFlow, PyTorch, Hugging Face, LangChain. Cloud: AWS (EC2, Lambda, S3, RDS, SageMaker, EKS, Elastic Beanstalk); Azure (App Services, AKS, Azure ML, Azure Functions, Azure VMs). LLM: OpenAI / Azure OpenAI (GPT-4, GPT-3.5), Cohere; Hugging Face Transformers; LangChain / LlamaIndex / Haystack; Vector DBs: Chroma, Pinecone, FAISS, Weaviate, Qdrant; RAG (Retrieval-Augmented Generation) pipelines. App Hosting: VMs (EC2, Azure VMs), Azure App Services, Docker, Kubernetes. Database: PostgreSQL, MongoDB, Redis. DevOps: GitHub Actions, Jenkins, Terraform (optional), Monitoring (e.g., Prometheus, Azure Monitor). Tools: Git, Jira, Confluence, Slack. Key Requirements: 5-8 years of experience in full stack development with Python and React. Proven experience in deploying and managing applications on VMs, App Services, Docker/Kubernetes. Strong cloud experience on both AWS and Azure platforms. Solid understanding of AI/ML integration into web apps (end-to-end lifecycle). Experience leading small engineering teams and delivering high-quality products. Strong communication, collaboration, and mentoring skills. LLM and Generative AI exposure (OpenAI, Azure OpenAI, RAG pipelines). Familiarity with vector search engines. Microservices architecture and message-driven systems (Kafka/Event Grid). Security-first mindset and hands-on experience with authentication/authorization flows. Lead Full Stack Developer - Python, React, AI/ML. Location: Kochi. Experience: 5+ years. Team Leadership: Yes, team of 5 developers. Employment Type: Full-time. Compensation (Estimated Pay Range): Exact compensation and offers of employment are dependent on the circumstances of each case and will be determined based on job-related knowledge, skills, experience, licenses or certifications, and location. Our Commitment to Diversity & Inclusion: At Milestone we strive to create a workplace that reflects the communities we serve and work with, where we all feel empowered to bring our full, authentic selves to work. We know creating a diverse and inclusive culture that champions equity and belonging is not only the right thing to do for our employees but is also critical to our continued success. Milestone Technologies provides equal employment opportunity for all applicants and employees. All qualified applicants will receive consideration for employment and will not be discriminated against on the basis of race, color, religion, gender, gender identity, marital status, age, disability, veteran status, sexual orientation, national origin, or any other category protected by applicable federal and state law, or local ordinance. Milestone also makes reasonable accommodations for disabled applicants and employees. We welcome the unique background, culture, experiences, knowledge, innovation, self-expression and perspectives you can bring to our global community. Our recruitment team is looking forward to meeting you.

Posted 1 week ago

Apply

3.0 - 4.0 years

1 - 7 Lacs

India

On-site

Job Title: Python Backend Developer (Data Layer) Location: Mohali, Punjab Company: RevClerx About RevClerx: RevClerx Pvt. Ltd., founded in 2017 and based in the Chandigarh/Mohali area (India), is a dynamic Information Technology firm providing comprehensive IT services with a strong focus on client-centric solutions. As a global provider, we cater to diverse business needs including website designing and development, digital marketing, lead generation services (including telemarketing and qualification), and appointment setting. Job Summary: We are seeking a skilled Python Backend Developer with a strong passion and proven expertise in database design and implementation. This role requires 3-4 years of backend development experience, focusing on building robust, scalable applications and APIs. The ideal candidate will not only be proficient in Python and common backend frameworks but will possess significant experience in designing, modeling, and optimizing various database solutions, including relational databases (like PostgreSQL) and, crucially, graph databases (specifically Neo4j). You will play a vital role in architecting the data layer of our applications, ensuring efficiency, scalability, and the ability to handle complex, interconnected data. Key Responsibilities: ● Design, develop, test, deploy, and maintain scalable and performant Python-based backend services and APIs. ● Lead the design and implementation of database schemas for relational (e.g., PostgreSQL) and NoSQL databases, with a strong emphasis on Graph Databases (Neo4j). ● Model complex data relationships and structures effectively, particularly leveraging graph data modeling principles where appropriate. ● Write efficient, optimized database queries (SQL, Cypher, potentially others). ● Develop and maintain data models, ensuring data integrity, consistency, and security. ● Optimize database performance through indexing strategies, query tuning, caching mechanisms, and schema adjustments. 
● Collaborate closely with product managers, frontend developers, and other stakeholders to understand data requirements and translate them into effective database designs. ● Implement data migration strategies and scripts as needed. ● Integrate various databases seamlessly with Python backend services using ORMs (like SQLAlchemy, Django ORM) or native drivers. ● Write unit and integration tests, particularly focusing on data access and manipulation logic. ● Contribute to architectural decisions, especially concerning data storage, retrieval, and processing. ● Stay current with best practices in database technologies, Python development, and backend systems. Minimum Qualifications: ● Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field, OR equivalent practical experience. ● 3-4 years of professional software development experience with a primary focus on Python backend development. ● Strong proficiency in Python and its standard libraries. ● Proven experience with at least one major Python web framework (e.g., Django, Flask, FastAPI). ● Demonstrable, hands-on experience designing, implementing, and managing relational databases (e.g., PostgreSQL). ● Experience with at least one NoSQL database (e.g., MongoDB, Redis, Cassandra). ● Solid understanding of data structures, algorithms, and object-oriented programming principles. ● Experience designing and consuming RESTful APIs. ● Proficiency with version control systems, particularly Git. ● Strong analytical and problem-solving skills, especially concerning data modeling and querying. ● Excellent communication and teamwork abilities. Preferred (Good-to-Have) Qualifications: ● Graph Database Expertise: ○ Significant, demonstrable experience designing and implementing solutions using Graph Databases (Neo4j strongly preferred). ○ Proficiency in graph query languages, particularly Cypher. 
○ Strong understanding of graph data modeling principles, use cases (e.g., recommendation engines, fraud detection, knowledge graphs, network analysis), and trade-offs. ● Advanced Database Skills: ○ Experience with database performance tuning and monitoring tools. ○ Experience with Object-Relational Mappers (ORMs) like SQLAlchemy or Django ORM in depth. ○ Experience implementing data migration strategies for large datasets. ● Cloud Experience: Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud Platform) and their managed database services (e.g., RDS, Aurora, Neptune, DocumentDB, MemoryStore). ● Containerization & Orchestration: Experience with Docker and Kubernetes. ● Asynchronous Programming: Experience with Python's asyncio and async frameworks. ● Data Pipelines: Familiarity with ETL processes or data pipeline tools (e.g., Apache Airflow). ● Testing: Experience writing tests specifically for database interactions and data integrity. What We Offer: ● Challenging projects with opportunities to work on cutting-edge technologies especially in the field of AI. ● Competitive salary and comprehensive benefits package. ● Opportunities for professional development and learning (e.g., conferences, courses, certifications). ● A collaborative, innovative, and supportive work environment. How to Apply: Interested candidates are invited to submit their resume and a cover letter outlining their relevant experience, specifically highlighting their database design expertise (including relational, NoSQL, and especially Graph DB/Neo4j experience) to Job Type: Full-time Pay: ₹14,154.00 - ₹65,999.72 per month Benefits: Food provided Health insurance Location Type: In-person Schedule: Morning shift Work Location: In person
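For applicants new to the graph-modeling use cases this posting lists (recommendation engines, fraud detection), here is a framework-free sketch of the classic friend-of-friend recommendation pattern. The in-memory dictionary and user names are invented for illustration; in Neo4j the same query would be expressed in Cypher, roughly `MATCH (u:User {name:$n})-[:FOLLOWS]->()-[:FOLLOWS]->(rec) WHERE NOT (u)-[:FOLLOWS]->(rec) AND rec <> u RETURN rec.name`.

```python
# Adjacency-set stand-in for a graph database's FOLLOWS relationships.
follows = {
    "alice": {"bob", "carol"},
    "bob": {"dave"},
    "carol": {"dave", "erin"},
    "dave": set(),
    "erin": set(),
}

def recommend(user: str) -> set[str]:
    # Two-hop traversal: people followed by the people the user follows,
    # excluding existing follows and the user themselves.
    direct = follows[user]
    fof = set()
    for friend in direct:
        fof |= follows[friend]
    return fof - direct - {user}

print(recommend("alice"))  # second-degree connections alice doesn't follow yet
```

The point of a graph database is that this two-hop traversal stays cheap as the dataset grows, because relationships are stored as direct pointers rather than reconstructed through join tables.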

Posted 1 week ago

Apply

12.0 years

4 - 9 Lacs

Gurgaon

On-site

We are looking for a Principal Technical Consultant – Data Engineering & AI who can lead modern data and AI initiatives end-to-end — from enterprise data strategy to scalable AI/ML solutions and emerging Agentic AI systems. This role demands deep expertise in cloud-native data architectures, advanced machine learning, and AI solution delivery, while also staying at the frontier of technologies like LLMs, RAG pipelines, and AI agents. You’ll work with C-level clients to translate AI opportunities into engineered outcomes. Roles and Responsibilities AI Solution Architecture & Delivery: Design and implement production-grade AI/ML systems, including predictive modeling, NLP, computer vision, and time-series forecasting. Architect and operationalize end-to-end ML pipelines using MLflow, SageMaker, Vertex AI, or Azure ML — covering feature engineering, training, monitoring, and CI/CD. Deliver retrieval-augmented generation (RAG) solutions combining LLMs with structured and unstructured data for high-context enterprise use cases. Data Platform & Engineering Leadership: Build scalable data platforms with modern lakehouse patterns using: Ingestion: Kafka, Azure Event Hubs, Kinesis Storage & Processing: Delta Lake, Iceberg, Snowflake, BigQuery, Spark, dbt Workflow Orchestration: Airflow, Dagster, Prefect Infrastructure: Terraform, Kubernetes, Docker, CI/CD pipelines Implement observability and reliability features into data pipelines and ML systems. Agentic AI & Autonomous Workflows (Emerging Focus): Explore and implement LLM-powered agents using frameworks like LangChain, Semantic Kernel, AutoGen, or CrewAI. Develop prototypes of task-oriented AI agents capable of planning, tool use, and inter-agent collaboration for domains such as operations, customer service, or analytics automation. Integrate agents with enterprise tools, vector databases (e.g., Pinecone, Weaviate), and function-calling APIs to enable context-rich decision making. 
Governance, Security, and Responsible AI: Establish best practices in data governance, access controls, metadata management, and auditability. Ensure compliance with security and regulatory requirements (GDPR, HIPAA, SOC2). Champion Responsible AI principles including fairness, transparency, and safety. Consulting, Leadership & Practice Growth: Lead large, cross-functional delivery teams (10-30+ FTEs) across data, ML, and platform domains. Serve as a trusted advisor to clients' senior stakeholders (CDOs, CTOs, Heads of AI). Mentor internal teams and contribute to the development of accelerators, reusable components, and thought leadership. Key Skills 12+ years of experience across data platforms, AI/ML systems, and enterprise solutioning Cloud-native design experience on Azure, AWS, or GCP Expert in Python, SQL, Spark, ML frameworks (scikit-learn, PyTorch, TensorFlow) Deep understanding of MLOps, orchestration, and cloud AI tooling Hands-on with LLMs, vector DBs, RAG pipelines, and foundational GenAI principles Strong consulting acumen: client engagement, technical storytelling, stakeholder alignment Qualifications Master's or PhD in Computer Science, Data Science, or AI/ML Certifications: Azure AI-102, AWS ML Specialty, GCP ML Engineer, or equivalent Exposure to agentic architectures, LLM fine-tuning, or multi-agent collaboration frameworks Experience with open-source contributions, conference talks, or whitepapers in AI/Data
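The workflow-orchestration tools this posting names (Airflow, Dagster, Prefect) fundamentally compute a dependency ordering over pipeline tasks. The sketch below shows that core idea with the standard library only; the task names (ingest, transform, train, evaluate, deploy) are illustrative, not a real client pipeline.

```python
from graphlib import TopologicalSorter

# Each key depends on the tasks in its set, mirroring an orchestrator's DAG:
# ingest -> transform -> train -> evaluate -> deploy.
pipeline = {
    "transform": {"ingest"},
    "train": {"transform"},
    "evaluate": {"train"},
    "deploy": {"evaluate"},
}

# static_order() yields tasks in an execution order that respects
# every dependency, which is what a scheduler dispatches from.
order = list(TopologicalSorter(pipeline).static_order())
print(order)  # ingest first, deploy last
```

A real orchestrator adds retries, scheduling, and observability on top, but the contract is the same: declare dependencies, let the engine derive execution order.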

Posted 1 week ago

Apply

0.0 - 2.0 years

0 - 0 Lacs

Vijay Nagar, Indore, Madhya Pradesh

On-site

Hiring for AI Engineer - Python Developer: Job Description: We are seeking a talented Python Developer with hands-on experience in AI chatbot development and familiarity with Model Context Protocol (MCP) to join our AI team. You will be responsible for developing intelligent, context-aware conversational systems that integrate seamlessly with our internal knowledge base and enterprise services. The ideal candidate is technically proficient, proactive, and capable of translating complex AI interactions into scalable backend solutions. Key Responsibilities: 1. Design and develop robust AI chatbots using Python and integrate them with LLM APIs (e.g., OpenAI, Google AI, etc.). 2. Implement and manage Model Context Protocol (MCP) to optimize context injection, session management, and model-aware interactions. 3. Build and maintain secure pipelines for knowledge base access that allow the chatbot to accurately respond to internal queries. 4. Work with internal teams to define and evolve the contextual metadata strategy (roles, user state, query history, etc.). 5. Contribute to internal tooling and framework development for contextual AI applications. Required Skills & Experience: 1. 3+ years of professional Python development experience. 2. Proven track record in AI chatbot development, particularly using LLMs. 3. Understanding of Model Context Protocol (MCP) and its role in enhancing AI interaction fidelity and relevance. 4. Strong experience integrating with AI APIs (e.g., OpenAI, Azure OpenAI). 5. Familiarity with Retrieval-Augmented Generation (RAG) pipelines and vector-based search (e.g., Pinecone, Weaviate, FAISS). 6. Experience designing systems that ingest and structure unstructured knowledge (e.g., PDF, Confluence, Google Drive docs). 7. Comfortable working with RESTful APIs, event-driven architectures, and context-aware services. 8. Good understanding of data handling, privacy, and security standards related to enterprise AI use.
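As a hedged illustration of the context-injection responsibility above, here is a minimal sketch of assembling a context-aware prompt from session metadata (role, recent query history). The field names and session contents are invented for the example; this is not a Model Context Protocol implementation, only the prompt-assembly shape that such a system would feed to an LLM.

```python
def build_context_prompt(question: str, session: dict) -> str:
    # Inject role metadata and the last few queries so the model can
    # resolve ambiguous follow-ups ("update the churn figure").
    history = "\n".join(f"- {q}" for q in session.get("history", [])[-3:])
    return (
        f"User role: {session.get('role', 'unknown')}\n"
        f"Recent queries:\n{history}\n"
        f"Question: {question}"
    )

session = {"role": "analyst", "history": ["revenue by region", "Q2 churn"]}
prompt = build_context_prompt("update the churn figure", session)
print(prompt)
```

Capping the injected history (here, the last three queries) is the simple version of the context-budget management the role calls for: the prompt must stay within the model's context window no matter how long the session runs.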
Job Location: Indore Joining: Immediate Share your resume at talent@jstechalliance.com or contact: 0731-3122400 WhatsApp: 8224006397 Job Type: Full-time Pay: ₹13,378.21 - ₹58,556.85 per month Application Question(s): Immediate joiner? Have you completed your Bachelor's/Master's degree? Experience: Python: 3 years (Required) Model Context Protocol (MCP): 3 years (Required) LLM APIs: 3 years (Required) Artificial Intelligence: 2 years (Required) Location: Vijay Nagar, Indore, Madhya Pradesh (Required) Work Location: In person

Posted 1 week ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Overview The Data Lead for the AMEA (Asia, Middle East, and Africa) and India region is a strategic leadership position responsible for overseeing data management, data governance, data analytics, and data strategy initiatives across the region. Reporting to the CIO of AMEA & India, the Data Lead role will work closely with the GBUs (Global Business Units) and support functions to ensure effective and ethical use of data to drive business growth, operational efficiency, and informed decision-making. This role requires a visionary leader with deep expertise in data science, data architecture, and data governance, as well as strong leadership and communication skills. Key Responsibilities Data Strategy and Governance Develop and implement a comprehensive data strategy aligned with Group data strategy and business objectives and growth plans of AMEA & India region. Implement Group Data Policy across AMEA & India region. Establish data governance policies to ensure data quality, privacy, and security across all data assets. Collaborate with regional and global stakeholders to harmonize data standards and practices across the AMEA organization. Oversee development and maintenance of data architecture and infrastructure, ensuring scalability and robustness. Monitor regulatory compliance related to data privacy and security, and ensure adherence to relevant laws and regulations. Data Management Lead design, implementation, and management of data management systems and processes, including data warehousing, data lakes, and data integration platforms. Ensure the accurate and timely collection, storage, and retrieval of data from diverse sources across the AMEA region. Implement best practices for data lifecycle management, including data retention, archiving, and disposal. Managing the regional data team, including data analysts, data scientists, and data engineers, ensuring that they are aligned with the organization's data strategy and goals. 
Ensuring that the region's data is collected, stored, and analyzed in compliance with data privacy laws and regulations. Identifying and prioritizing data-related opportunities and risks within the region, and collaborating with other executives and business leaders to develop data-driven solutions. Fostering a data culture within the region by educating and training employees on effective data use and promoting interdepartmental collaboration. Ensure the digital & data integration of newly acquired companies and the data & digital disintegration of sold companies. Data Analytics and Insights Drive development and deployment of advanced analytics and business intelligence solutions to support data-driven decision-making. Lead a team of data scientists, analysts, and engineers to generate actionable insights from data, enabling business leaders to make informed decisions. Foster a culture of data literacy and data-driven innovation across the organization. Leadership and Collaboration Provide visionary leadership to the data team, setting clear goals, expectations, and performance metrics. Collaborate with senior executives and business leaders within the GBUs and support functions to identify data-driven opportunities and challenges. Collaborate with the entities' Data Leads to ensure consistency in data policies, standards, and procedures across the organization. Keep up to date with the latest trends and technologies in the data field, and identify opportunities to leverage emerging technologies to improve data-driven decision-making in the region. Build and maintain strong relationships with external partners, vendors, and industry experts to stay abreast of emerging trends and technologies. Qualifications Master's degree in Data Science, Computer Science, Information Technology, or a related field. At least 10 years of experience in data management, data analytics, or a related field, with a minimum of 5 years in a senior leadership role.
Proven track record of developing and executing data strategies that drive business value. In-depth knowledge of data governance, data architecture, data security, and regulatory compliance. Strong expertise in data analytics, machine learning, and AI. Excellent leadership, communication, and interpersonal skills. Ability to work effectively in a diverse and multicultural environment. Skills and Competencies Strategic Vision: Ability to develop and articulate a clear vision for data strategy and governance. Technical Expertise: Proficiency in data management technologies, analytics tools, and programming languages. Leadership: Strong leadership skills with the ability to inspire and motivate a high-performing data team. Communication: Excellent communication skills, both verbal and written, with the ability to convey complex data concepts to non-technical stakeholders. Collaboration: Proven ability to work collaboratively with cross-functional teams and external partners. Problem-Solving: Strong analytical and problem-solving skills, with a focus on data-driven solutions. Analytical skills: Ability to analyze large amounts of data to identify patterns, trends, and insights that can be used to drive business decisions. Strategic thinking: Ability to develop and implement a comprehensive data strategy that aligns with the organization's goals and objectives. Leadership skills: An effective leader who can manage and motivate a team of data professionals and collaborate effectively with other executives and business leaders. Communication skills: Ability to communicate complex data-related concepts and insights to non-technical stakeholders in a clear and concise manner. Change management skills: Ability to manage and lead organizational change related to data strategy, policies, and processes. Business acumen: Have a strong understanding of business and industry in which the organization operates, and ability to use data to drive business outcomes. 
Reports to: CIO of AMEA & India Location : Pune, India Business Unit: GBU Renewables Division: T&G AMEA - India Legal Entity: ENGIE Energy India Private Limited Professional Experience: Senior (experience >15 years) Education Level: Master's Degree

Posted 1 week ago

Apply

2.0 years

0 Lacs

Sahibzada Ajit Singh Nagar, Punjab, India

On-site

Role: AI Developer - Agentic AI Exp: 2-3 Years Work Mode: 12 pm - 10 pm, Onsite (Mohali, Punjab) Job Role & Responsibilities Design, develop, and deploy Agentic AI systems capable of autonomous task execution by integrating reasoning, memory, and tool use to enable intelligent behavior across complex, multi-step workflows. Architect intelligent agents that can dynamically interact with APIs, data sources, and third-party tools to accomplish diverse objectives with minimal human intervention. Optimize performance of agentic frameworks by enhancing model accuracy, minimizing response latency, and ensuring scalability and reliability in real-world applications. Develop reusable, testable, and production-grade code, adhering to best practices in software engineering and modern AI development workflows. Collaborate with cross-functional teams, including product managers, designers, and backend engineers, to convert business requirements into modular agent behaviors. Integrate Retrieval-Augmented Generation (RAG), advanced NLP techniques, and knowledge graph structures to improve decision-making and contextual awareness of agents. Conduct rigorous profiling, debugging, and performance testing of agent workflows to identify bottlenecks and improve runtime efficiency. Write and maintain comprehensive unit, integration, and regression tests to validate agent functionality and ensure robust system performance. Continuously enhance codebases, refactor existing modules, and adopt new design patterns to accommodate evolving agentic capabilities and improve maintainability. Implement secure, fault-tolerant, and privacy-compliant designs to ensure that deployed agentic systems meet enterprise-grade reliability and data protection standards. Qualification Required: Bachelor's degree in Computer Science or a related field. A specialization or certification in AI or ML is a plus.
Technical Expertise: 2+ years of hands-on experience in AI/ML/DL projects, with a strong emphasis on Natural Language Processing (NLP), Named Entity Recognition (NER), and Text Analytics. Proven ability to design and deploy Agentic AI systems: autonomous, goal-oriented agents that exhibit reasoning, memory retention, tool use, and execution of multi-step tasks. Practical expertise in agent architecture, task decomposition, and seamless integration with external APIs, databases, and tools to enhance agent capabilities. Skilled in agent prompting strategies, including dynamic prompt chaining and context management, to guide language models through intelligent decision-making workflows. Experience with Retrieval-Augmented Generation (RAG) pipelines and generative AI, with a strong focus on optimizing NLP models for low-latency, high-accuracy production use. Solid foundation in deep learning methods, recommendation engines, and AI applications within HR or similar domains. Exposure to Reinforcement Learning (RL) frameworks and relevant certifications or specializations in Artificial Intelligence, showcasing continuous learning and depth in the field. Minimum skills we look for: Skills & Expertise (with Agentic AI focus) Proven experience in building Agentic AI systems, including autonomous agents capable of multi-step reasoning, memory management, and tool use. Expertise in agent design patterns, task decomposition, dynamic planning, and decision-making logic using LLMs. Skilled in integrating multi-agent coordination, goal-setting, and feedback loops to create adaptive, evolving agent behavior. Strong command over prompt engineering, contextual memory structuring, and tool-calling mechanisms within LLM-powered agent workflows. Proficiency in managing agent memory (short-term, long-term, episodic) using vector databases and custom memory stores.
- Ability to build autonomous task execution pipelines with minimal human input, combining language models, APIs, and third-party tools.
- Experience with frameworks and orchestration for agent behavior tracing, logging, and failure recovery.

Tools & Technologies (Agentic AI)
- Agentic frameworks: LangChain, CrewAI, AutoGen, AutoGPT, BabyAGI, for building, managing, and orchestrating intelligent agents.
- LLM APIs: OpenAI (GPT-4/3.5), Anthropic (Claude), Cohere, Hugging Face Transformers.
- Memory & vector databases: FAISS, Weaviate, Pinecone, Chroma, for embedding-based agent memory and contextual retrieval.
- Prompt management tools: PromptLayer, LangSmith, for testing, evaluating, and refining agent prompts and traces.
- RAG & context enrichment: LangChain RAG pipelines, Haystack, Milvus.
- Autonomy infrastructure: Docker, FastAPI, Redis, Celery, for building scalable agent runtimes.
- Observability: OpenTelemetry, Langfuse (or similar), for tracing agent decisions, failures, and success metrics.
- Testing agentic behavior: integration with PyTest plus mock APIs/tools to validate autonomous decision logic and fallback strategies.
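The memory and vector databases named above (FAISS, Weaviate, Pinecone, Chroma) all implement one core operation for agent memory: store embeddings, retrieve the nearest ones. A minimal, dependency-free sketch of that idea, with hand-written toy vectors standing in for model-generated embeddings (the class and method names are illustrative, not from any of the frameworks listed):

```python
import math

class ToyAgentMemory:
    """Minimal embedding-based memory: store (vector, text) pairs and
    retrieve by cosine similarity. A real system would use a vector
    database such as FAISS or Chroma and model-generated embeddings."""

    def __init__(self):
        self.items = []  # list of (vector, text) pairs

    def add(self, vector, text):
        self.items.append((vector, text))

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    def retrieve(self, query_vector, k=1):
        """Return the k stored texts most similar to the query vector."""
        ranked = sorted(self.items,
                        key=lambda item: self._cosine(item[0], query_vector),
                        reverse=True)
        return [text for _, text in ranked[:k]]

memory = ToyAgentMemory()
memory.add([1.0, 0.0, 0.0], "user prefers JSON output")
memory.add([0.0, 1.0, 0.0], "user is in the Europe/Berlin timezone")

# A query vector close to the first entry retrieves that memory.
print(memory.retrieve([0.9, 0.1, 0.0]))  # ['user prefers JSON output']
```

In production the vectors would come from an embedding model and the linear scan would be replaced by an approximate nearest-neighbour index, but the retrieval contract is the same.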

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai

On-site

Responsible for developing, optimizing, and maintaining business intelligence and data warehouse systems; ensuring secure, efficient data storage and retrieval; enabling self-service data exploration; and supporting stakeholders with insightful reporting and analysis.
1. Support the development and maintenance of business intelligence and analytics systems that enable data-driven decision-making.
2. Implement business intelligence and analytics systems, ensuring alignment with business requirements.
3. Design and optimize data warehouse architecture to support efficient storage and retrieval of large datasets.
4. Enable self-service data exploration capabilities so users can analyze and visualize data independently.
5. Develop reporting and analysis applications to generate insights from data for business stakeholders.
6. Design and implement data models to organize and structure data for analytical purposes.
7. Implement data security and federation strategies to ensure the confidentiality and integrity of sensitive information.
8. Optimize business intelligence production processes and adopt best practices to enhance efficiency and reliability.
9. Provide training and support to users on business intelligence tools and applications.
10. Collaborate with and maintain relationships with vendors, and oversee project management activities to ensure timely and successful implementation of business intelligence solutions.

Education: Bachelor's degree or equivalent in Computer Science, MIS, Mathematics, Statistics, or a similar discipline. Master's degree or PhD preferred.
Relevant work experience in data engineering based on the following number of years:
- Standard I: Two (2) years
- Standard II: Three (3) years
- Senior I: Four (4) years
- Senior II: Five (5) years

Knowledge, Skills and Abilities
- Fluency in English
- Analytical Skills
- Accuracy & Attention to Detail
- Numerical Skills
- Planning & Organizing Skills
- Presentation Skills
- Data Modeling and Database Design
- ETL (Extract, Transform, Load) Skills
- Programming Skills

FedEx was built on a philosophy that puts people first, one we take seriously. We are an equal opportunity/affirmative action employer, and we are committed to a diverse, equitable, and inclusive workforce in which we enforce fair treatment and provide growth opportunities for everyone. All qualified applicants will receive consideration for employment regardless of age, race, color, national origin, genetics, religion, gender, marital status, pregnancy (including childbirth or a related medical condition), physical or mental disability, or any other characteristic protected by applicable laws, regulations, and ordinances.

Our Company
FedEx is one of the world's largest express transportation companies and has consistently been selected as one of the top 10 World's Most Admired Companies by "Fortune" magazine. Every day FedEx delivers for its customers with transportation and business solutions, serving more than 220 countries and territories around the globe. We can serve this global network due to our outstanding team of FedEx team members, who are tasked with making every FedEx experience outstanding.

Our Philosophy
The People-Service-Profit philosophy (P-S-P) describes the principles that govern every FedEx decision, policy, or activity. FedEx takes care of our people; they, in turn, deliver the impeccable service demanded by our customers, who reward us with the profitability necessary to secure our future.
The essential element in making the People-Service-Profit philosophy such a positive force for the company is how we close the circle: we return these profits back into the business and invest in our people. Our success in the industry is attributed to our people. Through our P-S-P philosophy, we have a work environment that encourages team members to be innovative in delivering the highest possible quality of service to our customers. We care for their well-being and value their contributions to the company.

Our Culture
Our culture is important for many reasons, and we intentionally bring it to life through our behaviors, actions, and activities in every part of the world. The FedEx culture and values have been a cornerstone of our success and growth since we began in the early 1970s. While other companies can copy our systems, infrastructure, and processes, our culture makes us unique and is often a differentiating factor as we compete and grow in today's global marketplace.

Posted 1 week ago

Apply

0 years

2 Lacs

India

On-site

Role & Responsibilities
- Ensure newly installed equipment works as per the specifications provided.
- Keep all equipment in proper, uninterrupted working order.
- Coordinate with the service provider promptly during the warranty period to ensure uninterrupted service.
- Ensure patient safety through periodic safety checks.
- Ensure the retrieval possibilities of condemned items.
- Keep knowledge of the latest technology up to date by attending seminars, conferences, etc.
- Identify critical equipment and maintain backups.
- Keep AMC, CMC, and similar contracts updated.
- Help implement the NABH standards in the department.
- Participate in the Materiovigilance Programme of India (MvPI).
- Help prepare the department's QI documentation.
- Work with medical staff, analyze their technical problems, and decide which engineered equipment or technical services might improve their situation.
- Inspect completed installations and observe operations to ensure conformance to design and equipment specifications and compliance with operational and safety standards.
- Participate in the hospital's continual quality improvement process.
- Participate in and contribute to the educational activities of the service.
- Participate in and contribute to all quality improvement activities of the service.
- Adhere to the rules and regulations of the SRH and ensure that work performed meets the established standards of the institution.
- Perform any other tasks and duties appropriate to his/her area of knowledge, skills, and experience, as required by management.
- Prepare the periodic preventive maintenance plan and implement it effectively.
- Coordinate with external service engineers when they come for equipment service (both preventive and breakdown service, i.e., AMC and CMC).
- Coordinate timely completion of breakdown work orders and submission of closure reports.
- Coordinate to reduce the TAT for breakdowns of critical equipment.
- Monitor returnable equipment effectively and ensure all items are received back.
- Install, troubleshoot, repair, and perform preventive maintenance on all medical instrumentation.
- Take responsibility for the periodic preventive maintenance and repair of all systems within the medical facilities.
- Provide on-site training to physicians, nurses, and other support staff on operation, maintenance, and limited troubleshooting.
- Deliver timely, effective repairs and adequate operator training to ensure optimal system performance, resulting in internal customer satisfaction.

Preferred candidate profile
- Bachelor's degree in Biomedical Engineering (BE/B.Tech) and work experience in a multispecialty hospital.
- Minimum 2+ years of experience in biomedical engineering.
- Ability to maintain OT and ICU equipment.
- Accurate documentation skills.
- Good communication and liaison skills.

Job Type: Full-time
Pay: From ₹18,000.00 per month
Benefits: Health insurance, Provident Fund
Schedule: Day shift
Work Location: In person
Expected Start Date: 12/07/2025

Posted 1 week ago

Apply

2.0 - 3.0 years

0 Lacs

Chennai

On-site

Job Information
Date Opened: 06/30/2025
Job Type: Full time
Industry: Software Product
City: Chennai
State/Province: Tamil Nadu
Country: India
Zip/Postal Code: 600017

Job Description
Pando (www.pando.ai) is pioneering the future of autonomous logistics with innovative AI capabilities. Trusted by Fortune 500 enterprises, with customers across North America, Europe, and Asia Pacific, we are leading the global disruption of supply chain software with our AI-powered, no-code, unified platform empowering the Autonomous Supply Chain®. We have been recognized by Gartner for our transportation management capabilities, by the World Economic Forum (WEF) as a Technology Pioneer, by G2 as a Market Leader in Freight Management, and named one of the fastest-growing technology companies by Deloitte.

Why Pando?
We are one of the fastest-growing companies reimagining supply chain and logistics for manufacturers and retailers scaling up globally. We are a growing team, unrelenting and enthusiastic about building great products. We have folks who are pragmatic, imaginative, or a quirky combination of both. We yearn for purpose in our work and support each other to grow.

Role
We are looking for a capable Python developer with 2-3 years of experience to optimize our web-based application's performance. You will collaborate with our application developers, design back-end components, and integrate data storage and third-party service providers with the web application and related solutions.

Responsibilities
- Develop, test, and maintain efficient, reusable, and reliable Python code using the Django framework.
- Design and implement data models and database schemas using SQL for storage and retrieval of application data.
- Collaborate with front-end developers to integrate user-facing elements with server-side logic.
- Work closely with data engineers and analysts to understand requirements and implement data processing pipelines using Pandas and PySpark.
- Optimize application performance, scalability, and reliability through code refactoring, database optimization, and system architecture improvements.
- Debug and resolve technical issues reported in production environments.
- Stay current with emerging technologies and industry best practices to continuously improve software development processes and methodologies.

Requirements
- 2 to 3 years of experience.
- Exceptional analytical and problem-solving aptitude.
- Interpersonal, communication, and collaboration skills.
- Bachelor's degree in computer science, information science, or a similar field.
- Hands-on experience in Python (Django, SQL, Pandas, PySpark).
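One responsibility above, designing schemas in SQL for storage and retrieval of application data, can be sketched with Python's built-in sqlite3 module. The table and sample rows below are invented for illustration and are not from any actual production schema:

```python
import sqlite3

# In-memory database standing in for the application's real datastore.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE shipments (
        id INTEGER PRIMARY KEY,
        status TEXT NOT NULL,
        weight_kg REAL NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO shipments (status, weight_kg) VALUES (?, ?)",
    [("delivered", 12.5), ("in_transit", 3.25), ("delivered", 7.75)],
)

# Retrieval: aggregate total weight per shipment status.
rows = conn.execute(
    "SELECT status, SUM(weight_kg) FROM shipments"
    " GROUP BY status ORDER BY status"
).fetchall()
print(rows)  # [('delivered', 20.25), ('in_transit', 3.25)]
```

In the role itself the same pattern would run against the production database via Django's ORM or raw SQL, with Pandas or PySpark taking over for larger analytical pipelines.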

Posted 1 week ago

Apply

2.0 - 5.0 years

1 - 3 Lacs

Chennai

On-site

JOB DESCRIPTION
Role: Associate - Administration
Experience: 2 to 5 Years
Job Location: Chennai

About OJ Commerce:
OJ Commerce (OJC), a rapidly expanding and profitable online retailer, is headquartered in Florida, USA, with a fully functional office in Chennai, India. We deliver exceptional value to our customers by harnessing cutting-edge technology, fostering innovation, and establishing strategic brand partnerships to enable a seamless, enjoyable shopping experience featuring high-quality products at unbeatable prices. Our advanced, data-driven system streamlines operations with minimal human intervention. Our extensive product portfolio encompasses over a million SKUs and more than 2,500 brands across eight primary categories. With a robust presence on major platforms such as Amazon, Walmart, Wayfair, Home Depot, and eBay, we directly serve consumers in the United States. As we continue to forge new partner relationships, our flagship website, www.ojcommerce.com, has rapidly emerged as a top-performing e-commerce channel, catering to millions of customers annually.

Job Summary
We're looking for a reliable and proactive Office Administrator to keep our workplace running smoothly. You'll be the go-to person for all things office-related, from managing vendors and supplies to coordinating housekeeping and supporting basic HR and accounts tasks. If you enjoy keeping things organised and making sure everything's in place, this role is for you.

Responsibilities:
- Oversee day-to-day office operations to ensure everything runs smoothly and efficiently.
- Coordinate with building management and promptly resolve any maintenance issues.
- Supervise housekeeping staff, maintaining a clean, organized, and guest-ready office environment at all times.
- Schedule deep cleaning on alternate Saturdays and ensure the housekeeping team is well trained through the vendor.
- Monitor office supplies and restock proactively to avoid shortages.
- Maintain accurate and accessible records, both physical and digital, for easy retrieval when needed.
- Manage relationships with vendors for maintenance, IT, security, and other office services.
- Source and negotiate with cost-effective vendors that meet our quality and budget standards.

Skills:
- Bachelor's degree (B.Com, BBA, BA preferred).
- 2 to 5 years of experience in office administration or a similar role.
- Comfortable communicating in English and Tamil.
- Organised, detail-oriented, and able to juggle multiple things at once.
- Hands-on with MS Office tools (Word, Excel, Outlook); knowledge of Office 365 is a bonus.
- A discreet and trustworthy professional who can handle sensitive information with care.
- Basic understanding of HR and admin processes.
- Experience in Indian corporates or mid-sized firms.
- Familiarity with statutory compliance (PF, ESIC, TDS documentation, etc.).

What we offer
- Competitive salary
- Medical benefits/accident cover
- Flexi office working hours
- Fast-paced startup environment

Posted 1 week ago

Apply

5.0 years

3 - 8 Lacs

Noida

On-site

Job Information
Department Name: RULES
Date Opened: 04/10/2025
Department: IP Services
Job Type: Permanent
Industry: Intellectual property
City: Noida
State/Province: Uttar Pradesh
Country: India
Zip/Postal Code: 201301

About Us
MaxVal started as an IP services company in 2004, with a keen focus on efficiency, cost-effectiveness, and continuous improvement through metrics-based processes. Our focus on these core values led to the tech-enablement of our offerings even before this buzzword became an industry standard. Over the years, MaxVal developed many internal applications to increase our quality, efficiency, and customer satisfaction. As these systems grew and became more sophisticated, we productized them and offered them to our clients. Today, MaxVal serves over 600 clients across the full IP life cycle with the industry's leading products and services. Our 725-plus employees represent the most IP- and tech-savvy individuals in the industry. At MaxVal, we do the right things and innovate ceaselessly as a winning team to achieve customer success and employee success.

Job Description
Job Summary
The Project Lead will be responsible for managing work allocation, tracking project progress, conducting quality audits, and ensuring that the team meets deadlines. This role also involves handling ad-hoc projects and auditing standard documentation to maintain consistency and quality in all aspects of IP rule development.

Key Responsibilities
Quality Audits and Compliance: Conduct regular quality audits of IP rule documentation and related processes to ensure adherence to established standards and best practices.
Ad-Hoc Project Management: Lead and manage ad-hoc projects related to enhancement of Symphony features. Coordinate resources and timelines for ad-hoc projects, ensuring they are completed successfully and within specified deadlines.
Progress Tracking: Maintain documentation and reports covering the daily progress of ongoing projects and rule development initiatives, ensuring that milestones and deadlines are met. Track the status of each project, providing regular updates to management and stakeholders regarding timelines and deliverables.
Prosecution, Annuity and Renewal Fee Management: Oversee the timely updating of the system with relevant fee information and regularly validate fee data to ensure accuracy.
Agent Management: Act as a liaison to obtain prosecution, annuity, and renewal fees, along with contractual agreements where necessary; regularly monitor agent performance with the internal teams.
MaxForms Management: Manage the MaxForms tool, which enables users to easily fill, validate, and generate official PTO forms in an automated manner.
Documentation and Reporting: Audit and review standard documentation related to IP rule development, ensuring that all materials are up to date, accurate, and aligned with current practices. Review rule sets for accuracy, consistency, and compliance, identifying areas for improvement and recommending corrective actions. Ensure that the team follows quality control procedures to maintain the highest standards in rule creation, testing, and deployment. Ensure that all documentation is properly organized and stored for easy retrieval by relevant teams and stakeholders.
JIRA Management: Manage the intake of all tickets from the queue and assign them to the appropriate team member based on the nature of the request.

Preferred Qualifications
- 5+ years of experience in IP, project management, or a related role, with at least 2 years in a leadership position.
- Strong knowledge of intellectual property laws and regulations.
- Experience in managing cross-functional teams and complex projects with multiple stakeholders.
- Familiarity with IP management databases, software tools, and rule-based systems.
- Excellent organizational and time-management skills, with the ability to manage multiple priorities simultaneously.
- Strong attention to detail, with a focus on maintaining high standards for quality and accuracy.
- Exceptional communication and interpersonal skills, with the ability to work effectively with diverse teams and stakeholders.
- Proficiency in Microsoft Office Suite and project management tools (e.g., Jira).

Desirable Skills:
- Bachelor's degree in law, business administration, or a related field preferred; a focus on IP is highly desirable.
- Experience with Agile project management methodologies.
- Knowledge of IP rule engines and IP management systems.
- Experience with process improvement methodologies, such as Six Sigma or Lean.

Posted 1 week ago

Apply

1.0 - 3.0 years

2 - 3 Lacs

India

On-site

Objective: Ensure robust financial operations, full compliance, and process excellence to support scalable growth and impact at our social-impact martech startup.

Company Overview
The House of DoBe is your new purpose engine. We're building a community that fuels pro-sociability: your yin yang with a big bang, meaning, small deeds that matter to you, shared causes you have always held close to your head and heart, and the quiet reengineering of our cognitively overloaded culture with prosocial motivation, ability, and skill. Do a little. To do a lot. If you are a real doer who still believes in the simple human values of K.A.R.M.A.® (Kindness, Altruism, Righteousness, Mindfulness, and Authenticity), join us to aggregate, re-engineer, and incentivize the human pursuit of pro-sociability for a purpose economy. We are solving for the lost or otherwise ignored 21st-century skill of civic empathy in a time of fast technology. We are powered by Impresario Global (I.M), a social impact martech startup in the business of cause amplification.

Key Responsibilities:
Financial Operations & Oversight
- Maintain and monitor day-to-day accounting operations (bookkeeping, reconciliation, ledgers).
- Handle account reconciliations, vendor payments, and documentation.
- Support end-to-end P&L tracking, including budget-vs-actual reporting, variance analysis, and financial forecasting.
- Maintain physical records in a systematic, well-organized manner to facilitate easy retrieval and compliance.

Statutory & Regulatory Compliance
- Ensure timely and accurate filings under the Companies Act with the MCA (Ministry of Corporate Affairs).
- Oversee statutory and internal audits, and coordinate with auditors and consultants.
- Ensure full compliance with Income Tax, TDS, and GST regulations, including monthly filings and annual returns.
- Support annual reporting obligations and the Secretarial Audit, maintaining registers and minutes as required under the Companies Act.
Legal & Shareholding Matters
- Maintain up-to-date documentation and compliance around Shareholder Agreements (SHA) and board resolutions.
- Liaise with legal and compliance advisors to ensure alignment on corporate governance protocols.
- Maintain cap tables and investor documents, and support fundraising compliance as needed.
- Legal drafting: board resolutions, letters, and communication with government authorities.

Business Process Excellence
- Design and implement SOPs for recurring financial and compliance tasks.
- Identify bottlenecks in finance and compliance workflows and initiate process improvements.
- Prepare monthly dashboards and MIS reports for leadership with financial and compliance KPIs.

Indicative KPIs (KPI: Metric & Target)
- Financial Close Cycle Time: ≤ 5 business days after month-end close
- Reconciliation Accuracy: ≥ 99% of accounts reconciled with zero discrepancies
- Regulatory Filing Punctuality: 100% on-time filings for MCA, GST, Income Tax, and TDS
- SOP Adoption Rate: ≥ 90% of routine tasks covered by documented SOPs
- MIS Report Delivery: 100% of monthly dashboards submitted by the 5th of each month

Qualifications & Skills
Must-Haves:
- Professional certification: CA or CS.
- 1-3 years of experience.
- Working knowledge of Tally, Zoho Books, QuickBooks, or other ERP systems is a plus.
- Proficient in handling GST, IT returns, MCA filings, and banking documentation.
- Sound understanding of the Companies Act, Income Tax Act, GST Act, and compliance frameworks.
- Strong analytical, organizational, and communication skills.
- Integrity, attention to detail, and a proactive problem-solving attitude.

Culture Fit (Value: Observable Behaviours)
- Integrity: upholds the highest ethical standards; accurate and honest.
- Process-Oriented: values structure and documentation; follows workflows rigorously.
- Ownership: takes initiative to identify gaps and drive solutions.
- Collaborative: works seamlessly with finance, legal, and operations teams.
- Growth Mindset: welcomes feedback; continuously refines skills and processes.

Location: Onsite in Lucknow
Work Timings: 9:30 AM to 6 PM, Monday to Friday, from office
Reporting: Chief of Staff, Founders' Office
Job Type: Full-time
Pay: ₹20,000.00 - ₹25,000.00 per month
Schedule: Day shift, Monday to Friday
Work Location: In person

Posted 1 week ago

Apply

1.0 years

1 - 2 Lacs

India

On-site

- Provide application forms to applicants and maintain an up-to-date applications database to ensure proper filing and retrieval of CVs.
- Coordinate recruitment process activities, such as lining up suitable candidates, calling them for interviews, scheduling interviews, and checking original documents.
- Maintain a personnel file for every employee, in both soft and hard copy, and update it as and when required.
- Carry out joining formalities, such as issuing the appointment letter, uniform, and ID card, and orienting new joinees to the hospital.
- Update the time machine with fingerprints and other details of new joinees.
- Coordinate interdepartmentally to conduct HR induction trainings.
- Conduct monthly induction training for new joinees.
- Update the induction presentation from time to time as required.
- Prepare offer letters, appointment letters, increment letters, experience letters, salary certificates, appreciation letters, circulars, memos, notices, etc., as instructed by the Head - HR/CEO.
- Prepare full and final settlements and acceptance of resignation for hospital employees leaving the organization.
- Ensure exit interviews of departing employees are conducted and maintain records of the same.

Job Type: Full-time
Pay: ₹15,000.00 - ₹20,000.00 per month
Benefits: Provident Fund
Schedule: Day shift
Supplemental Pay: Yearly bonus
Ability to commute/relocate: Shahibaug, Ahmedabad, Gujarat: Reliably commute or planning to relocate before starting work (Required)
Education: Bachelor's (Preferred)
Experience: HR: 1 year (Preferred); total work: 2 years (Preferred)
Language: English (Preferred)
Work Location: In person

Posted 1 week ago

Apply

1.0 years

1 - 2 Lacs

India

On-site

- Provide application forms to applicants and maintain an up-to-date applications database to ensure proper filing and retrieval of CVs.
- Carry out joining formalities, such as issuing the appointment letter, uniform, and ID card, and orienting new joinees to the hospital.
- Maintain a personnel file for every employee, in both soft and hard copy, and update it as and when required.
- Update the time machine with fingerprints and other details of new joinees.
- Coordinate interdepartmentally to conduct HR induction trainings.
- Assist in preparing documents and letters related to HR.
- Collect shift schedules from departments and update them in the time office machines and software.
- Ensure exit interviews of departing employees are conducted and maintain records of the same.

Job Type: Full-time
Pay: ₹10,000.00 - ₹20,000.00 per month
Experience: total work: 1 year (Preferred)
Work Location: In person

Posted 1 week ago

Apply

0 years

4 - 7 Lacs

Ahmedabad

On-site

The following will be the core job responsibilities of the position holder:
1. Responsible for operation, cleaning, and primary maintenance of the negative isolator, vibro sifter, bin blenders, and all other GEA granulation line equipment.
2. Responsible for recording activity in logbooks and the Batch Manufacturing Record, and for completing documentation as required in the manufacturing of product and the cleaning of facilities and equipment, following cGMPs and good documentation practices.
3. Perform all in-process checks and monitoring of all intermediate processes in granulation.
4. Select recipes and set process parameters in the blender HMI, and ensure their correctness before blender revolution.
5. Responsible for issuance, utilization, cleaning, and retrieval of sieves, and for proper handling of machine change parts and their inventory.
6. Responsible for setup, changeover, and operation of various manufacturing equipment, not limited to the granulation department.
7. Report and/or escalate any conditions or problems that may affect the quality or integrity of product to the supervisor and HOD Production.
8. Maintain a neat, clean, and safe working environment at all times, and notify the supervisor immediately of any safety concerns, accidents, or injuries observed.
9. Ensure all in-process checks and monitoring of all intermediate processes; check set process parameters in PLC/SCADA as per the BMR before machine run in the granulation area.
10. Comply with current Good Manufacturing Practices in the hormone facility, and follow GDP with data-integrity compliance.
11. Complete training and training records within the stipulated time.
12. Responsible for preparation and use of disinfectant and cleaning agent solutions as per defined procedures.
13. Adhere to all company policies, procedures, SOPs, and safety regulations.

Posted 1 week ago

Apply

0 years

6 Lacs

India

On-site

Job Title: Executive Assistant
Location: Kolkata (local candidates only)
Terms: Full Time
Required positions: 1

Roles & Responsibilities:
- Administrative Support: Provide administrative support to executives, including managing schedules, organizing meetings, and handling correspondence. Assist in the coordination and management of special projects.
- Calendar Management: Manage and coordinate the executive's calendar, schedule appointments, and arrange meetings, ensuring that the executive is aware of their daily agenda. Prepare meeting agendas, materials, and presentations. Attend meetings, take minutes, and follow up on action items.
- Communication: Act as a liaison between the executive and other staff members, clients, and external stakeholders. Draft emails, memos, reports, and other documents on behalf of the executive.
- Information Management: Organize and maintain files, records, and documents. Retrieve information as needed and ensure that sensitive information is handled confidentially.
- Professionalism: Demonstrate a high level of professionalism and discretion. Executive assistants often have access to sensitive information and must maintain confidentiality.
- Relationship Building: Build and maintain positive relationships with colleagues, clients, and other stakeholders. Act as a representative of the executive and the organization.
- Documentation and Confidentiality: Maintain accurate records and documentation. Create organized filing systems for easy retrieval of information. Uphold a high level of confidentiality and handle sensitive information with discretion.
- Professional Development & Problem-Solving: Participate in relevant training and development opportunities. Stay informed about industry trends and best practices. Proactively identify and resolve issues; anticipate needs and provide solutions before problems arise.
- Travel Management: Coordinate travel arrangements efficiently.
Ensure all travel logistics are well planned and executed.
- Feedback and Relationship Building: Seek feedback from the executive for continuous improvement. Build positive relationships with colleagues and external contacts.

Qualifications:
- Bachelor's degree in business administration or a related field preferred.
- Strong written and verbal communication skills.
- Fluent in English, Hindi, and Bengali.
- Proficient in Word, Excel, PowerPoint, Outlook, etc.

Job Type: Full-time
Pay: Up to ₹50,000.00 per month
Schedule: Day shift
Work Location: In person

Posted 1 week ago


3.0 years

0 Lacs

India

On-site

Hiring for AI Engineer - Python Developer

Job Description:
We are seeking a talented Python Developer with hands-on experience in AI chatbot development and familiarity with the Model Context Protocol (MCP) to join our AI team. You will be responsible for developing intelligent, context-aware conversational systems that integrate seamlessly with our internal knowledge base and enterprise services. The ideal candidate is technically proficient, proactive, and capable of translating complex AI interactions into scalable backend solutions.

Key Responsibilities:
1. Design and develop robust AI chatbots using Python and integrate them with LLM APIs (e.g., OpenAI, Google AI).
2. Implement and manage the Model Context Protocol (MCP) to optimize context injection, session management, and model-aware interactions.
3. Build and maintain secure pipelines for knowledge base access that allow the chatbot to accurately respond to internal queries.
4. Work with internal teams to define and evolve the contextual metadata strategy (roles, user state, query history, etc.).
5. Contribute to internal tooling and framework development for contextual AI applications.

Required Skills & Experience:
1. 3+ years of professional Python development experience.
2. Proven track record in AI chatbot development, particularly using LLMs.
3. Understanding of the Model Context Protocol (MCP) and its role in enhancing AI interaction fidelity and relevance.
4. Strong experience integrating with AI APIs (e.g., OpenAI, Azure OpenAI).
5. Familiarity with Retrieval-Augmented Generation (RAG) pipelines and vector-based search (e.g., Pinecone, Weaviate, FAISS).
6. Experience designing systems that ingest and structure unstructured knowledge (e.g., PDF, Confluence, Google Drive docs).
7. Comfortable working with RESTful APIs, event-driven architectures, and context-aware services.
8. Good understanding of data handling, privacy, and security standards related to enterprise AI use.
Job Location: Indore
Joining: Immediate
Share your resume at talent@jstechalliance.com, or contact 0731-3122400 (WhatsApp: 8224006397).
Job Type: Full-time
Application Questions:
- Are you an immediate joiner?
- Have you completed your Bachelor's/Master's degree?
Experience:
- Python: 3 years (required)
- Model Context Protocol (MCP): 3 years (required)
- LLM APIs: 3 years (required)
- Artificial Intelligence: 2 years (required)
Location: Vijay Nagar, Indore, Madhya Pradesh (required)
Work Location: In person
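The skills list above mentions RAG pipelines and vector-based search. As a minimal sketch of the retrieval step only — using a toy hashed bag-of-words embedding in place of a real embedding model, and a brute-force cosine scan in place of FAISS, Pinecone, or Weaviate (all data and function names here are illustrative):

```python
import hashlib
import math

# Toy "embedding": a hashed bag-of-words vector. A real pipeline would call
# an embedding model (e.g. via an LLM provider's API) and index the vectors
# in FAISS/Pinecone/Weaviate instead of scanning a Python list.
def embed(text: str, dim: int = 256) -> list[float]:
    vec = [0.0] * dim
    for token in text.lower().split():
        bucket = int(hashlib.md5(token.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already L2-normalised, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Inject the top-k retrieved documents as context ahead of the question.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

kb = [
    "Leave policy: employees accrue 18 days of paid leave per year.",
    "VPN setup: install the client and sign in with your SSO account.",
    "Expense reports are filed through the finance portal each month.",
]
print(build_prompt("How many paid leave days do I get?", kb))
```

The resulting prompt would then be sent to the LLM API; production systems replace each toy component (embedding, index, prompt template) independently, which is what makes the RAG pipeline modular.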

Posted 1 week ago


0.0 - 2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Data Scientist
Job Type: Full-time
Location: Bengaluru
Notice Period: 15 days or immediate joiner
Experience: 0-2 years

Job Summary:
We seek a highly skilled Data Scientist (LLM) to join our AI and Machine Learning team. The ideal candidate will have a strong foundation in Machine Learning (ML), Deep Learning (DL), and Large Language Models (LLMs), along with hands-on experience in building and deploying conversational AI/chatbots. The role requires expertise in LLM agent development frameworks such as LangChain, LlamaIndex, AutoGen, and LangGraph. You will work closely with cross-functional teams to drive the development and enhancement of AI-powered applications.

Key Responsibilities:
- Develop, fine-tune, and deploy Large Language Models (LLMs) for various applications, including chatbots, virtual assistants, and enterprise AI solutions.
- Build and optimize conversational AI solutions, with at least 1 year of experience in chatbot development.
- Implement and experiment with LLM agent development frameworks such as LangChain, LlamaIndex, AutoGen, and LangGraph.
- Design and develop ML/DL-based models to enhance natural language understanding capabilities.
- Work on retrieval-augmented generation (RAG) and vector databases (e.g., FAISS, Pinecone, Weaviate, ChromaDB) to enhance LLM-based applications.
- Optimize and fine-tune transformer-based models such as GPT, LLaMA, Falcon, Mistral, Claude, etc., for domain-specific tasks.
- Develop and implement prompt engineering techniques and fine-tuning strategies to improve LLM performance.
- Work on AI agents, multi-agent systems, and tool-use optimization for real-world business applications.
- Develop APIs and pipelines to integrate LLMs into enterprise applications.
- Research and stay up to date with the latest advancements in LLM architectures, frameworks, and AI trends.

Required Skills & Qualifications:
- 0-2 years of experience in Machine Learning (ML), Deep Learning (DL), and NLP-based model development.
- Hands-on experience in developing and deploying conversational AI/chatbots is a plus.
- Strong proficiency in Python and experience with ML/DL frameworks such as TensorFlow, PyTorch, and Hugging Face Transformers.
- Experience with LLM agent development frameworks like LangChain, LlamaIndex, AutoGen, and LangGraph.
- Knowledge of vector databases (e.g., FAISS, Pinecone, Weaviate, ChromaDB) and embedding models.
- Understanding of prompt engineering and fine-tuning of LLMs.
- Familiarity with cloud services (AWS, GCP, Azure) for deploying LLMs at scale.
- Experience working with APIs, Docker, and FastAPI for model deployment.
- Strong analytical and problem-solving skills.
- Ability to work independently and collaboratively in a fast-paced environment.

Good to Have:
- Experience with multi-modal AI models (text-to-image, text-to-video, speech synthesis, etc.).
- Knowledge of knowledge graphs and symbolic AI.
- Understanding of MLOps and LLMOps for deploying scalable AI solutions.
- Experience in automated evaluation of LLMs and bias mitigation techniques.
- Research experience or published work in LLMs, NLP, or Generative AI is a plus.
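The responsibilities above mention AI agents and tool use. Frameworks such as LangChain, AutoGen, and LangGraph provide production versions of the underlying dispatch loop: the model emits a structured tool call, the agent executes the named tool with the model-chosen arguments, and the result flows back. A stripped-down sketch with a mocked model (all names here — `mock_llm`, `TOOLS`, `run_agent` — are illustrative, not any framework's API):

```python
import json

# Registry of callable tools the "model" may invoke.
TOOLS = {
    "add": lambda a, b: a + b,
    "word_count": lambda text: len(text.split()),
}

def mock_llm(question: str) -> str:
    # Stand-in for an LLM deciding which tool to call; a real agent would
    # send the question plus tool schemas to an LLM API here.
    if "sum" in question:
        return json.dumps({"tool": "add", "args": {"a": 2, "b": 3}})
    return json.dumps({"tool": "word_count", "args": {"text": question}})

def run_agent(question: str) -> dict:
    call = json.loads(mock_llm(question))         # parse the model's tool call
    result = TOOLS[call["tool"]](**call["args"])  # dispatch with chosen args
    return {"tool": call["tool"], "result": result}

print(run_agent("What is the sum of 2 and 3?"))  # {'tool': 'add', 'result': 5}
```

Multi-agent systems iterate this loop, feeding each tool result back to the model until it produces a final answer instead of another tool call.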

Posted 1 week ago
