5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
As a GenAI Developer at Vipracube Tech Solutions, you will be responsible for developing and optimizing AI models, implementing AI algorithms, collaborating with cross-functional teams, researching emerging AI technologies, and deploying AI solutions. This full-time role requires 5 to 6 years of experience and is based in Pune, with the flexibility of some work from home.

Key Responsibilities:
- Fine-tune large language models tailored to marketing and operational use cases.
- Build Generative AI solutions using platforms such as OpenAI (GPT, DALL-E, Whisper) and Agentic AI platforms such as LangGraph and AWS Bedrock.
- Build robust pipelines using Python, NumPy, and Pandas, and apply traditional ML techniques.
- Handle CI/CD and MLOps, and use AWS Cloud Services.
- Collaborate using tools like Cursor, and communicate effectively with stakeholders and clients.

To excel in this role, you should have 5+ years of relevant AI/ML development experience, a strong portfolio of AI projects in marketing or operations domains, and a proven ability to work independently and meet deadlines. Join our dynamic team and contribute to creating smart, efficient, and future-ready digital products for businesses and startups.
Posted 1 day ago
5.0 - 8.0 years
14 - 22 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
Hiring for Top IT Company - Designation: Python Developer | Skills: Python + PySpark | Location: Bangalore/Mumbai | Exp: 5-8 yrs | Best CTC. Contact: 9783460933, 9549198246, 9982845569, 7665831761, 6377522517, 7240017049. Team Converse
Posted 3 days ago
7.0 - 12.0 years
20 - 30 Lacs
Kolkata, Hyderabad, Bengaluru
Work from Office
Responsibilities:
- Design, develop, and deploy scalable AI/ML solutions using AWS services such as Amazon Bedrock, SageMaker, Amazon Q, Amazon Lex, Amazon Connect, and Lambda.
- Implement and optimize large language model (LLM) applications using Amazon Bedrock, including prompt engineering, fine-tuning, and orchestration for specific business use cases.
- Build and maintain end-to-end machine learning pipelines using SageMaker for model training, tuning, deployment, and monitoring.
- Integrate conversational AI and virtual assistants using Amazon Lex and Amazon Connect, with seamless user experiences and real-time inference.
- Leverage AWS Lambda for event-driven execution of model inference, data preprocessing, and microservices.
- Design and maintain scalable and secure data pipelines and AI workflows, ensuring efficient data flow to and from Redshift and other AWS data stores.
- Implement data ingestion, transformation, and model inference for structured and unstructured data using Python and AWS SDKs.
- Collaborate with data engineers and scientists to support development and deployment of ML models on AWS.
- Monitor AI/ML applications in production, ensuring optimal performance, low latency, and cost efficiency across all AI/ML services.
- Ensure implementation of AWS security best practices, including IAM policies, data encryption, and compliance with industry standards.
- Drive the integration of Amazon Q for enterprise AI-based assistance and automation across internal processes and systems.
- Participate in architecture reviews and recommend best-fit AWS AI/ML services for evolving business needs.
- Stay up to date with the latest advancements in AWS AI services, LLMs, and industry trends to inform technology strategy and innovation.
- Prepare documentation for ML pipelines, model performance reports, and system architecture.

Qualifications we seek in you:

Minimum Qualifications:
- Proven hands-on experience with Amazon Bedrock, SageMaker, Lex, Connect, Lambda, and Redshift.
- Strong knowledge and application experience with Large Language Models (LLMs) and prompt engineering techniques.
- Experience building production-grade AI applications using AWS AI or other generative AI services.
- Solid programming experience in Python for ML development, data processing, and automation.
- Proficiency in designing and deploying conversational AI/chatbot solutions using Lex and Connect.
- Experience with Redshift for data warehousing and analytics integration with ML solutions.
- Good understanding of AWS architecture, scalability, availability, and security best practices.
- Familiarity with AWS development, deployment, and monitoring tools (CloudWatch, CodePipeline, etc.).
- Strong understanding of MLOps practices, including model versioning, CI/CD pipelines, and model monitoring.
- Strong communication and interpersonal skills to collaborate with cross-functional teams and stakeholders.
- Ability to troubleshoot performance bottlenecks and optimize cloud resources for cost-effectiveness.

Preferred Qualifications:
- AWS Certification in Machine Learning, Solutions Architect, or AI Services.
- Experience with other AI tools (e.g., Anthropic Claude, OpenAI APIs, or Hugging Face).
- Knowledge of streaming architectures and services like Kafka or Kinesis.
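Much of the Bedrock LLM work described here comes down to constructing a well-formed model request. As a minimal sketch, assuming boto3 is configured and the account has Bedrock model access (the model ID and prompt below are illustrative, not prescribed by this role):

```python
import json

# Illustrative model ID; actual availability depends on the AWS account and region.
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

def build_claude_body(prompt: str, max_tokens: int = 512, temperature: float = 0.2) -> str:
    """Build the JSON body for Bedrock InvokeModel using the Anthropic Messages format."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "temperature": temperature,
        "messages": [{"role": "user", "content": [{"type": "text", "text": prompt}]}],
    })

# The actual call requires AWS credentials and Bedrock model access:
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = client.invoke_model(modelId=MODEL_ID, body=build_claude_body("Summarize this ticket: ..."))
# print(json.loads(response["body"].read())["content"][0]["text"])
```

Keeping the request body in a pure function like this makes the prompt-engineering layer unit-testable without touching AWS.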
Posted 5 days ago
2.0 - 4.0 years
18 - 30 Lacs
Bengaluru
Work from Office
Roles and Responsibilities:
- Design, develop, and deploy data science models using Python, AWS, and LLM (Large Language Model) technologies.
- Collaborate with cross-functional teams to identify business problems and design solutions that leverage GenAI capabilities.
- Develop scalable data pipelines using the Bedrock framework to integrate various data sources into SageMaker model development.
- Conduct exploratory data analysis, feature engineering, and model evaluation to ensure high accuracy and reliability of results.
- Provide technical guidance on best practices for implementing machine learning algorithms in production environments.
- Analyze large, complex healthcare datasets, including electronic health records (EHR) and claims data.
- Develop statistical models for patient risk stratification, treatment optimization, population health management, and revenue cycle optimization.
- Build models for clinical decision support, patient outcome prediction, and care quality improvement.
- Create and maintain automated data pipelines for real-time analytics and reporting.
- Work with healthcare data standards (HL7 FHIR, ICD-10, CPT, SNOMED CT) and ensure regulatory compliance.
- Develop and deploy models in cloud environments while creating visualizations for stakeholders.
- Present findings and recommendations to cross-functional teams, including clinicians, product managers, and executives.
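Patient risk stratification of the kind described often reduces to scoring features extracted from EHR records and bucketing patients by score. A stdlib-only toy sketch, where the features, weights, and cut-offs are all invented for illustration (a production model would be fitted to data, not hand-weighted):

```python
# Toy risk stratification: invented features, weights, and cut-offs, for illustration only.
WEIGHTS = {"age_over_65": 2.0, "chronic_conditions": 1.5, "recent_admissions": 2.5}

def risk_score(patient: dict) -> float:
    """Weighted sum of binary/count features from an EHR-derived record."""
    return sum(WEIGHTS[k] * patient.get(k, 0) for k in WEIGHTS)

def stratify(patient: dict) -> str:
    """Bucket a patient into low/medium/high risk by score."""
    s = risk_score(patient)
    if s >= 5.0:
        return "high"
    if s >= 2.0:
        return "medium"
    return "low"

patients = [
    {"age_over_65": 1, "chronic_conditions": 2, "recent_admissions": 1},  # score 7.5
    {"age_over_65": 0, "chronic_conditions": 1, "recent_admissions": 0},  # score 1.5
]
tiers = [stratify(p) for p in patients]  # ["high", "low"]
```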
Posted 1 week ago
8.0 - 13.0 years
32 - 45 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Job Title: Data Architect
Location: Bangalore, Hyderabad, Chennai, Pune, Gurgaon - hybrid - 2/3 days WFO
Experience: 8+ years

Position Overview: We are seeking a highly skilled and strategic Data Architect to design, build, and maintain the organization's data architecture. The ideal candidate will be responsible for aligning data solutions with business needs, ensuring data integrity, and enabling scalable and efficient data flows across the enterprise. This role requires deep expertise in data modeling, data integration, cloud data platforms, and governance practices.

Key Responsibilities:
- Architectural Design: Define and implement enterprise data architecture strategies, including data warehousing, data lakes, and real-time data systems.
- Data Modeling: Develop and maintain logical, physical, and conceptual data models to support analytics, reporting, and operational systems.
- Platform Management: Select and oversee implementation of cloud and on-premises data platforms (e.g., Snowflake, Redshift, BigQuery, Azure Synapse, Databricks).
- Integration & ETL: Design robust ETL/ELT pipelines and data integration frameworks using tools such as Apache Airflow, Informatica, dbt, or native cloud services.
- Data Governance: Collaborate with stakeholders to implement data quality, data lineage, metadata management, and security best practices.
- Collaboration: Work closely with data engineers, analysts, software developers, and business teams to ensure seamless and secure data access.
- Performance Optimization: Tune databases, queries, and storage strategies for performance, scalability, and cost-efficiency.
- Documentation: Maintain comprehensive documentation for data structures, standards, and architectural decisions.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- 5+ years of experience in data architecture, data engineering, or database development.
- Strong expertise in data modeling, relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB, Cassandra).
- Experience with modern data platforms and cloud ecosystems (AWS, Azure, or GCP).
- Hands-on experience with data warehousing solutions and tools (e.g., Snowflake, Redshift, BigQuery).
- Proficiency in SQL and data scripting languages (e.g., Python, Scala).
- Familiarity with data privacy regulations (e.g., GDPR, HIPAA) and security standards.

Tech Stack:
- AWS Cloud: S3, EC2, EMR, Lambda, IAM; Snowflake DB
- Databricks: Spark/PySpark, Python
- Good knowledge of Bedrock and Mistral AI
- RAG & NLP: LangChain and LangRAG
- LLMs: Anthropic Claude, Mistral, LLaMA, etc.
Posted 1 week ago
3.0 - 5.0 years
20 - 35 Lacs
Pune
Hybrid
Role Overview: Monitor, evaluate, and optimize AI/LLM workflows in production environments. Ensure reliable, efficient, and high-quality AI system performance by building out an LLM Ops platform that is self-serve for the engineering and data science departments.

Key Responsibilities:
- Collaborate with data scientists and software engineers to integrate an LLM Ops platform (Opik by CometML) for existing AI workflows.
- Identify valuable performance metrics (accuracy, quality, etc.) for AI workflows, and create ongoing sampling-evaluation processes using the LLM Ops platform that alert when metrics drop below thresholds.
- Collaborate across teams to create datasets and benchmarks for new AI workflows.
- Run experiments on datasets and optimize performance via model changes and prompt adjustments.
- Debug and troubleshoot AI workflow issues.
- Optimize inference costs and latency while maintaining accuracy and quality.
- Develop automations for LLM Ops platform integration to empower data scientists and software engineers to self-serve integration with the AI workflows they build.

Requirements:
- Strong Python programming skills.
- Experience with generative AI models and tools (OpenAI, Anthropic, Bedrock, etc.).
- Knowledge of fundamental statistical concepts and tools in data science, such as heuristic and non-heuristic measurements in NLP (BLEU, WER, sentiment analysis, LLM-as-judge, etc.), standard deviation, and sampling rate, plus a high-level understanding of how modern AI models work (knowledge cutoffs, context windows, temperature, etc.).
- Familiarity with AWS.
- Understanding of prompt engineering concepts.
- People skills: you will be expected to frequently collaborate with other teams to help perfect their AI workflows.

Experience Level: 3-5 years of experience in LLM/AI Ops, MLOps, Data Science, or MLE.
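The sampling-evaluation-with-alerting idea in this role can be sketched in plain Python. The metric, sample rate, and threshold below are illustrative assumptions; a real setup would score samples with an NLP metric or LLM-as-judge and log results to an LLM Ops platform rather than return a dict:

```python
import random

def sample_and_evaluate(records, score_fn, sample_rate=0.1, threshold=0.8, seed=42):
    """Score a random sample of workflow outputs and flag when the mean drops below a threshold."""
    rng = random.Random(seed)  # seeded for reproducible sampling
    sample = [r for r in records if rng.random() < sample_rate] or records[:1]
    mean_score = sum(score_fn(r) for r in sample) / len(sample)
    return {"n_sampled": len(sample), "mean_score": mean_score, "alert": mean_score < threshold}

def keyword_coverage(record):
    """Toy heuristic metric: fraction of expected keywords present in the output."""
    found = sum(1 for kw in record["expected_keywords"] if kw in record["output"])
    return found / len(record["expected_keywords"])

records = [
    {"output": "refund issued to customer", "expected_keywords": ["refund", "customer"]},
    {"output": "escalated to billing", "expected_keywords": ["billing", "escalated"]},
]
result = sample_and_evaluate(records, keyword_coverage, sample_rate=1.0)
# result["alert"] is False here: both toy outputs score 1.0, above the 0.8 threshold
```

Swapping `keyword_coverage` for BLEU, WER, or an LLM-as-judge call changes the metric without touching the sampling/alerting loop.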
Posted 1 week ago
8.0 - 13.0 years
32 - 45 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Job Title: Data Scientist Architect
Location: Pan India - hybrid
Experience: 8+ years

Position Overview: We are seeking a Data Scientist Architect to lead and drive data science and architecture initiatives within Brillio. The ideal candidate will have a deep understanding of data science, data engineering, and architecture, and be highly proficient in implementing cutting-edge solutions using tools like Databricks, AWS, and Bedrock/Mistral. The role requires an individual with extensive experience in designing, building, and deploying large-scale data systems and machine learning models, along with the ability to lead and mentor cross-functional teams. As a Data Scientist Architect, you will have the opportunity to innovate and make a lasting impact across our diverse client base, providing them with tailored solutions that drive their data strategy forward.

Key Responsibilities:
- Lead Data Architecture & Science Initiatives: Design and implement advanced data architectures and solutions to support complex data science and machine learning workflows. Build and deploy scalable, production-grade data pipelines and models leveraging cloud platforms like AWS and tools like Databricks. Architect solutions involving large-scale data ingestion, transformation, and storage, focusing on performance, scalability, and reliability.
- Platform Development & Integration: Implement and manage cloud-based infrastructure for data engineering, analytics, and machine learning on platforms like AWS, leveraging services like S3, Lambda, EC2, etc. Work with Bedrock/Mistral to deploy and manage machine learning models at scale, ensuring continuous optimization and improvement.

Skills and Qualifications:
- Experience: 8+ years of experience in Data Science and Data Architecture, with a focus on large-scale data systems and cloud platforms. Proven track record of leading data science architecture projects from inception to deployment.
- Technical Skills: Proficiency in Databricks, AWS (S3, EC2, Lambda, Redshift, SageMaker, etc.), and Bedrock/Mistral.
Posted 3 weeks ago
6.0 - 9.0 years
14 - 22 Lacs
Pune, Chennai
Work from Office
Hiring for Top IT Company - Designation: Python Developer | Skills: AWS SDK + AI services integration | Location: Pune/Chennai | Exp: 6-8 yrs | Best CTC. Contact: Surbhi: 9887580624, Anchal: 9772061749, Gitika: 8696868124, Shivani: 7375861257. Team Converse
Posted 3 weeks ago
8.0 - 13.0 years
20 - 25 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Job Description:

Agentic AI Solution Designer: A strong candidate who is well versed in AWS Bedrock agentic AI solutions, while also able to support other open-source agentic frameworks and hyperscaler-based solution design. This person must be vocal, with good communication skills, as they will work in close proximity with the client delivery team.

AI Consultant: A hands-on expert who can take the design and convert it to a prototype in a 2-3 week time frame so that it can be demoed to the client.

Requirements:
- AWS Bedrock Competence: Demonstrated proficiency in utilizing and integrating with Amazon Bedrock, including a strong understanding of its foundation models and capabilities.
- GenAI Agents: Deep understanding of the concepts, architectures, and practical implementation of Generative AI agents, including experience with relevant frameworks and methodologies.
- Python Language: Excellent programming skills in Python, with experience in developing and deploying applications.

Responsibilities:
- Solution Design Leadership: Lead the technical design and architecture of innovative GenAI Agents app solutions tailored to meet specific customer requirements and business objectives.
- Customer Engagement: Act as the primary technical point of contact during the pre-sales process, effectively communicating the value proposition and technical capabilities of our GenAI Agents offerings to both technical and non-technical audiences.
- Requirements Gathering: Deeply understand customer business challenges and translate them into clear, concise technical requirements and solution specifications.
- Proposal Development: Create compelling technical proposals, presentations, and demonstrations that showcase our GenAI Agents app solutions and their potential impact.
- Technical Consultation & Collaboration: Work closely with, and provide expert technical guidance and support to, the sales team, service delivery units, and customers throughout the pre-sales cycle.
- Solution Presentation: Confidently present proposed solutions, address technical questions, and articulate the technical advantages of our offerings.
- Hands-on Development: Actively participate in the design, development, and prototyping of new features, functionalities, and improvements for our GenAI Agents platform.
- Proof of Concept (POC) Development: Build and demonstrate functional POCs to validate technical feasibility and explore new application areas for GenAI Agents.
- Code Contribution: Write efficient Python code for R&D projects.
Posted 1 month ago
5.0 - 8.0 years
0 - 2 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
We are seeking an AI/ML engineer or GenAI developer who will leverage Amazon Bedrock to build and optimize intelligent systems for automated email organization, with a strong emphasis on prompt engineering, collaboration, testing, and adherence to best practices.

Responsibilities:
- Develops and fine-tunes Bedrock-based GenAI models for email categorization.
- Implements prompt engineering techniques for accurate email processing.
- Collaborates with data engineers to optimize email categorization workflows.
- Conducts testing to refine prompt structures and improve model responses.
- Ensures compliance with AI governance and security best practices.
- Works with business users to align AI-generated responses with organizational needs.
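For an email-categorization system like this, the prompt-engineering layer is often a pair of pure functions: one that assembles a constrained classification prompt, and one that defensively parses the model's reply. A minimal sketch, where the category taxonomy and output contract are illustrative assumptions (the resulting prompt string would be sent to a Bedrock model):

```python
CATEGORIES = ["billing", "support", "sales", "spam", "other"]  # illustrative taxonomy

def build_categorization_prompt(subject: str, body: str, categories=CATEGORIES) -> str:
    """Build a constrained classification prompt so the model must answer with one known label."""
    return (
        "You are an email-routing assistant. Classify the email into exactly one category.\n"
        f"Allowed categories: {', '.join(categories)}.\n"
        "Reply with the category name only, in lowercase.\n\n"
        f"Subject: {subject}\n"
        f"Body: {body}"
    )

def parse_category(model_reply: str, categories=CATEGORIES) -> str:
    """Defensively map the model's free-text reply back onto the allowed label set."""
    reply = model_reply.strip().lower()
    return reply if reply in categories else "other"
```

Constraining the output to a fixed label set and normalizing the reply keeps downstream routing logic robust even when the model adds whitespace or casing quirks.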
Posted 1 month ago
6.0 - 8.0 years
15 - 18 Lacs
Hyderabad
Work from Office
Job Title: Amazon Connect Development Expert
Location: Hyderabad (On-site/Hybrid as applicable)
Notice: Immediate to 15 days

Job Summary: We are looking for an experienced Amazon Connect Development Expert to design and implement cloud-native contact center solutions using AWS services. This role demands strong expertise in Amazon Connect, Lex, Lambda, and generative AI services like Amazon Q or Bedrock. If you're passionate about building intelligent, scalable, and secure contact center solutions, this is the perfect opportunity to leverage your skills.

Key Responsibilities:
- Architect, develop, and deploy cloud-based contact center solutions using Amazon Connect.
- Integrate Amazon Lex to create intelligent IVR and conversational flows.
- Build and manage backend logic using AWS Lambda (Node.js, Python, etc.) for real-time processing.
- Enhance customer interactions by integrating Amazon Q / Bedrock for generative AI capabilities.
- Collaborate with cross-functional teams to gather requirements and deliver end-to-end solutions.
- Ensure scalability, security, and availability using AWS best practices.
- Monitor, troubleshoot, and optimize solutions with Amazon CloudWatch and other tools.

Mandatory Skills:
- Amazon Connect: Deep expertise in contact flow design, routing profiles, and integrations.
- Amazon Lex: Hands-on experience building voice and chat bots.
- AWS Lambda: Proficiency in writing serverless functions (Node.js, Python).
- Amazon Q / Bedrock: Experience in integrating generative AI into customer support workflows.
- Strong programming skills in Node.js or Python.

Good to Have Skills:
- Amazon DynamoDB: NoSQL data modeling and querying.
- Amazon S3: Data storage and management for call recordings/logs.
- Amazon CloudWatch: Monitoring, logging, and alert setup.
- AWS Secrets Manager: Secure access management.
- Amazon Kinesis: Real-time streaming and analytics.
- Amazon EventBridge: Event-driven architecture and integrations.
- Amazon API Gateway: API development and security.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- Minimum of 3 years hands-on experience with Amazon Connect and related AWS services.
- AWS Certifications (e.g., AWS Certified Developer, Solutions Architect) are a plus.

Preferred Traits:
- Strong analytical and problem-solving abilities.
- Excellent communication and documentation skills.
- Comfortable working in Agile/Scrum environments.
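The Lambda-behind-Lex pattern this role describes usually means the handler receives a Lex V2 event and returns a Lex V2 response object. A minimal fulfillment sketch in Python, with the intent name and message text as illustrative assumptions:

```python
def lambda_handler(event, context):
    """Minimal Lex V2 fulfillment handler: close the dialog with a plain-text confirmation."""
    intent = event["sessionState"]["intent"]
    intent["state"] = "Fulfilled"  # mark the intent as completed
    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},  # end the conversation turn
            "intent": intent,
        },
        "messages": [
            {"contentType": "PlainText",
             "content": f"Done! I handled your {intent['name']} request."}
        ],
    }

# Abridged, illustrative shape of the event Lex V2 sends to the Lambda:
sample_event = {"sessionState": {"intent": {"name": "CheckBalance", "slots": {}}}}
response = lambda_handler(sample_event, None)
```

Real handlers would branch on `intent["name"]`, read slot values, and call backend services before closing the dialog.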
Posted 1 month ago