3.0 - 7.0 years
0 Lacs
Haryana
On-site
About StatusNeo: At StatusNeo, we are committed to redefining the way businesses operate. As a leader in digital transformation, we leverage cutting-edge technologies and innovative strategies to empower organizations around the globe. Our partnerships with industry giants and our commitment to continuous learning and improvement provide an unparalleled platform for professional growth. Embrace a career at StatusNeo, where we value diversity and inclusivity and foster a hybrid work culture.

Role: Data Engineer
Location: Gurugram

Key experience:
- 3+ years of experience with AWS services including SQS, S3, Step Functions, EFS, Lambda, and OpenSearch.
- Strong experience in API integrations, including work with large-scale API endpoints.
- Proficiency in PySpark for data processing and parallelism in large-scale ingestion pipelines.
- Experience with AWS OpenSearch APIs for managing search indices.
- Terraform expertise for automating and managing cloud infrastructure.
- Hands-on experience with AWS SageMaker, including working with machine learning models and endpoints.
- Strong understanding of data flow architectures, document stores, and journal-based systems.
- Experience in parallelizing data processing workflows to meet strict performance and SLA requirements.
- Familiarity with AWS tools like CloudWatch for monitoring pipeline performance.

Additional Preferred Qualifications:
- Strong problem-solving and debugging skills in distributed systems.
- Prior experience in optimizing ingestion pipelines with a focus on cost-efficiency and scalability.
- Solid understanding of distributed data processing and workflow orchestration in AWS environments.

Soft Skills:
- Strong communication and collaboration skills, with the ability to work effectively with cross-functional teams.
- Ability to work in a fast-paced environment and deliver high-quality results under tight deadlines.
- Analytical mindset, with a focus on performance optimization and continuous improvement.
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
Haryana
On-site
You have a minimum of 3 years of experience working with various AWS services such as SQS, S3, Step Functions, EFS, Lambda, and OpenSearch. Your role will involve handling API integrations, particularly with large-scale endpoints. Proficiency in PySpark is required for data processing and parallelism within large-scale ingestion pipelines. Additionally, you should be familiar with AWS OpenSearch APIs for managing search indices.

Your responsibilities will include utilizing Terraform expertise to automate and manage cloud infrastructure. Hands-on experience with AWS SageMaker is necessary, including working with machine learning models and endpoints. A strong understanding of data flow architectures, document stores, and journal-based systems is expected from you. Experience in parallelizing data processing workflows to meet performance and SLA requirements is essential. Familiarity with AWS tools like CloudWatch for monitoring pipelines is preferred.

You should also possess strong problem-solving and debugging skills within distributed systems. Prior experience in optimizing ingestion pipelines for cost-efficiency and scalability is an advantage, along with a solid understanding of distributed data processing and workflow orchestration in AWS environments.

In terms of soft skills, effective communication and collaboration skills are necessary for seamless teamwork across different functions. The ability to thrive in a fast-paced environment and deliver high-quality results within tight deadlines is crucial. An analytical mindset focused on performance optimization and continuous improvement will be beneficial in this role.
Posted 3 weeks ago
10.0 - 20.0 years
35 - 40 Lacs
Mumbai
Work from Office
Job Title: Data Science Expert (Mentor & Trainer)
Location: Onsite, Mumbai, India
Employment Type: Full-Time

About the Role: We are seeking an experienced and highly skilled Data Science Expert to join our growing team at our Mumbai office. This is a full-time, onsite role focused not only on solving complex data problems but also on mentoring and training junior Data Science Engineers. The ideal candidate will bring deep technical expertise in data science and machine learning, along with a passion for teaching and developing talent.

Key Responsibilities:
- Lead the development of end-to-end data science solutions using advanced ML, NLP, and Computer Vision techniques.
- Train, mentor, and support junior data science engineers in coding, model development, and best practices.
- Architect and implement AI-driven solutions such as chatbots, OCR systems, and facial recognition applications.
- Translate complex business problems into actionable data science projects and deliver measurable results.
- Design and lead internal workshops, code reviews, and learning sessions to upskill the team.
- Collaborate with engineering and product teams to deploy models and insights into production environments.
- Stay abreast of the latest AI/ML trends and integrate cutting-edge techniques into projects where applicable.

Desired Skills & Qualifications:
Experience: 6+ years in Data Science/Machine Learning, with at least 1-2 years of team mentoring or leadership experience.
Education: Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field.

Technical Expertise Required:
- Strong proficiency in Python and SQL; R is a plus.
- Solid hands-on experience with Deep Learning and Neural Networks, particularly in Natural Language Processing (NLP), Generative AI, and Computer Vision.
- Familiarity with frameworks and libraries such as TensorFlow, Keras, PyTorch, OpenCV, spaCy, NLTK, BERT, ELMo, etc.
- Experience developing chatbot, OCR, and face recognition systems is preferred.
- Hands-on knowledge of cloud platforms (AWS, Azure, or Google Cloud Platform).
- Experience applying statistical and data mining techniques such as GLM, regression, clustering, random forests, boosting, decision trees, etc.
- Strong understanding of model validation, performance tuning, and deployment strategies.

Soft Skills:
- Excellent communication and presentation skills, especially in explaining complex models to non-technical audiences.
- Demonstrated ability to mentor, train, and lead junior team members effectively.
- Strong analytical and problem-solving mindset with a detail-oriented approach.

What We Offer:
- Competitive salary and benefits
- A collaborative and intellectually stimulating environment
- Career growth and leadership development opportunities within a fast-paced team
Posted 3 weeks ago
2.0 - 7.0 years
15 - 20 Lacs
Hyderabad
Work from Office
Job Area: Engineering Group, Engineering Group > Software Engineering

General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces.

Minimum Qualifications:
- Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 2+ years of Software Engineering or related work experience. OR
- Master's degree in Engineering, Information Systems, Computer Science, or related field and 1+ year of Software Engineering or related work experience. OR
- PhD in Engineering, Information Systems, Computer Science, or related field.
- 2+ years of academic or work experience with a programming language such as C, C++, Java, Python, etc.

Preferred Qualifications:
- 3+ years of experience as a Data Engineer or in a similar role
- Experience with data modeling, data warehousing, and building ETL pipelines
- Solid working experience with Python, AWS analytical technologies, and related resources (Glue, Athena, QuickSight, SageMaker, etc.)
- Experience with Big Data tools, platforms, and architecture, with solid working experience with SQL
- Experience working in a very large data warehousing environment and with distributed systems
- Solid understanding of various data exchange formats and their complexities
- Industry experience in software development, data engineering, business intelligence, data science, or a related field, with a track record of manipulating, processing, and extracting value from large datasets
- Strong data visualization skills
- Basic understanding of Machine Learning; prior experience in ML Engineering is a plus
- Ability to manage on-premises data and make it interoperate with AWS-based pipelines
- Ability to interface with Wireless Systems/SW engineers and understand the Wireless ML domain; prior experience in the Wireless (5G) domain is a plus

Education: Bachelor's degree in computer science, engineering, mathematics, or a related technical discipline. Preferred: Master's in CS/ECE with a Data Science/ML specialization.

Minimum Qualifications:
- Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 3+ years of Software Engineering or related work experience. OR
- Master's degree in Engineering, Information Systems, Computer Science, or related field. OR
- PhD in Engineering, Information Systems, Computer Science, or related field.
- 3+ years of experience with a programming language such as C, C++, Java, Python, etc.

Develops, creates, and modifies general computer applications software or specialized utility programs. Analyzes user needs and develops software solutions. Designs software or customizes software for client use with the aim of optimizing operational efficiency. May analyze and design databases within an application area, working individually or coordinating database development as part of a team. Modifies existing software to correct errors, allow it to adapt to new hardware, or to improve its performance. Analyzes user needs and software requirements to determine feasibility of design within time and cost constraints.
Confers with systems analysts, engineers, programmers, and others to design systems and to obtain information on project limitations and capabilities, performance requirements, and interfaces. Stores, retrieves, and manipulates data for analysis of system capabilities and requirements. Designs, develops, and modifies software systems, using scientific analysis and mathematical models to predict and measure outcomes and consequences of design.

Principal Duties and Responsibilities:
- Completes assigned coding tasks to specifications on time without significant errors or bugs.
- Adapts to changes and setbacks in order to manage pressure and meet deadlines.
- Collaborates with others inside the project team to accomplish project objectives.
- Communicates with the project lead to provide status and information about impending obstacles.
- Quickly resolves complex software issues and bugs.
- Gathers, integrates, and interprets information specific to a module or sub-block of code from a variety of sources in order to troubleshoot issues and find solutions.
- Seeks others' opinions and shares own opinions with others about ways in which a problem can be addressed differently.
- Participates in technical conversations with tech leads/managers.
- Anticipates and communicates issues with the project team to maintain open communication.
- Makes decisions based on incomplete or changing specifications and obtains adequate resources needed to complete assigned tasks.
- Prioritizes project deadlines and deliverables with minimal supervision.
- Resolves straightforward technical issues and escalates more complex technical issues to an appropriate party (e.g., project lead, colleagues).
- Writes readable code for large features or significant bug fixes to support collaboration with other engineers.
- Determines which work tasks are most important for self and junior engineers, stays focused, and deals with setbacks in a timely manner.
- Unit tests own code to verify the stability and functionality of a feature.
Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries.)

Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.

To all Staffing and Recruiting Agencies: Please do not forward resumes to our jobs alias, Qualcomm employees, or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications.

If you would like more information about this role, please contact Qualcomm Careers.
Posted 4 weeks ago
14.0 - 20.0 years
14 - 22 Lacs
Bengaluru, Karnataka, India
On-site
Job description

Join the AI team at Salesforce and make a real impact with your software designs and code! This position requires technical skills, outstanding analytical and influencing skills, and extraordinary business insight. It is a multi-functional role that requires building alignment and communication with several engineering organisations. We work in a highly collaborative environment, and you will partner with a highly cross-functional team comprised of Data Scientists, Software Engineers, Machine Learning Engineers, UX experts, and product managers to build upon Agentforce, our innovative new AI framework. We value execution, clear communication, feedback, and making learning fun.

Your impact - You will:
- Architect, design, implement, test, and deliver highly scalable AI solutions: agents, AI copilots/assistants, chatbots, AI planners, and RAG solutions.
- Be accountable for defining and driving software architecture and enterprise capabilities (scalability, fault tolerance, extensibility, maintainability, etc.)
- Independently design sophisticated software systems for high-end solutions, while working in a consultative fashion with other senior engineers and architects in AI Cloud and across the company
- Determine overall architectural principles, frameworks, and standards to craft vision and roadmaps
- Analyze and provide feedback on product strategy and technical feasibility
- Drive long-term design strategies that span multiple sophisticated projects; deliver technical reports and performance presentations to customers and at industry events
- Actively communicate with, encourage, and motivate all levels of staff.
- Be a domain expert for multiple products, while writing code and working closely with other developers, PM, and UX to ensure features are delivered to meet business and quality requirements
- Troubleshoot complex production issues and work with support and customers as needed

Required Skills:
- 14+ years of experience in building highly scalable Software-as-a-Service applications/platforms
- Experience building technical architectures that address complex performance issues
- Thrive in dynamic environments, working on cutting-edge projects that often come with ambiguity; an innovation/startup mindset and the ability to adapt
- Deep knowledge of object-oriented programming and experience with at least one object-oriented programming language, preferably Java
- Proven ability to mentor team members to support their understanding and growth of software engineering architecture concepts and aid in their technical development
- High proficiency in at least one high-level programming language and web framework (NodeJS, Express, Hapi, etc.)
- Proven understanding of web technologies, such as JavaScript, CSS, HTML5, XML, JSON, and/or Ajax
- Data model design, database technologies (RDBMS & NoSQL), and languages such as SQL and PL/SQL
- Experience delivering or partnering with teams that ship AI products at high scale.
- Experience in automated testing, including unit and functional testing using Java, JUnit, JSUnit, and Selenium
- Demonstrated ability to drive long-term design strategies that span multiple complex projects
- Experience delivering technical reports and presentations to customers and at industry events
- Demonstrated track record of cultivating strong working relationships and driving collaboration across multiple technical and business teams to resolve critical issues
- Experience with the full software lifecycle in highly agile and ambiguous environments
- Excellent interpersonal and communication skills.

Preferred Skills:
- Solid experience in API development, API lifecycle management, and/or client SDK development
- Experience with machine learning or cloud technology platforms such as AWS SageMaker, Terraform, Spinnaker, EKS, and GKE
- Experience with AI/ML and data science, including predictive and generative AI
- Experience with data engineering, data pipelines, or distributed systems
- Experience with continuous integration (CI), continuous deployment (CD), and service ownership
- Familiarity with Salesforce APIs and technologies
- Ability to support/resolve production customer escalations with excellent debugging and problem-solving skills

BENEFITS & PERKS
- Comprehensive benefits package including well-being reimbursement, generous parental leave, adoption assistance, fertility benefits, and more!
- World-class enablement and on-demand training with Trailhead.com
- Exposure to executive thought leaders and regular 1:1 coaching with leadership
- Volunteer opportunities and participation in our 1:1:1 model for giving back to the community
Posted 4 weeks ago
5.0 - 12.0 years
0 Lacs
Hyderabad, Telangana
On-site
Tezo is a new generation Digital & AI solutions provider, renowned for delivering exceptional outcomes to customers through cutting-edge analytics, data proficiency, technology, and digital excellence. We are currently looking for a highly experienced and dynamic Practice Head in Data Science & AI to spearhead our data practice in Hyderabad. This role is tailored for a technology leader well-versed in Data Science, Artificial Intelligence (AI), and Machine Learning (ML), with a track record of building and expanding data practices. The ideal candidate should possess strong business acumen, experience in solution selling, and pre-sales expertise.

As the Practice Head in Data Science & AI at Tezo, you will have the following key responsibilities:

Leadership & Strategy:
- Create and execute the strategic vision for the Data Science and AI practice.
- Form and lead a high-performing team comprising data scientists, ML engineers, and AI experts.
- Collaborate with various teams to embed data-driven solutions within broader business strategies.

Technical Expertise:
- Oversee the design and implementation of advanced AI/ML solutions across diverse domains.
- Stay updated on industry trends, emerging technologies, and best practices in AI, ML, and data science.
- Offer technical guidance and hands-on support for key initiatives.

Practice Development:
- Establish frameworks, methodologies, and best practices to enhance the scalability of the data science practice.
- Define and deploy reusable components, accelerators, and IPs to streamline solution delivery.

Client Engagement & Pre-Sales:
- Assist in business development by collaborating with sales teams to identify opportunities, craft proposals, and deliver presentations.
- Engage in solution selling by tailoring AI/ML-based solutions to meet client requirements.
- Cultivate strong client relationships and serve as a trusted advisor on their data journey.
Required Skills & Experience:
- Minimum of 12 years of experience, with at least 5 years leading data science/AI teams.
- Demonstrated expertise in establishing or leading data science or AI practices.
- Strong technical background in AI, ML, NLP, predictive analytics, and data engineering.
- Proficiency with tools and platforms such as Python, R, TensorFlow, PyTorch, Azure ML, and AWS SageMaker, among others.
- Sound understanding of data strategy, governance, and architecture.
- Proven track record in solutioning and pre-sales engagements.
- Excellent communication, leadership, and stakeholder management skills.
Posted 1 month ago
7.0 - 10.0 years
30 - 40 Lacs
Hyderabad
Work from Office
Role & responsibilities

Job Description: We are seeking a highly skilled Lead Data Scientist with experience in pricing optimization, pricing elasticity, and AWS SageMaker. The ideal candidate will have a strong foundation in Statistics and Machine Learning, with a particular focus on Bayesian modelling. As part of our Data Science team, you will work closely with clients to develop advanced pricing strategies using state-of-the-art tools and techniques, including AWS SageMaker, to optimize business outcomes.

Key Responsibilities:
- Lead and contribute to the development of pricing optimization models, leveraging statistical and machine learning techniques to inform strategic decisions.
- Analyze pricing elasticity to predict consumer response to changes in price, helping clients maximize revenue and market share.
- Implement and deploy machine learning models using AWS SageMaker for scalable and efficient performance in a cloud environment.
- Utilize Bayesian modelling to support decision-making processes, providing insights into uncertainty and model predictions.
- Collaborate with cross-functional teams to integrate data-driven insights into business processes.
- Communicate complex results and findings in a clear and concise manner to both technical and non-technical stakeholders.
- Continuously explore and experiment with new modelling approaches and tools to improve the accuracy and efficiency of pricing solutions.

Qualifications:
- Experience: 7+ years
- Bachelor's or Master's degree in Data Science, Statistics, Mathematics, Economics, or a related field. Advanced degrees preferred.
- 7+ years of hands-on experience in data science, with a focus on pricing optimization and elasticity modelling.
- Expertise in Bayesian modelling and machine learning techniques.
- Proven experience working with AWS SageMaker for model development, deployment, and monitoring.
- Familiarity with the AWS Certified Data Analytics - Specialty certification is a plus.
- Strong programming skills in Python (preferred) or R.
- Experience with cloud platforms (AWS preferred), including SageMaker.
- Proficiency in statistical analysis tools and libraries (e.g., NumPy, Pandas, PyMC3, or similar).
- Excellent problem-solving and analytical thinking skills.
- Ability to work in a fast-paced environment and manage multiple projects.
- Strong communication skills with the ability to explain complex concepts to non-technical audiences.

Preferred Qualifications:
- Experience with A/B testing, econometrics, or other statistical experimentation methods.
- Familiarity with other cloud computing platforms (e.g., Azure, GCP).
- Experience working in cross-functional teams and client-facing roles.

Additional Information:
- Opportunity to work with cutting-edge technology in a dynamic environment.
- Exposure to a diverse range of industries and projects.
- Collaborative and inclusive work culture with opportunities for growth and professional development.
Posted 1 month ago
7.0 - 12.0 years
20 - 27 Lacs
Hyderabad
Hybrid
- 7+ years of professional backend web development experience with Python.
- Experience with AI and RAG.
- Experience with DevOps & IaC tools such as Terraform, Jenkins, etc.
- Experience with MLOps platforms such as AWS SageMaker, Kubeflow, or MLflow.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
The role of Data Scientist - Clinical Data Extraction & AI Integration in our healthcare technology team requires an experienced individual with 3-6 years of experience. As a Data Scientist in this role, you will be primarily focused on medical document processing and data extraction systems. You will have the opportunity to work with advanced AI technologies to create solutions that enhance the extraction of crucial information from clinical documents, thereby improving healthcare data workflows and patient care outcomes.

Your key responsibilities will include designing and implementing statistical models for medical data quality assessment, and developing predictive algorithms for encounter classification and validation. You will also be responsible for building machine learning pipelines for document pattern recognition, creating data-driven insights from clinical document structures, and implementing feature engineering for medical terminology extraction.

Furthermore, you will apply natural language processing (NLP) techniques to clinical text, develop statistical validation frameworks for extracted medical data, and build anomaly detection systems for medical document processing. Additionally, you will create predictive models for discharge date estimation and encounter duration, and implement clustering algorithms for provider and encounter classification.

In terms of AI & LLM integration, you will be expected to integrate and optimize Large Language Models via AWS Bedrock and API services, design and refine AI prompts for clinical content extraction with high accuracy, and implement fallback logic and error handling for AI-powered extraction systems. You will also develop pattern matching algorithms for medical terminology and create validation layers for AI-extracted medical information. Having expertise in the healthcare domain is crucial for this role.
You will work closely with medical document structures, implement healthcare-specific validation rules, handle medical terminology extraction, and conduct clinical context analysis. Ensuring HIPAA compliance and adhering to data security best practices will also be part of your responsibilities.

Proficiency in programming languages such as Python 3.8+, R, SQL, and JSON, along with familiarity with data science tools like pandas, numpy, scipy, scikit-learn, spaCy, and NLTK, is required. Experience with ML frameworks including TensorFlow, PyTorch, transformers, and Hugging Face, and with visualization tools like matplotlib, seaborn, plotly, Tableau, and Power BI, is desirable. Knowledge of AI platforms such as AWS Bedrock, Anthropic Claude, and OpenAI APIs, and experience with cloud services like AWS (SageMaker, S3, Lambda, Bedrock), will be advantageous. Familiarity with research tools like Jupyter notebooks, Git, Docker, and MLflow is also beneficial for this role.
Posted 1 month ago
8.0 - 12.0 years
0 Lacs
Pune, Maharashtra
On-site
You will be working with HCL Software, a product development division of HCL Tech, to deliver software solutions that cater to the transformative needs of clients worldwide. The software developed by HCL Software spans the AI, Automation, Data & Analytics, Security, and Cloud domains, receiving accolades for its innovation and quality.

Your primary focus will be on the Unica+ Marketing Platform, a product that empowers clients to execute precise and high-performance marketing campaigns across various channels such as social media, AdTech platforms, mobile applications, and websites. This platform, driven by data and AI, enables clients to create hyper-personalized offers and messages for customer acquisition, product awareness, and retention.

As a Senior & Lead Python Developer specializing in Data Science and AI/ML, you are expected to leverage your 8+ years of experience in the field to deliver AI-driven marketing campaigns effectively. Your responsibilities will include Python programming, statistical analysis and modeling, data cleaning and preprocessing, SQL and database management, exploratory data analysis, machine learning algorithms, deep learning frameworks, model evaluation and optimization, and deployment of machine learning models.

To excel in this role, you must possess a minimum of 8-12 years of Python development experience, with at least 4 years dedicated to data science and machine learning. Additionally, familiarity with Customer Data Platforms (CDP) like Treasure Data, Epsilon, Tealium, Adobe, Salesforce, and AWS SageMaker will be advantageous. Proficiency in integration tools and frameworks such as Postman, Swagger, and API Gateways is desired, along with expertise in REST, JSON, XML, and SOAP. A degree in Computer Science or IT is a prerequisite for this position. Excellent communication and interpersonal skills are essential, as you will be collaborating within an agile team environment.
Your ability to work effectively with others and apply agile methodologies will be crucial for success. The role may require approximately 30% travel, and the preferred location is Pune, India.

If you meet the qualifications and possess the necessary skills, we invite you to consider joining our dynamic team at HCL Software to contribute to cutting-edge software solutions and drive innovation in the field of data science and AI/ML.
Posted 1 month ago
18.0 - 22.0 years
0 Lacs
Haryana
On-site
At EY, you'll have the opportunity to shape a career that aligns with your unique strengths and aspirations, supported by a global network, inclusive environment, and cutting-edge technology. Your voice and perspective are valued as we strive to enhance EY's capabilities and create a more inclusive working world.

Join the EY Parthenon team as the Artificial Intelligence (AI) and Generative AI (GenAI) Leader. This dynamic team focuses on delivering innovative client solutions across various industries, leveraging digital and AI technologies to drive transformation and growth.

As the Executive Director of AI & GenAI at EYP, your role involves spearheading the integration of advanced AI solutions to address complex client challenges. Your responsibilities include collaborating with regional teams to identify AI opportunities, designing tailored proposals, and leading client workshops to develop AI strategies aligned with business outcomes. Key responsibilities also include architecting end-to-end AI solutions, driving cross-sector innovation, ensuring ethical AI practices, and contributing to AI trends and thought leadership initiatives.

To excel in this role, you should possess technical expertise across the AI/GenAI lifecycle, proficiency in Python and AI frameworks, consulting acumen, and strong leadership skills. Qualifications for this position include significant experience in AI/data science projects, familiarity with the Azure Cloud Framework, and expertise in statistical techniques and machine learning algorithms. Preferred qualifications include a PhD/MS/MTech/BTech in Computer Science or a related field, research experience in AI applications, and strategic thinking abilities.

Join us to lead AI innovation for Fortune 500 clients, collaborate with multidisciplinary experts, and accelerate your career in a culture of entrepreneurship and continuous learning.
EY Global Delivery Services (GDS) offers a diverse and inclusive environment where you can collaborate with global teams and work on impactful projects across various business disciplines. You'll have access to continuous learning opportunities, transformative leadership resources, and a supportive culture that values individual contributions and fosters growth.

EY is committed to building a better working world by creating long-term value for clients, promoting diversity and trust, and addressing complex global challenges through innovative solutions. Join us to be part of a team that asks better questions to find new answers and make a positive impact on the world.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
As an Agentic Thinker and visionary Generative AI Subject Matter Expert (SME) at UBS in Hyderabad, you will lead the design, development, and deployment of cutting-edge generative AI solutions to solve complex business challenges. UBS emphasizes innovation and agility to enhance responsiveness, adaptability, and overall innovation in the workplace. In this role, you will leverage large language models (LLMs), diffusion models, and other generative architectures to drive innovation and act as a technical authority for generative AI. Your responsibilities will include architecting, training, and fine-tuning generative AI models for various applications such as content generation, conversational AI, code automation, and data synthesis. Additionally, you will lead research and development initiatives to enhance model performance, efficiency, and alignment with business objectives. You will collaborate with product, engineering, and business teams to integrate generative AI into workflows and translate business requirements into technical specifications for generative AI use cases. Moreover, you will optimize generative models for latency, cost, and scalability, deploy models on cloud platforms and edge devices, and implement safeguards to mitigate risks such as bias and misinformation while ensuring compliance with data privacy regulations and ethical AI guidelines. As part of the Mainframe & Midrange Stream team in India, you will work on Mainframe and Midrange Gen AI Journey, aligning with UBS AI elements and the Emerging Technology and Modernization Team. Your expertise should include a master's or PhD in Computer Science, Machine Learning, or a related field, along with at least 5 years of experience in AI/ML development with a focus on generative AI. 
You should have a proven track record of deploying generative AI solutions in production environments, deep expertise in generative architectures such as GPT and Transformers, proficiency in Python, PyTorch/TensorFlow, and cloud platforms, as well as experience with NLP tools and vector databases. Preferred skills include knowledge of multimodal AI, reinforcement learning, and familiarity with AI ethics frameworks. UBS, as the world's largest and truly global wealth manager, operates through four business divisions and has a presence in over 50 countries. If you are passionate about driving innovation, solving complex business challenges with generative AI, and being part of a diverse and inclusive culture that values collaboration and empowerment, UBS is the place for you. Join us to grow, learn, and make a difference in a supportive and inclusive environment.
Posted 1 month ago
6.0 - 10.0 years
0 Lacs
karnataka
On-site
As a Senior Generative AI Engineer, your primary role will involve conducting original research on generative AI models. You will focus on exploring model architecture, training methodologies, fine-tuning techniques, and evaluation strategies. It is essential to maintain a strong publication record in esteemed conferences and journals, demonstrating your valuable contributions to the fields of Natural Language Processing (NLP), Deep Learning (DL), and Machine Learning (ML). In addition, you will be responsible for designing and experimenting with multimodal generative models that incorporate various data types such as text, images, and other modalities to enhance AI capabilities. Your expertise will be crucial in developing autonomous AI systems that exhibit agentic behavior, enabling them to make independent decisions and adapt to dynamic environments. Leading the design, development, and implementation of generative AI models and systems will be a key aspect of your role. This involves selecting suitable models, training them on extensive datasets, fine-tuning hyperparameters, and optimizing overall performance. It is imperative to have a deep understanding of the problem domain to ensure effective model development and implementation. Furthermore, you will be tasked with optimizing generative AI algorithms to enhance their efficiency, scalability, and computational performance. Techniques such as parallelization, distributed computing, and hardware acceleration will be utilized to maximize the capabilities of modern computing architectures. Managing large datasets through data preprocessing and feature engineering to extract critical information for generative AI models will also be a crucial aspect of your responsibilities. Your role will also involve evaluating the performance of generative AI models using relevant metrics and validation techniques. 
By conducting experiments, analyzing results, and iteratively refining models, you will work towards achieving desired performance benchmarks. Providing technical leadership and mentorship to junior team members, guiding their development in generative AI, will also be part of your responsibilities. Documenting research findings, model architectures, methodologies, and experimental results thoroughly is essential. You will prepare technical reports, presentations, and whitepapers to effectively communicate insights and findings to stakeholders. Additionally, staying updated on the latest advancements in generative AI by reading research papers, attending conferences, and engaging with relevant communities is crucial to foster a culture of learning and innovation within the team. Mandatory technical skills for this role include strong programming abilities in Python and familiarity with frameworks like PyTorch or TensorFlow. In-depth knowledge of Deep Learning concepts such as CNNs, RNNs, LSTMs, and Transformer-based LLMs (BERT, GPT, etc.), along with NLP algorithms, is required. Experience with frameworks like LangGraph, CrewAI, or Autogen for developing, deploying, and evaluating AI agents is also essential. Preferred technical skills include expertise in cloud computing, particularly with Google/AWS/Azure Cloud Platform, and understanding Data Analytics Services offered by these platforms. Hands-on experience with ML platforms like GCP: Vertex AI, Azure: AI Foundry, or AWS SageMaker is desirable. Strong communication skills, the ability to work independently with minimal supervision, and a proactive approach to escalate when necessary are also key attributes for this role. If you have a Master's or PhD degree in Computer Science and 6 to 8 years of experience with a strong record of publications in top-tier conferences and journals, this role could be a great fit for you. Preference will be given to research scholars from esteemed institutions like IITs, NITs, and IIITs.
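As a toy illustration of the evaluation step this role centres on, the precision, recall, and F1 metrics named above can be computed directly from binary predictions. This is a minimal sketch with made-up labels; in practice scikit-learn or a similar library would be used:

```python
def binary_metrics(y_true, y_pred):
    """Compute precision, recall, and F1 for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy run: 3 true positives, 1 false positive, 1 false negative
p, r, f1 = binary_metrics([1, 1, 1, 1, 0, 0], [1, 1, 1, 0, 1, 0])
```

The same tp/fp/fn counting generalises to the multi-class metrics (macro/micro averaging) the role also mentions.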
Posted 1 month ago
4.0 - 9.0 years
9 - 30 Lacs
Bengaluru
Work from Office
- Proficiency in LLM systems and prompt fine-tuning
- Experience with infrastructure management, model deployment, and optimization
- Understanding of cloud architecture, performance, and scalability
- Experience with machine learning frameworks
Benefits: Health insurance, Provident fund
Posted 1 month ago
8.0 - 12.0 years
30 - 40 Lacs
Bengaluru
Work from Office
Work Location: Bangalore (in-office) We're seeking hands-on Lead Software Engineers who can: Build production-ready AI solutions on AWS Guide distributed teams while continuously scouting and prototyping emerging AI technologies You'll join our Product Engineering group, owning end-to-end delivery of next-generation Amorphic AI solutions while mentoring engineers across global teams. What is Expected: Technical Leadership Lead the integration of cutting-edge AI/LLMs into Amorphic AI Solutions, ensuring seamless interoperability and optimal performance Design and architect complex software systems with focus on scalability, maintainability, and performance Architect production-grade RAG pipelines and multi-agent orchestration on AWS (Lambda, ECS/Fargate, Bedrock, SageMaker, DynamoDB, S3, EventBridge, Step Functions) Drive the design and implementation of scalable AI pipelines Development & Innovation Design, develop, test, and maintain scalable backend applications using Python and AWS services Stay current with AI advancements through hands-on experimentation with emerging frameworks (LangChain, Hugging Face Transformers, CrewAI) via prototypes and side projects Optimize AI solution performance focusing on cost-effectiveness, latency, and resource utilization Develop strategies for monitoring, maintaining, and improving deployed AI models in production Team Leadership Lead 5-10 engineers through design reviews, pair-programming, and PR feedback Conduct code reviews and design discussions to ensure adherence to best practices Collaborate with cross-functional teams globally to identify requirements and implement solutions Create and maintain comprehensive documentation for architecture, design decisions, and coding practices Preferred Candidate Profile BE / B.Tech in Computer Science or related field 8+ years of experience in software development Solid understanding of large language models (LLMs), including experience with prompt engineering, fine-tuning, or 
integrating LLM APIs (e.g., from OpenAI, Anthropic, or AWS Bedrock) Hands-on experience building AI solutions using latest tools and frameworks (e.g., LangChain, CrewAI), demonstrated through side projects, open-source contributions, or personal prototypes Proven leadership experience in managing and mentoring high-performing teams of software and application developers Exceptional proficiency in Python programming language Solid understanding of AWS ecosystem including Lambda functions, S3 buckets, EMR clusters, DynamoDB tables, etc. Proven experience in a leadership role, leading software development teams in the delivery of complex projects Deep understanding of software architecture and design principles, with a focus on building scalable and maintainable systems Experience with distributed systems, microservices architecture, and cloud-based solutions Strong knowledge of software development best practices, including code reviews, testing, and CI/CD pipelines Experience working with AWS services and developing Cloud Native Applications using REST APIs is a must-have Experience working in an agile delivery environment, especially product engineering teams How We'll Take Care Of You: We believe in supporting our team members both professionally and personally. 
Here's a look at the comprehensive benefits and perks we offer: Financial Well-being & Security Competitive Compensation : Enjoy competitive salaries and bonuses that reward your hard work and dedication Robust Insurance Coverage : Benefit from health, life, and disability insurance to ensure you and your family are protected Provident Fund Eligibility : Secure your future with eligibility for the provident fund Work-Life Balance & Flexibility Flexible Working Hours : We offer flexible working hours to help you manage your personal and professional commitments Generous Paid Time Off : Take advantage of unlimited Paid Time Off (PTO), with a mandatory minimum of 1 week per year to ensure you recharge Comprehensive Leave Policies : We provide paid vacation days, sick leave, and holidays, plus supportive parental leave (maternity, paternity, and adoption) and bereavement leave when you need it most Professional Growth & Development Learning & Development : Elevate your skills with access to extensive certification and training programs Cutting-Edge Technologies : You'll work at the forefront of innovation with cutting-edge technologies, constantly igniting your passion for continuous learning and growth Culture & Community Recognition & Rewards : Your contributions won't go unnoticed with our recognition and reward programs Engaging Activities : Connect with your colleagues through company-sponsored events, outings, team-building activities, and retreats.
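The retrieval step of the RAG pipelines this role describes can be illustrated with a framework-free sketch. The two-dimensional embeddings below are toy values; a production system would query a vector store such as Pinecone, DynamoDB, or an OpenSearch index instead:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query_vec, docs, k=2):
    """docs: list of (doc_id, embedding); return the k ids most similar to the query."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy corpus of three embedded documents
docs = [("a", [1.0, 0.0]), ("b", [0.0, 1.0]), ("c", [0.7, 0.7])]
ids = top_k([1.0, 0.1], docs, k=2)
```

The retrieved ids would then be resolved to document text and injected into the LLM prompt, which is the "augmented generation" half of RAG.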
Posted 1 month ago
10.0 - 20.0 years
30 - 45 Lacs
Pune
Hybrid
Role Senior Python Developer (Data Science, AI/ML) HCL Software (hcl-software.com) delivers software that fulfils the transformative needs of clients around the world. We build award winning software across AI, Automation, Data & Analytics, Security and Cloud. The HCL Unica+ Marketing Platform enables our customers to deliver precision and high performance Marketing campaigns across multiple channels like Social Media, AdTech Platforms, Mobile Applications, Websites, etc. The Unica+ Marketing Platform is a Data and AI first platform that enables our clients to deliver hyper-personalized offers and messages for customer acquisition, product awareness and retention. We are seeking a Senior Python Developer with strong Data Science and Machine Learning skills and experience to deliver AI driven Marketing Campaigns. Responsibilities Python Programming & Libraries: Proficient in Python with extensive experience using Pandas for data manipulation, NumPy for numerical operations, and Matplotlib/Seaborn for data visualization. Statistical Analysis & Modelling: Strong understanding of statistical concepts, including descriptive statistics, inferential statistics, hypothesis testing, regression analysis, and time series analysis. Data Cleaning & Preprocessing: Expertise in handling messy real-world data, including dealing with missing values, outliers, data normalization/standardization, feature engineering, and data transformation. SQL & Database Management: Ability to query and manage data efficiently from relational databases using SQL, and ideally some familiarity with NoSQL databases. Exploratory Data Analysis (EDA): Skill in visually and numerically exploring datasets to understand their characteristics, identify patterns, anomalies, and relationships. 
Machine Learning Algorithms: In-depth knowledge and practical experience with a wide range of ML algorithms such as linear models, tree-based models (Random Forests, Gradient Boosting), SVMs, K-means, and dimensionality reduction techniques (PCA). Deep Learning Frameworks: Proficiency with at least one major deep learning framework like TensorFlow or PyTorch. This includes understanding neural network architectures (CNNs, RNNs, Transformers) and their application to various problems. Model Evaluation & Optimization: Ability to select appropriate evaluation metrics (e.g., precision, recall, F1-score, AUC-ROC, RMSE) for different problem types, diagnose model performance issues (bias-variance trade-off), and apply optimization techniques. Deployment & MLOps Concepts: Understanding of how to deploy machine learning models into production environments, including concepts of API creation, containerization (Docker), version control for models, and monitoring. Qualifications & Skills At least 8-10 years of Python development experience, with at least 4 years in data science and machine learning Experience with Customer Data Platforms (CDP) like TreasureData, Epsilon, Tealium, Adobe, Salesforce is advantageous. Experience with AWS SageMaker is advantageous. Experience with LangChain, RAG for Generative AI is advantageous. Expertise in Integration tools and frameworks like Postman, Swagger, API Gateways Knowledge of REST, JSON, XML, SOAP is a must Ability to work well within an agile team environment and apply the related working methods. Excellent communication & interpersonal skills A 4-year degree in Computer Science or IT is a must. Travel: 30% +/- travel required Location: India (Pune preferred) Compensation: Base salary, plus bonus
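The data cleaning and standardization skills listed above can be sketched without any libraries. In practice Pandas and NumPy would handle this; the toy helpers below show only mean imputation of missing values and z-score scaling:

```python
from statistics import mean, pstdev

def impute_mean(values):
    """Replace None entries with the mean of the present values."""
    present = [v for v in values if v is not None]
    mu = mean(present)
    return [mu if v is None else v for v in values]

def standardize(values):
    """Z-score scaling: subtract the mean, divide by the population std dev."""
    mu = mean(values)
    sigma = pstdev(values)
    if sigma == 0:  # constant column: map every value to 0
        return [0.0 for _ in values]
    return [(v - mu) / sigma for v in values]

# Impute, then scale, as one preprocessing step
cleaned = standardize(impute_mean([10.0, None, 30.0]))
```

With Pandas the same pair of steps collapses to `fillna` plus a vectorised `(col - col.mean()) / col.std()`.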
Posted 1 month ago
3.0 - 5.0 years
9 - 13 Lacs
Jaipur
Work from Office
Job Summary We're seeking a hands-on GenAI & Computer Vision Engineer with 3-5 years of experience delivering production-grade AI solutions. You must be fluent in the core libraries, tools, and cloud services listed below, and able to own end-to-end model development, from research and fine-tuning through deployment, monitoring, and iteration. In this role, you'll tackle domain-specific challenges like LLM hallucinations, vector search scalability, real-time inference constraints, and concept drift in vision models. Key Responsibilities Generative AI & LLM Engineering Fine-tune and evaluate LLMs (Hugging Face Transformers, Ollama, LLaMA) for specialized tasks Deploy high-throughput inference pipelines using vLLM or Triton Inference Server Design agent-based workflows with LangChain or LangGraph, integrating vector databases (Pinecone, Weaviate) for retrieval-augmented generation Build scalable inference APIs with FastAPI or Flask, managing batching, concurrency, and rate-limiting Computer Vision Development Develop and optimize CV models (YOLOv8, Mask R-CNN, ResNet, EfficientNet, ByteTrack) for detection, segmentation, classification, and tracking Implement real-time pipelines using NVIDIA DeepStream or OpenCV (cv2); optimize with TensorRT or ONNX Runtime for edge and cloud deployments Handle data challenges (augmentation, domain adaptation, semi-supervised learning) and mitigate model drift in production MLOps & Deployment Containerize models and services with Docker; orchestrate with Kubernetes (KServe) or AWS SageMaker Pipelines Implement CI/CD for model/version management (MLflow, DVC), automated testing, and performance monitoring (Prometheus + Grafana) Manage scalability and cost by leveraging cloud autoscaling on AWS (EC2/EKS), GCP (Vertex AI), or Azure ML (AKS) Cross-Functional Collaboration Define SLAs for latency, accuracy, and throughput alongside product and DevOps teams Evangelize best practices in prompt engineering, model governance, data privacy, and 
interpretability Mentor junior engineers on reproducible research, code reviews, and end-to-end AI delivery Required Qualifications: You must be proficient in at least one tool from each category below: LLM Frameworks & Tooling: Hugging Face Transformers, Ollama, vLLM, or LLaMA Agent & Retrieval Tools: LangChain or LangGraph; RAG with Pinecone, Weaviate, or Milvus Inference Serving: Triton Inference Server; FastAPI or Flask Computer Vision Frameworks & Libraries: PyTorch or TensorFlow; OpenCV (cv2) or NVIDIA DeepStream Model Optimization: TensorRT; ONNX Runtime; Torch-TensorRT MLOps & Versioning: Docker and Kubernetes (KServe, SageMaker); MLflow or DVC Monitoring & Observability: Prometheus; Grafana Cloud Platforms: AWS (SageMaker, EC2/EKS) or GCP (Vertex AI, AI Platform) or Azure ML (AKS, ML Studio) Programming Languages: Python (required); C++ or Go (preferred) Additionally: Bachelor's or Master's in Computer Science, Electrical Engineering, AI/ML, or a related field 3-5 years of professional experience shipping both generative and vision-based AI models in production Strong problem-solving mindset; ability to debug issues like LLM drift, vector index staleness, and model degradation Excellent verbal and written communication skills Typical Domain Challenges You'll Solve LLM Hallucination & Safety: Implement grounding, filtering, and classifier layers to reduce false or unsafe outputs Vector DB Scaling: Maintain low-latency, high-throughput similarity search as embeddings grow to millions Inference Latency: Balance batch sizing and concurrency to meet real-time SLAs on cloud and edge hardware Concept & Data Drift: Automate drift detection and retraining triggers in vision and language pipelines Multi-Modal Coordination: Seamlessly orchestrate data flow between vision models and LLM agents in complex workflows
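The rate-limiting that the inference-API responsibilities above call for is commonly implemented as a token bucket. The sketch below is illustrative (the rates and capacity are made-up numbers); a FastAPI middleware or an API gateway would normally provide this in production:

```python
import time

class TokenBucket:
    """Token-bucket limiter: refills `rate` tokens per second, up to `capacity`."""

    def __init__(self, rate, capacity, now=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)   # start full
        self.now = now                  # injectable clock, handy for testing
        self.last = now()

    def allow(self):
        """Consume one token and return True if the request is within budget."""
        t = self.now()
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Illustrative use: one bucket per client, checked before each inference call
bucket = TokenBucket(rate=5.0, capacity=10)
```

Rejected requests (where `allow()` returns False) would typically map to an HTTP 429 response from the serving layer.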
Posted 1 month ago
6.0 - 11.0 years
9 - 19 Lacs
Noida
Work from Office
We are looking for a skilled Machine Learning Engineer with strong expertise in Natural Language Processing (NLP) and AWS cloud services to design, develop, and deploy scalable ML models and pipelines. You will play a key role in building innovative NLP solutions for classification, forecasting, and recommendation systems, leveraging cutting-edge technologies to drive data-driven decision-making in the US healthcare domain. Key Responsibilities: Design and deploy scalable machine learning models focused on NLP tasks, classification, forecasting, and recommender systems. Build robust, end-to-end ML pipelines encompassing data ingestion, feature engineering, model training, validation, and production deployment. Apply advanced NLP techniques including sentiment analysis, named entity recognition (NER), embeddings, and document parsing to extract actionable insights from healthcare data. Utilize AWS services such as SageMaker, Lambda, Comprehend, and Bedrock for model training, deployment, monitoring, and optimization. Collaborate effectively with cross-functional teams including data scientists, software engineers, and product managers to integrate ML solutions into existing products and workflows. Implement MLOps best practices for model versioning, automated evaluation, CI/CD pipelines, and continuous improvement of deployed models. Leverage Python and ML/NLP libraries including scikit-learn, PyTorch, Hugging Face Transformers, and spaCy for daily development tasks. Research and explore advanced NLP/ML techniques such as Retrieval-Augmented Generation (RAG) pipelines, foundation model fine-tuning, and vector search methods for next-generation solutions. Required Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field. 6+ years of professional experience in machine learning, with a strong focus on NLP and AWS cloud services. Hands-on experience in designing and deploying production-grade ML models and pipelines. 
Strong programming skills in Python and familiarity with ML/NLP frameworks like PyTorch, Hugging Face, spaCy, scikit-learn. Proven experience with AWS ML ecosystem: SageMaker, Lambda, Comprehend, Bedrock, and related services. Solid understanding of MLOps principles including version control, model monitoring, and automated deployment. Experience working in the US healthcare domain is a plus. Excellent problem-solving skills and ability to work collaboratively in an agile environment. Preferred Skills: Familiarity with advanced NLP techniques such as RAG pipelines and foundation model tuning. Knowledge of vector databases and semantic search technologies. Experience with containerization (Docker, Kubernetes) and cloud infrastructure automation. Strong communication skills with the ability to translate complex technical concepts to non-technical stakeholders.
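A rule-based fallback illustrates the shape of the entity-extraction work described above. The patterns below are hypothetical toy examples (a date and an ICD-10-style code); a real healthcare NER pipeline would rely on spaCy, AWS Comprehend, or a fine-tuned transformer model instead:

```python
import re

# Hypothetical illustrative patterns; not a substitute for a trained NER model.
PATTERNS = {
    "DATE": re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),
    "ICD10": re.compile(r"\b[A-TV-Z]\d{2}(?:\.\d{1,2})?\b"),
}

def extract_entities(text):
    """Return (label, matched_text) pairs found in the text."""
    found = []
    for label, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            found.append((label, m.group()))
    return found

ents = extract_entities("Diagnosed E11.9 on 2024-03-01.")
```

Statistical NER replaces the regex table with a learned tagger but keeps the same interface: text in, labelled spans out.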
Posted 1 month ago
12.0 - 16.0 years
45 - 55 Lacs
Pune
Hybrid
So, what’s the role all about? We are looking for an experienced Architect with a strong background in AI/ML model integration , cloud-native development using AWS Bedrock , and proficiency in .NET and Python . The ideal candidate will play a key role in designing and architecting next-generation AI-powered applications and solutions that deliver business value at scale. How will you make an impact? Design and implement scalable AI solutions leveraging AWS Bedrock , including integration of foundation models from Amazon and third-party providers. Architect and lead the development of cloud-native applications using .NET Core and Python . Collaborate with cross-functional teams including Data Science, Product, and DevOps to define technical solutions. Evaluate and fine-tune AI models (e.g., text generation, summarization, classification) and optimize inference pipelines. Define best practices for model deployment, scalability, security, and monitoring in production. Drive innovation and rapid prototyping of generative AI and RAG (Retrieval-Augmented Generation) use cases. Provide architectural oversight and mentorship to engineering teams, ensuring delivery of robust, high-performance solutions. Have you got what it takes? Bachelor’s or Master’s degree in Computer Science, Engineering, or related field. 10+ years of experience in software architecture and development. Proven expertise in AWS services , specifically AWS Bedrock , Lambda, SageMaker, and related AI/ML offerings. Strong programming skills in Python and .NET Core (C#) . Experience with AI/ML models including LLMs, embeddings, and prompt engineering. Familiarity with Vector Databases and semantic search frameworks (e.g., Pinecone, FAISS, OpenSearch). Deep understanding of RESTful APIs, microservices, and cloud architecture patterns. Strong communication and leadership skills; ability to translate business needs into technical solutions. 
You will have an advantage if you also have: Experience with LangChain , RAG architectures , or custom Bedrock Agents . Exposure to frontend technologies (React, Angular) is a plus. Experience working in Agile/Scrum teams and DevOps environments. What’s in it for you? Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr! Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 8056 Reporting into: Tech Manager Role Type: Individual contributor
Posted 1 month ago
2.0 - 5.0 years
0 - 0 Lacs
Nagpur
Remote
Key Responsibilities: Provision and manage GPU-based EC2 instances for training and inference workloads. Configure and maintain EBS volumes and Amazon S3 buckets (versioning, lifecycle policies, multipart uploads) to handle large video and image datasets . Build, containerize, and deploy ML workloads using Docker and push images to ECR . Manage container deployment using Lambda , ECS , or AWS Batch for video inference jobs. Monitor and optimize cloud infrastructure using CloudWatch, Auto Scaling Groups , and Spot Instances to ensure cost efficiency. Set up and enforce IAM roles and permissions for secure access control across services. Collaborate with the AI/ML, annotation, and backend teams to streamline cloud-to-model pipelines. Automate cloud workflows and deployment pipelines using GitHub Actions , Jenkins , or similar CI/CD tools. Maintain logs, alerts, and system metrics for performance tuning and auditing. Required Skills: Cloud & Infrastructure: AWS Services : EC2 (GPU), S3, EBS, ECR, Lambda, Batch, CloudWatch, IAM Data Management : Large file transfer, S3 Multipart Uploads, storage lifecycle configuration, archive policies (Glacier/IA) Security & Access : IAM Policies, Roles, Access Keys, VPC (preferred) DevOps & Automation: Tools : Docker, GitHub Actions, Jenkins, Terraform (bonus) Scripting : Python, Shell scripting for automation & monitoring CI/CD : Experience in building and managing pipelines for model and API deployments ML/AI Environment Understanding: Familiarity with GPU-based ML workloads Knowledge of model training, inference architecture (batch and real-time) Experience with containerized ML model execution is a plus Preferred Qualifications: 2-5 years of experience in DevOps or Cloud Infrastructure roles AWS Associate/Professional Certification (DevOps/Architect) is a plus Experience in managing data-heavy pipelines , such as drones, surveillance, or video AI systems
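The S3 multipart uploads mentioned above operate under documented service limits (a 5 MiB minimum part size for all but the last part, and at most 10,000 parts per upload). A small planning helper, sketched here without the boto3 calls themselves, can choose a part size that respects both limits:

```python
import math

MIN_PART = 5 * 1024 * 1024   # S3 minimum part size (except the final part)
MAX_PARTS = 10_000           # S3 maximum number of parts per upload

def plan_parts(file_size, part_size=64 * 1024 * 1024):
    """Pick a (part_size, part_count) for a multipart upload of `file_size` bytes."""
    part_size = max(part_size, MIN_PART)
    # Grow the part size if the file would otherwise exceed the part-count limit.
    if math.ceil(file_size / part_size) > MAX_PARTS:
        part_size = math.ceil(file_size / MAX_PARTS)
    return part_size, max(1, math.ceil(file_size / part_size))

# 1 GiB video with the default 64 MiB parts -> 16 parts
size, count = plan_parts(1 * 1024 ** 3)
```

Each planned part would then be uploaded via `upload_part` (or, more simply, boto3's managed transfer utilities, which do this planning internally).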
Posted 1 month ago
12.0 - 17.0 years
45 - 50 Lacs
Hyderabad, Gurugram, Bengaluru
Work from Office
We are seeking a highly skilled and visionary Agentic AI Architect to lead the strategic design, development, and scalable implementation of autonomous AI systems within our organization. This role demands an individual with deep expertise in cutting-edge AI architectures, a strong commitment to ethical AI practices, and a proven ability to drive innovation. The ideal candidate will architect intelligent, self-directed decision-making systems that integrate seamlessly with enterprise workflows and propel our operational efficiency forward. Key Responsibilities As an Agentic AI Architect, you will: AI Architecture and System Design: Architect and design robust, scalable, and autonomous AI systems that seamlessly integrate with enterprise workflows, cloud platforms, and advanced LLM frameworks. Define blueprints for APIs, agents, and pipelines to enable dynamic, context-aware AI decision-making. Strategic AI Leadership: Provide technical leadership and strategic direction for AI initiatives focused on agentic systems. Guide cross-functional teams of AI engineers, data scientists, and developers in the adoption and implementation of advanced AI architectures. Framework and Platform Expertise: Evaluate, recommend, and implement leading AI tools and frameworks, with a strong focus on autonomous AI solutions (e.g., multi-agent frameworks, self-optimizing systems, LLM-driven decision engines). Drive the selection and utilization of cloud platforms (AWS SageMaker preferred, Azure ML, Google Cloud Vertex AI) for scalable AI deployments. Customization and Optimization: Design strategies for optimizing autonomous AI models for domain-specific tasks (e.g., real-time analytics, adaptive automation). Define methodologies for fine-tuning LLMs, multi-agent frameworks, and feedback loops to align with overarching business goals and architectural principles. 
Innovation and Research Integration: Spearhead the integration of R&D initiatives into production architectures, advancing agentic AI capabilities. Evaluate and prototype emerging frameworks (e.g., Autogen, AutoGPT, LangChain), neuro-symbolic architectures, and self-improving AI systems for architectural viability. Documentation and Architectural Blueprinting: Develop comprehensive technical white papers, architectural diagrams, and best practices for autonomous AI system design and deployment. Serve as a thought leader, sharing architectural insights at conferences and contributing to open-source AI communities. System Validation and Resilience: Design and oversee rigorous architectural testing of AI agents, including stress testing, adversarial scenario simulations, and bias mitigation strategies, ensuring alignment with compliance, ethical and performance benchmarks for robust production systems. Stakeholder Collaboration & Advocacy: Collaborate with executives, product teams, and compliance officers to align AI architectural initiatives with strategic objectives. Advocate for AI-driven innovation and architectural best practices across the organization. Qualifications: Technical Expertise: 12+ years of progressive experience in AI/ML, with a strong track record as an AI Architect , ML Architect, or AI Solutions Lead. 7+ years specifically focused on designing and architecting autonomous/agentic AI systems (e.g., multi-agent frameworks, self-optimizing systems, or LLM-driven decision engines). Expertise in Python (mandatory) and familiarity with Node.js for architectural integrations. Extensive hands-on experience with autonomous AI tools and frameworks : LangChain, Autogen, CrewAI, or architecting custom agentic frameworks. Proficiency in cloud platforms for AI architecture : AWS SageMaker (most preferred), Azure ML, or Google Cloud Vertex AI, with a deep understanding of their AI service offerings. 
Demonstrable experience with MLOps pipelines (e.g., Kubeflow, MLflow) and designing scalable deployment strategies for AI agents in production environments. Leadership & Strategic Acumen: Proven track record of leading the architectural direction of AI/ML teams, managing complex AI projects, and mentoring senior technical staff. Strong understanding and practical application of AI governance frameworks (e.g., EU AI Act, NIST AI RMF) and advanced bias mitigation techniques within AI architectures. Exceptional ability to translate complex technical AI concepts into clear, concise architectural plans and strategies for non-technical stakeholders and executive leadership. Ability to envision and articulate a long-term strategy for AI within the business, aligning AI initiatives with business objectives and market trends. Foster collaboration across various practices, including product management, engineering, and marketing, to ensure cohesive implementation of AI strategies that meet business goals.
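At its core, the multi-agent orchestration this role architects is a control loop passing shared state through specialised agents. The toy below uses plain functions with hypothetical step names and no LLM calls, purely to show that flow; frameworks like LangChain, Autogen, or CrewAI add the models, memory, and tool use on top:

```python
# Toy agents: each reads and updates a shared state dict.
def planner(state):
    """Decide which steps to take (hypothetical step names)."""
    state["steps"] = ["fetch", "summarize"]
    return state

def executor(state):
    """Carry out the planned steps; here we just record them as done."""
    state["done"] = list(state.get("steps", []))
    return state

def run_pipeline(state, agents):
    """Pass the shared state through each agent in order."""
    for agent in agents:
        state = agent(state)
    return state

result = run_pipeline({}, [planner, executor])
```

Real agentic systems extend this loop with LLM-driven planning, conditional routing, and feedback edges, but the shared-state hand-off pattern is the same.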
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
Visionify is dedicated to leveraging the potential of Computer Vision and AI for various real-world applications. We are currently seeking a highly skilled, motivated, and enthusiastic Senior Computer Vision Engineer to play a crucial role in implementing our strategic plans.

As a Senior Computer Vision Engineer at Visionify, you will tackle cutting-edge challenges in Computer Vision by devising innovative algorithms and optimizations. The majority of our projects revolve around practical applications of Computer Vision, necessitating a strong grasp of contemporary model types such as Classification, Object Detection, Object Recognition, OCR, LayoutML, and GAN networks. Proficiency in PyTorch is essential for this role, as it serves as our primary framework. Familiarity with Azure and Azure ML Studio would be advantageous.

Candidates applying for this position should remain abreast of the latest advancements and actively contribute to improving the performance and accuracy of our PyTorch-based projects. Your expertise in PyTorch and its underlying mechanisms will be pivotal in resolving customer challenges and offering valuable insights into product improvements. Experience in optimizing and streamlining models for deployment on edge devices, as well as converting models to NVIDIA TensorRT, will be highly valued.

A strong foundation in Python programming is indispensable, given its widespread use in our organization for developing training and inference pipelines. Effective communication and presentation skills are also crucial. The ideal candidate will exhibit a deep passion for artificial intelligence and a commitment to staying updated on industry trends.

**Responsibilities:**
- Understanding business objectives and devising Computer Vision solutions that align with these goals, including developing training and inference frameworks and leveraging various ML technologies.
- Building and optimizing PyTorch models for different runtime environments, including NVIDIA Jetson TensorRT.
- Guiding the development team, addressing their queries, and facilitating the timely completion of their tasks.
- Creating ML/Computer Vision algorithms to address specific challenges.
- Analyzing and visualizing data to identify potential performance-affecting disparities in data distribution, especially when deploying models in real-world scenarios.
- Establishing processes for core team operations, such as data acquisition, model training, and prototype development.
- Identifying and utilizing open-source datasets for prototype building.
- Developing pipelines for data processing, augmentation, training, inference, and active retraining.
- Training models, fine-tuning hyperparameters, and devising strategies to address model errors.
- Deploying models for production use.

**Requirements:**
- Bachelor's or Master's degree in Computer Science, Computer Engineering, IT, or a related field.
- Minimum of 5 years of relevant experience; candidates with exceptional skills but less experience are encouraged to apply.
- Industry experience in Image & Video Processing, including familiarity with OpenCV, GStreamer, TensorFlow, PyTorch, TensorRT, and various model training/inference techniques.
- Proficiency in deep learning classification models (e.g., ResNet, Inception, VGG) and object detection models (e.g., MobileNet-SSD, YOLO, Fast R-CNN, Mask R-CNN).
- Strong command of PyTorch and Torchvision, and the ability to develop training routines and update models effectively.
- Familiarity with Colab, Jupyter Notebook, CUDA/GPU, and CNN visualization techniques like CAM and Grad-CAM.
- Expertise in Computer Vision and real-time video processing methods.
- Proficiency in Python programming and adeptness at writing reusable code.
- Experience with OpenCV, Scikit packages, NVIDIA platform tools (e.g., DeepStream, TensorRT), Python web frameworks (e.g., Flask, Django, FastAPI), and ML platforms (e.g., PyTorch, TensorFlow).
- Knowledge of AWS SageMaker, various databases (e.g., Elasticsearch, SQL, NoSQL, Hive), cloud environments (preferably AWS) for software development, GPU-based training infrastructures, Docker, and DevOps and MLOps best practices for ML systems.

**Desired Traits:**
- Collaborative mindset and ability to thrive in a team environment.
- Adaptability to evolving requirements.
- Proclivity for innovative problem-solving.
- Strong focus on work quality and developing robust code.
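The object detection models named in the requirements (YOLO, Fast R-CNN, Mask R-CNN) all rely on non-maximum suppression to de-duplicate overlapping predictions. A minimal NumPy sketch of that post-processing step; the box coordinates, scores, and IoU threshold below are illustrative assumptions, not part of any Visionify pipeline:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression over [x1, y1, x2, y2] boxes."""
    order = scores.argsort()[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # Intersection rectangle between box i and each remaining box
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou < iou_thresh]  # drop boxes that overlap box i too much
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # → [0, 2]: the two overlapping boxes collapse to one
```

In production this step is usually delegated to a library implementation (e.g., the framework's own NMS op), but the logic above is what those ops compute.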
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
Vola Finance is a rapidly expanding fintech company that is transforming the landscape of financial access and management. Our cutting-edge platform empowers individuals to enhance their financial well-being and take charge of their expenditures through a range of innovative tools and solutions. With the support of top-tier investors, we are dedicated to crafting products that have a significant positive impact on the lives of our users.

Our founding team comprises enthusiastic leaders with extensive backgrounds in finance and technology. Drawing upon their vast experience from leading global corporations, they are committed to cultivating a culture of creativity, teamwork, and excellence within our organization.

As a member of our team, your primary responsibilities will include:
- Developing churn prediction models utilizing advanced machine learning algorithms based on user transactional and behavioral data
- Constructing regression models to predict users' income and balances using transaction data
- Creating customer segmentation and recommendation engines for cross-selling initiatives
- Building natural language processing models to gauge customer sentiment
- Developing propensity models and conducting lifetime value (LTV) analysis
- Establishing modern data pipelines and processing systems using AWS PaaS components like Glue and SageMaker Studio
- Utilizing API tools such as REST, Swagger, and Postman
- Deploying models in the AWS environment and managing the production setup
- Collaborating effectively with cross-functional teams to collect data and derive insights

Essential Technical Skill Set:
1. Prior experience in fintech product and growth strategy
2. Proficiency in Python
3. Strong grasp of linear regression, logistic regression, and tree-based machine learning algorithms
4. Sound knowledge of statistical analysis and A/B testing
5. Familiarity with AWS services such as SageMaker, S3, EC2, and Docker
6. Experience with REST APIs, Swagger, and Postman
7. Proficiency in Excel
8. Competence in SQL
9. Ability to work with visualization tools like Redash or Grafana
10. Familiarity with versioning tools like Bitbucket, GitHub, etc.
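As a rough illustration of the churn-prediction work described above, here is a minimal sketch of fitting a logistic regression by plain gradient descent on synthetic data. The two features, their coefficients, and the generated labels are entirely hypothetical stand-ins, not Vola Finance data or models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for transactional/behavioral features (hypothetical):
# e.g., standardized monthly transaction count and days since last login.
n = 500
X = rng.normal(size=(n, 2))
# Assume churn is more likely with fewer transactions and longer inactivity.
p_true = 1 / (1 + np.exp(-(-1.5 * X[:, 0] + 1.0 * X[:, 1])))
y = (rng.random(n) < p_true).astype(float)

# Logistic regression fitted by full-batch gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y) / n)   # gradient of mean log-loss w.r.t. weights
    b -= 0.5 * (p - y).mean()        # gradient w.r.t. intercept

pred = (1 / (1 + np.exp(-(X @ w + b)))) > 0.5
accuracy = float((pred == y.astype(bool)).mean())
print(round(accuracy, 2))
```

In practice this would be a scikit-learn or SageMaker training job rather than hand-rolled gradient descent, but the fitted decision boundary is the same idea.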
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
maharashtra
On-site
At PwC, the focus in data and analytics revolves around leveraging data to drive insights and make informed business decisions. By utilizing advanced analytics techniques, our team helps clients optimize operations and achieve strategic goals. As a professional in data analysis at PwC, you will specialize in utilizing advanced analytical techniques to extract insights from large datasets, supporting data-driven decision-making. Your role will involve leveraging skills in data manipulation, visualization, and statistical modeling to assist clients in solving complex business problems.

PwC US - Acceleration Center is currently seeking individuals with a strong analytical background to join our Analytics Consulting practice. As a Senior Associate, you will be an essential part of business analytics teams in India, collaborating with clients and consultants in the U.S. You will lead teams for high-end analytics consulting engagements and provide business recommendations to project teams.

**Years of Experience:**
Candidates should possess 4+ years of hands-on experience.

**Must Have:**
- Experience in building ML models in cloud environments (at least 1 of the 3: Azure ML, GCP's Vertex AI platform, AWS SageMaker)
- Knowledge of predictive/prescriptive analytics, particularly the use of Log-Log, Log-Linear, and Bayesian regression techniques, machine learning algorithms (supervised and unsupervised), deep learning algorithms, and artificial neural networks
- Good knowledge of statistics, including statistical tests & distributions
- Experience in data analysis, such as data cleansing, standardization, and data preparation for machine learning use cases
- Experience in machine learning frameworks and tools (e.g., scikit-learn, mlr, caret, H2O, TensorFlow, PyTorch, MLlib)
- Advanced-level programming in SQL or Python/PySpark
- Expertise with visualization tools like Tableau, Power BI, AWS QuickSight, etc.

**Nice To Have:**
- Working knowledge of containerization (e.g., AWS EKS, Kubernetes), Docker, and data pipeline orchestration (e.g., Airflow)
- Good communication and presentation skills

**Roles And Responsibilities:**
- Develop and execute project & analysis plans under the guidance of the Project Manager
- Interact with and advise consultants/clients in the U.S. as a subject matter expert to formalize data sources, acquire datasets, and clarify data & use cases for a strong understanding of data and business problems
- Drive and conduct analysis using advanced analytics tools and mentor junior team members
- Implement quality control measures to ensure deliverable integrity
- Validate analysis outcomes and recommendations with stakeholders, including the client team
- Build storylines and deliver presentations to the client team and/or PwC project leadership team
- Contribute to knowledge sharing and firm-building activities

**Professional And Educational Background:**
- Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's Degree / MBA
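One concrete form of the statistical testing mentioned above is a two-proportion z-test, commonly used to compare conversion rates between two experiment arms. A small stdlib-only sketch; the conversion counts are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in proportions, using the pooled SE."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return z, p_value

# Hypothetical experiment: 120/1000 conversions in arm A vs 150/1000 in arm B.
z, p = two_proportion_ztest(120, 1000, 150, 1000)
print(f"z={z:.2f}, p={p:.4f}")
```

Libraries such as statsmodels provide the same test with continuity corrections and confidence intervals; the version above shows only the core calculation.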
Posted 1 month ago
3.0 - 6.0 years
4 - 7 Lacs
Ahmedabad, Vadodara
Work from Office
AI/ML Engineer (2-3 positions)

Job Summary:
We are seeking a highly skilled and motivated AI/ML Engineer specializing in Computer Vision and Unsupervised Learning to join our growing team. You will be responsible for building, optimizing, and deploying advanced video analytics solutions for smart surveillance applications, including real-time detection, facial recognition, and activity analysis. This role combines the core competencies of AI/ML modelling with the practical skills required to deploy and scale models in real-world production environments, both in the cloud and on edge devices.

Key Responsibilities:

AI/ML Development & Computer Vision
- Design, train, and evaluate models for:
  - Face detection and recognition
  - Object/person detection and tracking
  - Intrusion and anomaly detection
  - Human activity or pose recognition/estimation
- Work with models such as YOLOv8, DeepSORT, RetinaNet, Faster R-CNN, and InsightFace.
- Perform data preprocessing, augmentation, and annotation using tools like LabelImg, CVAT, or custom pipelines.

Surveillance System Integration
- Integrate computer vision models with live CCTV/RTSP streams for real-time analytics.
- Develop components for motion detection, zone-based event alerts, person re-identification, and multi-camera coordination.
- Optimize solutions for low-latency inference on edge devices (Jetson Nano, Xavier, Intel Movidius, Coral TPU).

Model Optimization & Deployment
- Convert and optimize trained models using ONNX, TensorRT, or OpenVINO for real-time inference.
- Build and deploy APIs using FastAPI, Flask, or TorchServe.
- Package applications using Docker and orchestrate deployments with Kubernetes.
- Automate model deployment workflows using CI/CD pipelines (GitHub Actions, Jenkins).
- Monitor model performance in production using Prometheus, Grafana, and log management tools.
- Manage model versioning, rollback strategies, and experiment tracking using MLflow or DVC.

As an AI/ML Engineer, you should be well-versed in AI agent development and have fine-tuning experience.

Collaboration & Documentation
- Work closely with backend developers, hardware engineers, and DevOps teams.
- Maintain clear documentation of ML pipelines, training results, and deployment practices.
- Stay current with emerging research and innovations in AI vision and MLOps.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
- 3-6 years of experience in AI/ML, with a strong portfolio in computer vision and machine learning.
- Hands-on experience with:
  - Deep learning frameworks: PyTorch, TensorFlow
  - Image/video processing: OpenCV, NumPy
  - Detection and tracking frameworks: YOLOv8, DeepSORT, RetinaNet
- Solid understanding of deep learning architectures (CNNs, Transformers, Siamese Networks).
- Proven experience with real-time model deployment in cloud or edge environments.
- Strong Python programming skills and familiarity with Git, REST APIs, and DevOps tools.

Preferred Qualifications:
- Experience with multi-camera synchronization and NVR/DVR systems.
- Familiarity with ONVIF protocols and camera SDKs.
- Experience deploying AI models on Jetson Nano/Xavier, Intel NCS2, or Coral Edge TPU.
- Background in face recognition systems (e.g., InsightFace, FaceNet, Dlib).
- Understanding of security protocols and compliance in surveillance systems.

Tools & Technologies:
- Languages & AI: Python, PyTorch, TensorFlow, OpenCV, NumPy, Scikit-learn
- Model Serving: FastAPI, Flask, TorchServe, TensorFlow Serving, REST/gRPC APIs
- Model Optimization: ONNX, TensorRT, OpenVINO, Pruning, Quantization
- Deployment: Docker, Kubernetes, Gunicorn, MLflow, DVC
- CI/CD & DevOps: GitHub Actions, Jenkins, GitLab CI
- Cloud & Edge: AWS SageMaker, Azure ML, GCP AI Platform, Jetson, Movidius, Coral TPU
- Monitoring: Prometheus, Grafana, ELK Stack, Sentry
- Annotation Tools: LabelImg, CVAT, Supervisely
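The motion-detection component described in the responsibilities often starts with simple frame differencing before any heavy model runs. A minimal NumPy sketch of that idea; the function name, thresholds, and frame sizes are illustrative assumptions:

```python
import numpy as np

def motion_alert(prev_frame, frame, diff_thresh=25, area_frac=0.01):
    """Flag motion when enough pixels change between consecutive grayscale frames.

    A simplified stand-in for the frame-differencing step that typically
    gates heavier detection models on edge devices.
    """
    # Cast to a signed type so the pixel difference cannot wrap around.
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = (diff > diff_thresh).mean()  # fraction of pixels that changed
    return changed >= area_frac, changed

static = np.zeros((120, 160), dtype=np.uint8)
moving = static.copy()
moving[40:80, 60:100] = 200                 # a bright object enters the scene

print(motion_alert(static, static)[0])      # False: nothing changed
print(motion_alert(static, moving)[0])      # True: object covers >1% of the frame
```

A production pipeline would typically add background subtraction, morphological filtering, and per-zone masks (e.g., via OpenCV) on top of this basic gate.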
Posted 1 month ago