Home
Jobs

1339 Inference Jobs - Page 46

Set up a Job Alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

5.0 - 10.0 years

0 Lacs

India

On-site

About Oportun
Oportun (Nasdaq: OPRT) is a mission-driven fintech that puts its 2.0 million members' financial goals within reach. With intelligent borrowing, savings, and budgeting capabilities, Oportun empowers members with the confidence to build a better financial future. Since inception, Oportun has provided more than $16.6 billion in responsible and affordable credit, saved its members more than $2.4 billion in interest and fees, and helped its members save an average of more than $1,800 annually. Oportun has been certified as a Community Development Financial Institution (CDFI) since 2009.

Working at Oportun
Working at Oportun means enjoying a differentiated experience of being part of a team that fosters a diverse, equitable and inclusive culture where we all feel a sense of belonging and are encouraged to share our perspectives. This inclusive culture is directly connected to our organization's performance and ability to fulfill our mission of delivering affordable credit to those left out of the financial mainstream. We celebrate and nurture our inclusive culture through our employee resource groups.

Company Overview
At Oportun, we are on a mission to foster financial inclusion for all by providing affordable and responsible lending solutions to underserved communities. As a purpose-driven financial technology company, we believe in empowering our customers with access to responsible credit that can positively transform their lives. Our relentless commitment to innovation and data-driven practices has positioned us as a leader in the industry, and we are actively seeking exceptional individuals to join our team as Senior Software Engineer to play a critical role in driving positive change.

Position Overview
We are seeking a highly skilled Platform Engineer with expertise in building self-serve platforms that combine real-time ML deployment and advanced data engineering capabilities. This role requires a blend of cloud-native platform engineering, data pipeline development, and deployment expertise. The ideal candidate will have a strong background in implementing data workflows and building platforms that enable self-serve ML pipelines with seamless deployments.

Responsibilities
Platform Engineering: Design and build self-serve platforms that support real-time ML deployment and robust data engineering workflows. Create APIs and backend services using Python and FastAPI to manage and monitor ML workflows and data pipelines.
Real-Time ML Deployment: Implement platforms for real-time ML inference using tools like AWS SageMaker and Databricks. Enable model versioning, monitoring, and lifecycle management with observability tools such as New Relic.
Data Engineering: Build and optimise ETL/ELT pipelines for data preprocessing, transformation, and storage using PySpark and Pandas. Develop and manage feature stores to ensure consistent, high-quality data for ML model training and deployment. Design scalable, distributed data pipelines on platforms like AWS, integrating tools such as DynamoDB, PostgreSQL, MongoDB, and MariaDB.
CI/CD and Automation: Build CI/CD pipelines using Jenkins, GitHub Actions, and other tools for automated deployments and testing. Automate data validation and monitoring processes to ensure high-quality and consistent data workflows.
Documentation and Collaboration: Create and maintain detailed technical documentation, including high-level and low-level architecture designs. Collaborate with cross-functional teams to gather requirements and deliver solutions that align with business goals. Participate in Agile processes such as sprint planning, daily standups, and retrospectives using tools like Jira.

Required Qualifications
5-10 years of experience in IT, including 5-8 years in platform backend engineering and 1 year in DevOps and data engineering roles. Hands-on experience with real-time ML model deployment and data engineering workflows.

Technical Skills
Strong expertise in Python and experience with Pandas, PySpark, and FastAPI. Proficiency in container orchestration tools such as Kubernetes (K8s) and Docker. Advanced knowledge of AWS services like SageMaker, Lambda, DynamoDB, EC2, and S3. Proven experience building and optimizing distributed data pipelines using Databricks and PySpark. Solid understanding of databases such as MongoDB, DynamoDB, MariaDB, and PostgreSQL. Proficiency with CI/CD tools like Jenkins, GitHub Actions, and related automation frameworks. Hands-on experience with observability tools like New Relic for monitoring and troubleshooting.

We are proud to be an Equal Opportunity Employer and consider all qualified applicants for employment opportunities without regard to race, age, color, religion, gender, national origin, disability, sexual orientation, veteran status or any other category protected by the laws or regulations in the locations where we operate. California applicants can find a copy of Oportun's CCPA Notice here: https://oportun.com/privacy/california-privacy-notice/. We will never request personal identifiable information (bank, credit card, etc.) before you are hired. We do not charge you for pre-employment fees such as background checks, training, or equipment. If you think you have been a victim of fraud by someone posing as us, please report your experience to the FBI's Internet Crime Complaint Center (IC3).
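The "APIs and backend services using Python and FastAPI to manage and monitor ML workflows" responsibility above can be pictured as a small control-plane service. The sketch below is illustrative only: the endpoint paths, pipeline names, and in-memory run store are assumptions invented for the example, not Oportun's actual platform.

```python
# Minimal sketch of a self-serve pipeline API; endpoint names, the PIPELINES
# registry, and the in-memory run store are illustrative placeholders.
import uuid
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="ML Pipeline Control Plane")

PIPELINES = {"feature_refresh", "model_retrain"}    # hypothetical pipeline names
RUNS: dict[str, dict] = {}                          # in-memory run store for the sketch

class RunRequest(BaseModel):
    pipeline: str
    params: dict = {}

@app.post("/runs")
def start_run(req: RunRequest):
    """Trigger a pipeline run and return a run id for later monitoring."""
    if req.pipeline not in PIPELINES:
        raise HTTPException(status_code=404, detail="unknown pipeline")
    run_id = str(uuid.uuid4())
    # A real platform would enqueue work on Databricks/SageMaker here.
    RUNS[run_id] = {"pipeline": req.pipeline, "params": req.params, "status": "RUNNING"}
    return {"run_id": run_id}

@app.get("/runs/{run_id}")
def get_run(run_id: str):
    """Report the current status of a run (the monitoring half of the workflow)."""
    if run_id not in RUNS:
        raise HTTPException(status_code=404, detail="unknown run")
    return RUNS[run_id]
```

Run with `uvicorn app:app` and the two endpoints give a trigger/monitor loop; a production version would back the run store with a database and delegate execution to the actual pipeline runner.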

Posted 1 month ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

AI/LLM Architect
Medicine moves too slow. At Velsera, we are changing that. Velsera was formed in 2023 through the shared vision of Seven Bridges and Pierian, with a mission to accelerate the discovery, development, and delivery of life-changing insights. Velsera provides software and professional services for: AI-powered multimodal data harmonization and analytics for drug discovery and development; IVD development, validation, and regulatory approval; and clinical NGS interpretation, reporting, and adoption. With our headquarters in Boston, MA, we are growing and expanding our teams located in different countries!

What will you do?
Lead and participate in collaborative solutioning sessions with business stakeholders, translating business requirements and challenges into well-defined machine learning/data science use cases and comprehensive AI solution specifications. Architect robust and scalable AI solutions that enable data-driven decision-making, leveraging a deep understanding of statistical modeling, machine learning, and deep learning techniques to forecast business outcomes and optimize performance. Design and implement data integration strategies to unify and streamline diverse data sources, creating a consistent and cohesive data landscape for AI model development. Develop efficient and programmatic methods for synthesizing large volumes of data, extracting relevant features, and preparing data for AI model training and validation. Leverage advanced feature engineering techniques and quantitative methods, including statistical modeling, machine learning, deep learning, and generative AI, to implement, validate, and optimize AI models for accuracy, reliability, and performance. Simplify data presentation to help stakeholders easily grasp insights and make informed decisions. Maintain a deep understanding of the latest advancements in AI and generative AI, including various model architectures, training methodologies, and evaluation metrics. Identify opportunities to leverage generative AI to securely and ethically address business needs, optimize existing processes, and drive innovation. Contribute to project management processes, providing regular status updates, and ensuring the timely delivery of high-quality AI solutions. Primarily responsible for contributing to project delivery and maximizing business impact through effective AI solution architecture and implementation. Occasionally contribute technical expertise during pre-sales engagements and support internal operational improvements as needed.

What do you bring to the table?
A bachelor's or master's degree in a quantitative field (e.g., Computer Science, Statistics, Mathematics, Engineering) is required. The ideal candidate will have a strong background in designing and implementing end-to-end AI/ML pipelines, including feature engineering, model training, and inference. Experience with generative AI pipelines is needed. 8+ years of experience in AI/ML development, with at least 3+ years in an AI architecture role. Fluency in Python, SQL, and NoSQL is essential. Experience with common data science libraries such as pandas and scikit-learn, as well as deep learning frameworks like PyTorch and TensorFlow, is required. Hands-on experience with cloud-based AI/ML platforms and tools, such as AWS (SageMaker, Bedrock), GCP (Vertex AI, Gemini), Azure AI Studio, or OpenAI, is a must. This includes experience with deploying and managing models in the cloud.

Our Core Values
People first. We create collaborative and supportive environments by operating with respect and flexibility to promote mental, emotional and physical health. We practice empathy by treating others the way they want to be treated and assuming positive intent. We are proud of our inclusive, diverse team and humble ourselves to learn about and build our connection with each other.
Patient focused. We act with swift determination without sacrificing our expectations of quality. We are driven by providing exceptional solutions for our customers to positively impact patient lives. Considering what is at stake, we challenge ourselves to develop the best solution, not just the easy one.
Integrity. We hold ourselves accountable and strive for transparent communication to build trust amongst ourselves and our customers. We take ownership of our results as we know what we do matters and collectively we will change the healthcare industry. We are thoughtful and intentional with every customer interaction, understanding the overall impact on human health.
Curious. We ask questions and actively listen in order to learn and continuously improve. We embrace change and the opportunities it presents to make each other better. We strive to be on the cutting edge of science and technology innovation by encouraging creativity.
Impactful. We take our social responsibility with the seriousness it deserves and hold ourselves to a high standard. We improve our sustainability by encouraging discussion and taking action as it relates to our natural, social and economic resource footprint. We are devoted to our humanitarian mission and look for new ways to make the world a better place.

Velsera is an Equal Opportunity Employer: Velsera is proud to be an equal opportunity employer committed to providing employment opportunity regardless of sex, race, creed, colour, gender, religion, marital status, domestic partner status, age, national origin or ancestry.

Posted 1 month ago

Apply

0 years

0 Lacs

India

On-site

About the Role:
We are seeking an experienced MLOps Engineer with a strong background in NVIDIA GPU-based containerization and scalable ML infrastructure (contractual, assignment basis). You will work closely with data scientists, ML engineers, and DevOps teams to build, deploy, and maintain robust, high-performance machine learning pipelines using NVIDIA NGC containers, Docker, Kubernetes, and modern MLOps practices.

Key Responsibilities:
Design, develop, and maintain end-to-end MLOps pipelines for training, validation, deployment, and monitoring of ML models. Implement GPU-accelerated workflows using NVIDIA NGC containers, CUDA, and RAPIDS. Containerize ML workloads using Docker and deploy on Kubernetes (preferably with GPU support via the NVIDIA device plugin for K8s). Integrate model versioning, reproducibility, CI/CD, and automated model retraining using tools like MLflow, DVC, Kubeflow, or similar. Optimize model deployment for inference on NVIDIA hardware using TensorRT, Triton Inference Server, or ONNX Runtime-GPU. Manage cloud/on-prem GPU infrastructure and monitor resource utilization and model performance in production. Collaborate with data scientists to transition models from research to production-ready pipelines.

Required Skills:
Proficiency in Python and ML libraries (e.g., TensorFlow, PyTorch, Scikit-learn). Strong experience with Docker, Kubernetes, and NVIDIA GPU containerization (NGC, nvidia-docker). Familiarity with NVIDIA Triton Inference Server, TensorRT, and CUDA. Experience with CI/CD for ML (GitHub Actions, GitLab CI, Jenkins, etc.). Deep understanding of ML lifecycle management, monitoring, and retraining. Experience working with cloud platforms (AWS/GCP/Azure) or on-prem GPU clusters.

Preferred Qualifications:
Experience with Kubeflow, Seldon Core, or similar orchestration tools. Exposure to Airflow, MLflow, Weights & Biases, or DVC. Knowledge of NVIDIA RAPIDS and distributed GPU workloads. MLOps certifications or NVIDIA Deep Learning Institute training (preferred but not mandatory).
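One of the inference paths named above, ONNX Runtime-GPU, can be illustrated with a minimal sketch. It assumes the onnxruntime-gpu package is installed and that a model file named model.onnx with a single input tensor exists; both the file name and input shape are placeholders for the example.

```python
# Minimal ONNX Runtime-GPU inference sketch; model path and input shape are
# placeholders, and the CUDA provider silently falls back to CPU if unavailable.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

input_name = session.get_inputs()[0].name                   # e.g. "input"
batch = np.random.rand(8, 3, 224, 224).astype(np.float32)   # dummy image batch

outputs = session.run(None, {input_name: batch})            # list of output arrays
print(outputs[0].shape)
```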

Posted 1 month ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Company: Qualcomm India Private Limited
Job Area: Information Technology Group, Information Technology Group > IT Software Developer

General Summary
Qualcomm's EDAAP (Engineering Solutions and AIML) team is seeking an experienced engineer to develop and support a scalable machine learning platform. The ideal candidate will have a strong background in building and operating distributed systems, with expertise in Rust, Python, Kubernetes, and Linux. You will play a critical role in developing, supporting and debugging our Generative AI platforms.

Experience
3 to 7 years of experience, strong knowledge of Python or Rust and NoSQL (Mongo/Redis), and working experience developing/supporting large-scale end-user-facing applications.

Responsibilities
Develop, debug and support end-to-end components of a large-scale Generative AI platform. Set up and operate Kubernetes clusters for efficient deployment and management of containerized applications. Implement distributed microservices architecture to enable scalable and fault-tolerant inference pipelines. Ensure optimal performance, security, and reliability of inference platforms, leveraging expertise in Linux, networking, servers, and data centers. Develop and maintain scripts and tools for automating deployment, monitoring, and maintenance tasks. Troubleshoot issues and optimize system performance, using knowledge of data structures and algorithms. Work closely with users to debug issues and address performance and scalability issues. Participate in code reviews, contributing to the improvement of the overall code quality and best practices.

Requirements/Skills
3 to 7 years of experience in software development, with a focus on building scalable and distributed systems. Proficiency in Rust and Python programming languages, with experience in developing high-performance applications. Experience setting up and operating Kubernetes clusters, including deployment, scaling, and management of containerized applications. Strong understanding of distributed microservices architecture and its application in large-scale systems. Excellent knowledge of Linux, including shell scripting, package management, and system administration. Good understanding of networking fundamentals, including protocols, architectures, and network security. Familiarity with data structures and algorithms, including trade-offs and optimization techniques. Experience debugging complex production issues in large-scale application platforms. Experience working with cloud-native technologies, such as containers, orchestration, and service meshes. Strong problem-solving skills, with the ability to debug complex issues and optimize system performance. Excellent communication and collaboration skills, with experience working with cross-functional teams and customers.

Minimum Qualifications
3+ years of IT-relevant work experience with a Bachelor's degree in a technical field (e.g., Computer Engineering, Computer Science, Information Systems), OR 5+ years of IT-relevant work experience without a Bachelor's degree. 3+ years of any combination of academic or work experience with full-stack application development (e.g., Java, Python, JavaScript, etc.). 1+ year of any combination of academic or work experience with data structures, algorithms, and data stores. Bachelors (Engineering) or Masters.

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries.)

Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.

To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.

3072987
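As a rough illustration of the kind of cluster-operations tooling this role involves, the snippet below lists pods in a namespace and flags any that are not running, using the official Kubernetes Python client. It is a generic sketch assuming kubeconfig access and a namespace named "inference"; none of it is specific to Qualcomm's platform.

```python
# Generic cluster health sketch using the official kubernetes Python client;
# the "inference" namespace is an assumption made for the example.
from kubernetes import client, config

config.load_kube_config()            # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod(namespace="inference")
for pod in pods.items:
    phase = pod.status.phase         # Pending, Running, Succeeded, Failed, Unknown
    if phase != "Running":
        print(f"{pod.metadata.name}: {phase}")
```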

Posted 1 month ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

This role is for one of Weekday's clients.
Salary range: Rs 1000000 - Rs 1500000 (i.e., INR 10-15 LPA)
Min Experience: 3 years
Location: Bengaluru
Job Type: Full-time

About the Role
We are seeking a passionate and skilled AI Engineer to join our innovative engineering team. In this role, you will play a pivotal part in designing, developing, and deploying cutting-edge artificial intelligence solutions with a focus on natural language processing (NLP), computer vision, and machine learning models using TensorFlow and related frameworks. You will work on challenging projects that leverage large-scale data, deep learning, and advanced AI techniques, helping transform business problems into smart, automated, and scalable solutions. If you're someone who thrives in a fast-paced, tech-driven environment and loves solving real-world problems with AI, we'd love to hear from you.

Key Responsibilities
Design, develop, train, and deploy AI/ML models using frameworks such as TensorFlow, Keras, and PyTorch. Implement solutions across NLP, computer vision, and deep learning domains, using advanced techniques such as transformers, CNNs, LSTMs, OCR, image classification, and object detection. Collaborate closely with product managers, data scientists, and software engineers to identify use cases, define architecture, and integrate AI solutions into products. Optimize model performance for speed, accuracy, and scalability, using industry best practices in model tuning, validation, and A/B testing. Deploy AI models to cloud platforms such as AWS, GCP, and Azure, leveraging their native AI/ML services for efficient and reliable operation. Stay up to date with the latest AI research, trends, and technologies, and propose how they can be applied within the company's context. Ensure model explainability, reproducibility, and compliance with ethical AI standards. Contribute to the development of MLOps pipelines for managing model versioning, CI/CD for ML, and monitoring deployed models in production.

Required Skills & Qualifications
3+ years of hands-on experience building and deploying AI/ML models in production environments. Proficiency in TensorFlow and deep learning workflows; experience with PyTorch is a plus. Strong foundation in natural language processing (e.g., NER, text classification, sentiment analysis, transformers) and computer vision (e.g., image processing, object recognition). Experience deploying and managing AI models on AWS, Google Cloud Platform (GCP), and Microsoft Azure. Skilled in Python and relevant libraries such as NumPy, Pandas, OpenCV, Scikit-learn, Hugging Face Transformers, etc. Familiarity with model deployment tools such as TensorFlow Serving, Docker, and Kubernetes. Experience working in cross-functional teams and agile environments. Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or related field.

Preferred Qualifications
Experience with MLOps tools and pipelines (MLflow, Kubeflow, SageMaker, etc.). Knowledge of data privacy and ethical AI practices. Exposure to edge AI or real-time inference systems.
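Several of the NLP tasks listed above (sentiment analysis, text classification) can be prototyped in a few lines with the Hugging Face Transformers pipeline API. This is a generic sketch rather than the client's stack; it assumes the transformers package is installed and downloads a default English sentiment model on first use.

```python
# Quick sentiment-analysis prototype with Hugging Face Transformers;
# the default model is downloaded on first run and is English-only.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

reviews = [
    "The checkout flow is fast and painless.",
    "Support never answered my ticket.",
]
for review, result in zip(reviews, classifier(reviews)):
    print(f"{result['label']:8s} {result['score']:.3f}  {review}")
```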

Posted 1 month ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Position Summary
As the Senior Data Analyst, Product Analytics, Marketplace Analytics & Data Science, you will be part of the team with an aim to evaluate the effectiveness and efficiency of the Marketplace platform. Your focus will be to support the Product team as they prioritize and build capabilities and tools to allow Sellers to both sell and ship products to Customers.

About the Team
You will be responsible for building out a holistic view of performance as well as a detailed analytics roadmap across the entire Product Lifecycle by leveraging state-of-the-art analytics tools (e.g., SQL/Hive/Hadoop/Cloud, Mixpanel, Quantum Metric, Tableau/ThoughtSpot/Looker, etc.). Key projects include supporting Product Discovery, providing business impact sizing, assessing the performance of Product Features, and understanding Seller Behavior through Clickstream Analytics. This role is highly visible since you must work cross-functionally with Product, Engineering, Business, and Operations. Given the size and scale of Walmart and the advanced capabilities being built by Product for Marketplace, this position will have a significant impact.

What You Will Do
Serve as thought leader and trusted advisor to Product leadership and the larger product management team by collaborating with them through the entire product lifecycle, from discovery and A/B testing to post-launch insights and learnings. Demonstrate proactive, solution-oriented thought leadership: you are always looking ahead to what's coming next, you solve problems within and outside of your core area, and you are comfortable actively influencing leaders to ensure you drive key decisions and priorities. Proactively identify product opportunities within and beyond product ownership areas through a hypothesis-driven culture and data-driven deep dives. Organize and assemble the resources, technology, and processes to support the Product Analytics needs of the Marketplace Product teams. Work successfully with a cross-functional group consisting of Product, Engineering and Business to drive data-based decisions. Interface with product and business stakeholders across geographies to proactively identify opportunities, develop business acumen, cultivate stakeholder relationships and develop best-in-class data analytics solutions. Define the product engagement data capture strategy and collaborate with Engineering to ensure the accuracy of the data. Have a strong understanding of various data sources and how to organize and utilize them to deliver critical insights to the broader organization. Leverage clickstream data to identify opportunities for improving customer experience and influence product roadmaps. Perform conversion analysis, funnel analysis, and impact sizing to influence decision-making. Use extensive hands-on experience with SQL to query different databases. Define and monitor KPIs to measure product performance and product health. Create effective reporting and dashboards by applying expertise in data visualization tools such as Mixpanel, Quantum Metric, Looker/Tableau, and Splunk to monitor product performance. Design and execute A/B tests, observational inference, and predictive analytics to identify and quantify the impact of new product features as an ongoing discipline to constantly improve product features and provide better experiences for customers across all platforms such as Desktop, Mobile, etc.

What You Will Bring
MBA or Master's degree in Mathematics, Engineering, Statistics or a related technical field. 4-9 years of experience in data analysis or an analytical capacity, including 3+ years of experience in Product Analytics, Digital Analytics, or eCommerce Analytics. Proficient SQL programming skills with an understanding of database capabilities and experience of integrating, structuring, and analyzing large amounts of data from diverse sources. Ability to design A/B tests to test and quantify the impact of new product features, and to drive A/B testing as an ongoing discipline to constantly improve product features and provide better experiences for customers across all platforms (e.g., Desktop, Mobile, etc.). Experience leveraging big data technologies (Hive/Hadoop) and modern data visualization tools (Tableau, ThoughtSpot, Looker) to blend data from multiple sources to help answer multi-dimensional business questions. Expert-level understanding of the Microsoft Office suite, especially Excel and PowerPoint. Strong analytical and quantitative skills and ability to synthesize findings into tangible actions that help drive business outcomes. Strong organizational skills, a strong sense of ownership and accountability, and the ability to lead projects, communicate effectively, and be a self-starter. You can communicate technical material to a range of audiences and tell a story that provides insight into the business. You embrace tackling complex problems with a high degree of ambiguity.

Preferred Qualifications
Background in Product Analytics, ideally with experience in two-sided businesses (Buyer & Seller) like a marketplace (eBay), rideshare (Uber), or other sharing business models. Experience with A/B and multivariate test design and implementation and regression modelling. Retail and/or eCommerce industry experience in a heavily data-driven environment preferred. Working knowledge of Digital Product Analytics methodologies; preference will be given to candidates with experience in both B2B and B2C digital products. Experience using enterprise-level product analytics platforms (e.g., Mixpanel, Quantum Metric, Splunk, etc.). You have a passion for working in a fast-paced agile environment.

About Walmart Global Tech
Imagine working in an environment where one line of code can make life easier for hundreds of millions of people. That's what we do at Walmart Global Tech. We're a team of software engineers, data scientists, cybersecurity experts and service professionals within the world's leading retailer who make an epic impact and are at the forefront of the next retail disruption. People are why we innovate, and people power our innovations. We are people-led and tech-empowered. We train our team in the skillsets of the future and bring in experts like you to help us grow. We have roles for those chasing their first opportunity as well as those looking for the opportunity that will define their career. Here, you can kickstart a great career in tech, gain new skills and experience for virtually every industry, or leverage your expertise to innovate at scale, impact millions and reimagine the future of retail.

Flexible, Hybrid Work
We use a hybrid way of working with primary in-office presence coupled with an optimal mix of virtual presence. We use our campuses to collaborate and be together in person, as business needs require and for development and networking opportunities. This approach helps us make quicker decisions, remove location barriers across our global team, and be more flexible in our personal lives.

Benefits
Beyond our great compensation package, you can receive incentive awards for your performance. Other great perks include a host of best-in-class benefits: maternity and parental leave, PTO, health benefits, and much more.

Belonging
We aim to create a culture where every associate feels valued for who they are, rooted in respect for the individual. Our goal is to foster a sense of belonging, to create opportunities for all our associates, customers and suppliers, and to be a Walmart for everyone. At Walmart, our vision is "everyone included." By fostering a workplace culture where everyone is, and feels, included, everyone wins. Our associates and customers reflect the makeup of all 19 countries where we operate. By making Walmart a welcoming place where all people feel like they belong, we're able to engage associates, strengthen our business, improve our ability to serve customers, and support the communities where we operate.

Minimum Qualifications
Outlined below are the required minimum qualifications for this position. If none are listed, there are no minimum qualifications. Option 1: Bachelor's degree in Business, Engineering, Statistics, Economics, Analytics, Mathematics, Arts, Finance or related field and 2 years' experience in data analysis, data science, statistics, or related field. Option 2: Master's degree in Business, Engineering, Statistics, Economics, Analytics, Mathematics, Computer Science, Information Technology or related field. Option 3: 4 years' experience in data analysis, data science, statistics, or related field.

Preferred Qualifications
Outlined below are the optional preferred qualifications for this position. If none are listed, there are no preferred qualifications.

Primary Location
G, 1, 3, 4, 5 Floor, Building 11, SEZ, Cessna Business Park, Kadubeesanahalli Village, Varthur Hobli, India

R-2106424
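The A/B testing work described in this listing often comes down to comparing conversion rates between a control and a treatment group. The sketch below shows one common way to do that, a two-proportion z-test with statsmodels; the counts are made-up numbers and the library choice is an assumption, not something specified by the posting.

```python
# Two-proportion z-test for an A/B conversion comparison (illustrative numbers).
from statsmodels.stats.proportion import proportions_ztest

conversions = [530, 612]       # converted sellers in control vs. treatment (hypothetical)
exposures = [10000, 10000]     # sellers exposed to each variant

z_stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
lift = conversions[1] / exposures[1] - conversions[0] / exposures[0]

print(f"absolute lift = {lift:.4f}, z = {z_stat:.2f}, p = {p_value:.4f}")
```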

Posted 1 month ago

Apply

0 years

0 Lacs

India

On-site

Flexera saves customers billions of dollars in wasted technology spend. A pioneer in Hybrid ITAM and FinOps, Flexera provides award-winning, data-oriented SaaS solutions for technology value optimization (TVO), enabling IT, finance, procurement and cloud teams to gain deep insights into cost optimization, compliance and risks for each business service. Flexera One solutions are built on a set of definitive customer, supplier and industry data, powered by our Technology Intelligence Platform, that enables organizations to visualize their Enterprise Technology Blueprint™ in hybrid environments—from on-premises to SaaS to containers to cloud. We're transforming the software industry. We're Flexera. With more than 50,000 customers across the world, we're achieving that goal. But we know we can't do any of that without our team. Ready to help us re-imagine the industry during a time of substantial growth and ambitious plans? Come and see why we're consistently recognized by Gartner, Forrester and IDC as a category leader in the marketplace. Learn more at flexera.com

Job Summary:
We are seeking a skilled and motivated Senior Data Engineer to join our Automation, AI/ML team. In this role, you will work on designing, building, and maintaining data pipelines and infrastructure to support AI/ML initiatives, while contributing to the automation of key processes. This position requires expertise in data engineering, cloud technologies, and database systems, with a strong emphasis on scalability, performance, and innovation.

Key Responsibilities:
Identify and automate manual processes to improve efficiency and reduce operational overhead. Design, develop, and optimize scalable data pipelines to integrate data from multiple sources, including Oracle and SQL Server databases. Collaborate with data scientists and AI/ML engineers to ensure efficient access to high-quality data for training and inference models. Implement automation solutions for data ingestion, processing, and integration using modern tools and frameworks. Monitor, troubleshoot, and enhance data workflows to ensure performance, reliability, and scalability. Apply advanced data transformation techniques, including ETL/ELT processes, to prepare data for AI/ML use cases. Develop solutions to optimize storage and compute costs while ensuring data security and compliance.

Required Skills and Qualifications:
Experience in identifying, streamlining, and automating repetitive or manual processes. Proven experience as a Data Engineer, working with large-scale database systems (e.g., Oracle, SQL Server) and cloud platforms (AWS, Azure, Google Cloud). Expertise in building and maintaining data pipelines using tools like Apache Airflow, Talend, or Azure Data Factory. Strong programming skills in Python, Scala, or Java for data processing and automation tasks. Experience with data warehousing technologies such as Snowflake, Redshift, or Azure Synapse. Proficiency in SQL for data extraction, transformation, and analysis. Familiarity with tools such as Databricks, MLflow, or H2O.ai for integrating data engineering with AI/ML workflows. Experience with DevOps practices and tools, such as Jenkins, GitLab CI/CD, Docker, and Kubernetes. Knowledge of AI/ML concepts and their integration into data workflows. Strong problem-solving skills and attention to detail.

Preferred Qualifications:
Knowledge of security best practices, including data encryption and access control. Familiarity with big data technologies like Hadoop, Spark, or Kafka. Exposure to Databricks for data engineering and advanced analytics workflows.

Flexera is proud to be an equal opportunity employer. Qualified applicants will be considered for open roles regardless of age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by local/national laws, policies and/or regulations. Flexera understands the value that results from employing a diverse, equitable, and inclusive workforce. We recognize that equity necessitates acknowledging past exclusion and that inclusion requires intentional effort. Our DEI (Diversity, Equity, and Inclusion) council is the driving force behind our commitment to championing policies and practices that foster a welcoming environment for all. We encourage candidates requiring accommodations to please let us know by emailing careers@flexera.com.
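Pipeline orchestration with Apache Airflow, one of the tools named above, typically takes the form of a small DAG of Python tasks. The sketch below is a generic, minimal example for Airflow 2.x; the DAG id, task names, and extract/load bodies are placeholders rather than anything from Flexera's stack.

```python
# Minimal Airflow 2.x DAG sketch; task bodies are placeholders for real
# extract/transform/load logic against Oracle/SQL Server or a warehouse.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull rows from the source database")

def load():
    print("write transformed rows to the warehouse")

with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task   # extract must finish before load starts
```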

Posted 1 month ago

Apply

0.0 - 3.0 years

0 Lacs

Sukhlia, Indore, Madhya Pradesh

Remote

Job Title: AWS & DevOps Engineer
Department: DevOps
Location: Indore
Job Type: Full-time
Experience: 3-5 years
Notice Period: 0-15 days (immediate joiners preferred)
Work Arrangement: On-site (Work from Office)

Advantal Technologies is looking for a skilled AWS & DevOps Engineer to help build and manage the cloud infrastructure. This role involves designing scalable infrastructure, automating deployments, enforcing security, and supporting a hybrid (AWS + open-source) deployment strategy.

Key Responsibilities:
AWS Cloud Infrastructure: Design, provision, and manage secure and scalable cloud architecture on AWS. Configure and manage core services: VPC, EC2, S3, RDS (PostgreSQL), Lambda, CloudFront, Cognito, and IAM. Deploy AI models using Amazon SageMaker for inference at scale. Manage API integrations via Amazon API Gateway and AWS WAF.
DevOps & Automation: Implement CI/CD pipelines using AWS CodePipeline, GitHub Actions, or GitLab CI. Containerize backend applications using Docker and orchestrate with AWS ECS/Fargate or Kubernetes (for on-prem/hybrid). Use Terraform or AWS CloudFormation for Infrastructure as Code (IaC). Monitor applications using CloudWatch, Security Hub, and CloudTrail.
Security & Compliance: Implement IAM policies and KMS key management, and enforce Zero Trust architecture. Configure S3 object lock, audit logs, and data classification controls. Support GDPR/HIPAA-ready compliance setup via AWS Config, GuardDuty, and Security Hub.

Required Skills & Experience (Must-Have):
3-5 years of hands-on experience in AWS infrastructure and services. Proficiency with Terraform, CloudFormation, or other IaC tools. Experience with Docker, CI/CD pipelines, and cloud networking (VPC, NAT, Route 53). Strong understanding of DevSecOps principles and AWS security best practices. Experience supporting production-grade SaaS applications.

Nice-to-Have:
Exposure to AI/ML model deployment (especially via SageMaker or containerized APIs). Knowledge of multi-tenant SaaS infrastructure patterns. Experience with Vault, Keycloak, or open-source IAM/security stacks for non-AWS environments. Familiarity with Kubernetes (EKS or self-hosted).

Tools & Stack You'll Use:
AWS (Lambda, RDS, S3, SageMaker, Cognito, CloudFront, CloudWatch, API Gateway); Terraform, Docker, GitHub Actions; CI/CD: GitHub, GitLab, AWS CodePipeline; Monitoring: CloudWatch, GuardDuty, Prometheus (non-AWS); Security: KMS, IAM, Vault.

Please share resume to hr@advantal.net

Job Types: Full-time, Permanent
Pay: ₹261,624.08 - ₹1,126,628.25 per year
Benefits: Paid time off, Provident Fund, Work from home
Schedule: Day shift, Monday to Friday
Ability to commute/relocate: Sukhlia, Indore, Madhya Pradesh: Reliably commute or willing to relocate with an employer-provided relocation package (Preferred)
Experience: AWS DevOps: 3 years (Required)
Work Location: In person
Speak with the employer: +91 9131295441
Expected Start Date: 02/06/2025
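Invoking a deployed SageMaker inference endpoint, one of the tasks this role touches, is usually a short boto3 call. The snippet below is a generic sketch; the region, endpoint name, and JSON payload are invented for illustration and would differ in any real deployment.

```python
# Generic SageMaker endpoint invocation with boto3; endpoint name and payload
# are placeholders, and the endpoint must already exist in the account/region.
import json
import boto3

runtime = boto3.client("sagemaker-runtime", region_name="ap-south-1")

payload = {"features": [0.42, 1.3, 7.0]}    # hypothetical model input
response = runtime.invoke_endpoint(
    EndpointName="demo-model-endpoint",     # placeholder endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)

prediction = json.loads(response["Body"].read())
print(prediction)
```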

Posted 1 month ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We're on the lookout for a Data Science Manager with deep expertise in Speech-to-Text (STT), Natural Language Processing (NLP), and Generative AI to lead a high-impact Conversational AI initiative for one of our premier EMEA-based clients. You'll not only guide a team of data scientists and ML engineers but also work hands-on to build cutting-edge systems for real-time transcription, sentiment analysis, summarization, and intelligent decision-making. Your solutions will enable smarter engagement strategies, unlock valuable insights, and directly impact client success.

What You'll Do:
Strategic Leadership & Delivery: Lead the end-to-end delivery of AI solutions for transcription and conversation analytics. Collaborate with client stakeholders to understand business problems and translate them into AI strategies. Provide mentorship to team members, foster best practices, and ensure high-quality technical delivery.
Conversational AI Development: Oversee development and tuning of ASR models using tools like Whisper, DeepSpeech, Kaldi, and AWS/GCP STT. Guide implementation of speaker diarization for multi-speaker conversations. Ensure solutions are domain-tuned and accurate in real-world conditions.
Generative AI & NLP Applications: Architect LLM-based pipelines for summarization, topic extraction, and conversation analytics. Design and implement custom RAG pipelines to enrich conversational insights using external knowledge bases. Apply prompt engineering and NER techniques for context-aware interactions.
Decision Intelligence & Sentiment Analysis: Drive the development of models for sentiment detection, intent classification, and predictive recommendations. Enable intelligent workflows that suggest next-best actions and enhance customer experiences.
AI at Scale: Oversee deployment pipelines using Docker, Kubernetes, FastAPI, and cloud-native tools (AWS/GCP/Azure AI). Champion cost-effective model serving using ONNX, TensorRT, or Triton. Implement and monitor MLOps workflows to support continuous learning and model evolution.

What You'll Bring to the Table:
Technical Excellence: 8+ years of proven experience leading teams in the Speech-to-Text, NLP, LLM, and Conversational AI domains. Strong Python skills and experience with PyTorch, TensorFlow, Hugging Face, and LangChain. Deep understanding of RAG architectures, vector DBs (FAISS, Pinecone, Weaviate), and cloud deployment practices. Hands-on experience with real-time applications and inference optimization.
Leadership & Communication: Ability to balance strategic thinking with hands-on execution. Strong mentorship and team management skills. Exceptional communication and stakeholder engagement capabilities. A passion for transforming business needs into scalable AI systems.
Bonus Points For: Experience in healthcare, pharma, or life sciences conversational use cases. Exposure to knowledge graphs, RLHF, or multimodal AI. Demonstrated impact through cross-functional leadership and client-facing solutioning.

What do you get in return?
Competitive Salary: Your skills and contributions are highly valued here, and we make sure your salary reflects that, rewarding you fairly for the knowledge and experience you bring to the table.
Dynamic Career Growth: Our vibrant environment offers you the opportunity to grow rapidly, providing the right tools, mentorship, and experiences to fast-track your career.
Idea Tanks: Innovation lives here. Our "Idea Tanks" are your playground to pitch, experiment, and collaborate on ideas that can shape the future.
Growth Chats: Dive into our casual "Growth Chats" where you can learn from the best, whether it's over lunch or during a laid-back session with peers; it's the perfect space to grow your skills.
Snack Zone: Stay fueled and inspired! In our Snack Zone, you'll find a variety of snacks to keep your energy high and ideas flowing.
Recognition & Rewards: We believe great work deserves to be recognized. Expect regular Hive-Fives, shoutouts and the chance to see your ideas come to life as part of our reward program.
Fuel Your Growth Journey with Certifications: We're all about your growth groove! Level up your skills with our support as we cover the cost of your certifications.
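For the transcription work referenced above, open-source Whisper is one of the ASR options the listing names. The sketch below shows the simplest offline usage of the openai-whisper package; the model size and audio filename are arbitrary choices for the example, and a production system would add streaming, diarization, and post-processing.

```python
# Minimal offline transcription with the open-source openai-whisper package;
# model size and audio path are illustrative, and ffmpeg must be installed.
import whisper

model = whisper.load_model("base")                 # small multilingual model
result = model.transcribe("support_call.wav")      # placeholder audio file

print(result["text"])                              # full transcript
for segment in result["segments"]:                 # rough per-segment timestamps
    print(f"[{segment['start']:6.1f}s - {segment['end']:6.1f}s] {segment['text']}")
```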

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

In Norconsulting we are currently looking for an AI Developer to join us in Chennai in a freelance opportunity for a major banking organization.

Duration: long term
Location: Chennai
Rate: 110 USD/day (around 2200 USD per month)
Type of assignment: Full-time (8h/day, Monday to Friday)

Skills / Experience Required: AI Developer
• Large Language Models (LLMs) & Prompt Engineering: Experience working with transformer-based models (e.g., GPT, BERT) and crafting effective prompts for tasks like summarization, text classification and document understanding.
• Azure Document Intelligence: Hands-on experience with Azure AI Document Intelligence for extracting structured data from unstructured documents (invoices, forms, contracts).
• Model Development & Evaluation: Strong foundation in ML algorithms, model evaluation metrics, and hyperparameter tuning using tools like Scikit-learn, XGBoost, or PyTorch.
• MLOps (Machine Learning Operations): Proficient in building and managing ML pipelines using Azure ML, MLflow, and CI/CD tools for model training, deployment, and monitoring.
• Azure Machine Learning (Azure ML): Experience with Azure ML Studio, automated ML, model registry, and deployment to endpoints or containers.
• Azure Functions & Serverless AI: Building event-driven AI workflows using Azure Functions for real-time inference, data processing, and integration with other Azure services.
• Programming Languages: Strong coding skills in Python (preferred), with knowledge of libraries like NumPy, Pandas, Scikit-learn, and Matplotlib.
• Database & Data Lakes: Experience with SQL and NoSQL databases, and integration with data lakes for AI pipelines.
• DevOps & Git Integration: Experience with Azure DevOps for version control, testing, and continuous integration of AI workflows.

WBGJP00012309
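The model development and hyperparameter tuning skills listed above can be illustrated with a small scikit-learn example: a cross-validated grid search over a gradient-boosting classifier on a bundled toy dataset. It is a generic sketch, not the bank's workflow, and the parameter grid is arbitrary.

```python
# Hyperparameter tuning sketch with scikit-learn's GridSearchCV on a toy dataset;
# the parameter grid and model choice are arbitrary examples.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

param_grid = {"n_estimators": [100, 200], "max_depth": [2, 3], "learning_rate": [0.05, 0.1]}
search = GridSearchCV(GradientBoostingClassifier(random_state=0), param_grid, cv=5, scoring="roc_auc")
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("held-out ROC AUC:", search.score(X_test, y_test))
```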

Posted 1 month ago

Apply

2.0 years

0 Lacs

India

Remote

Job Title: AI Full Stack Developer – GenAI & NLP
Location: Pune, India (Hybrid)
Work Mode: Remote
Experience Required: 2+ Years (Relevant AI/ML with GenAI & NLP)
Salary: Up to ₹15 LPA (CTC)
Employment Type: Full-time
Department: AI Research & Development

Role Overview
We are looking for a passionate AI Developer with strong hands-on experience in Generative AI and Natural Language Processing (NLP) to help build intelligent and scalable solutions. In this role, you will design and deploy advanced AI models for tasks such as language generation, summarization, chatbot development, document analysis, and more. You'll work with cutting-edge LLMs (Large Language Models) and contribute to impactful AI initiatives.

Key Responsibilities
Design, fine-tune, and deploy NLP and GenAI models using LLMs like GPT, BERT, LLaMA, or similar. Build applications for tasks like text generation, question-answering, summarization, sentiment analysis, and semantic search. Integrate language models into production systems using RESTful APIs or cloud services. Evaluate and optimize models for accuracy, latency, and cost. Collaborate with product and engineering teams to implement intelligent user-facing features. Preprocess and annotate text data, create custom datasets, and manage model pipelines. Stay updated on the latest advancements in generative AI, transformer models, and NLP frameworks.

Required Skills & Qualifications
Bachelor's or Master's degree in Computer Science, AI/ML, or a related field. Minimum 2 years of experience in full-stack development and AI/ML development, with recent work in NLP or Generative AI. Hands-on experience with models such as GPT, T5, BERT, or similar transformer-based architectures. Proficient in Python and libraries such as Hugging Face Transformers, spaCy, NLTK, or OpenAI APIs. Hands-on experience in any frontend/backend technologies for software development. Experience with deploying models using Flask, FastAPI, or similar frameworks. Strong understanding of NLP tasks, embeddings, vector databases (e.g., FAISS, Pinecone), and prompt engineering. Familiarity with MLOps tools and cloud platforms (AWS, Azure, or GCP).

Preferred Qualifications
Experience with LangChain, RAG (Retrieval-Augmented Generation), or custom LLM fine-tuning. Knowledge of model compression, quantization, or inference optimization. Exposure to ethical AI, model interpretability, and data privacy practices.

What We Offer
Competitive salary package up to ₹15 LPA. Remote work flexibility with hybrid team collaboration in Pune. Opportunity to work on real-world generative AI and NLP applications. Access to resources for continuous learning and certification support. Inclusive, fast-paced, and innovative work culture.

Skills: nltk, computer vision, inference optimization, model interpretability, gpt, bert, mlops, artificial intelligence, next.js, tensorflow, ai development, machine learning, generative ai, ml, openai, node.js, kubernetes, large language models (llms), openai apis, natural language processing, machine learning (ml), fastapi, natural language processing (nlp), java, azure, nlp tasks, model compression, embeddings, vector databases, aws, typescript, r, hugging face transformers, google cloud, hugging face, llama, ai tools, mlops tools, rag architectures, langchain, spacy, docker, retrieval-augmented generation (rag), pytorch, gcp, cloud, large language models, react.js, deep learning, python, ai technologies, flask, ci/cd, data privacy, django, quantization, javascript, ethical ai, nlp
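Semantic search over embeddings with a vector index, one of the skills this role calls out, can be prototyped with sentence-transformers and FAISS. The sketch below is generic; the model name, documents, and query are placeholders, and normalized embeddings with an inner-product index give cosine-similarity search.

```python
# Semantic search sketch: encode documents, index them in FAISS, query by meaning.
# Model name, documents, and query are illustrative placeholders.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Reset your password from the account settings page.",
    "Invoices are emailed at the start of each billing cycle.",
    "Contact support to change the registered phone number.",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vecs.shape[1])        # inner product == cosine on normalized vectors
index.add(np.asarray(doc_vecs, dtype=np.float32))

query_vec = model.encode(["how do I change my password"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query_vec, dtype=np.float32), k=2)
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {docs[i]}")
```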

Posted 1 month ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Experience: 3+ years
Job Location: Bengaluru, Karnataka
Work Modality: Full-time work from office

Job Description:
To develop LLM-driven products from the ground up. We are looking for enthusiastic members who would like to design cutting-edge systems and implement AI solutions that scale globally. Strong communication skills, problem-solving abilities, a strong programming background, and an understanding of the Transformer architecture are expected.

Required Qualifications:
3+ years of hands-on experience in AI/ML, with proven projects using Transformers (e.g., BERT, GPT, T5, ViTs, small LLMs). Strong proficiency in Python and deep learning frameworks (PyTorch or TensorFlow). Ability to independently analyze open sources and code repositories. Experience in fine-tuning Transformer models for NLP (e.g., text classification, summarization) or Computer Vision (e.g., image generation, recognition). Knowledge of GPU acceleration, optimization techniques, and model quantization. Experience in deploying models using Flask, FastAPI, or cloud-based inference services. Familiarity with data pre-processing, feature engineering, and training workflows.
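Fine-tuning a Transformer for text classification, one of the core qualifications above, commonly follows the Hugging Face Trainer pattern sketched below. It is a minimal, generic outline assuming the transformers and datasets packages; the checkpoint, dataset, subsample sizes, and hyperparameters are arbitrary example choices.

```python
# Minimal fine-tuning sketch with Hugging Face Trainer on a small public dataset;
# checkpoint, dataset, and hyperparameters are illustrative choices only.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

dataset = load_dataset("imdb")                      # binary sentiment dataset

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

encoded = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="finetune-out",
    per_device_train_batch_size=16,
    num_train_epochs=1,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"].shuffle(seed=0).select(range(2000)),  # subsample for speed
    eval_dataset=encoded["test"].select(range(1000)),
)
trainer.train()
print(trainer.evaluate())
```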

Posted 1 month ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Role: Senior AI Engineer
Experience: 3+ years in AI/ML/Data Science
Location: Gurgaon, work from office

About Tap Health:
Tap Health is a deep-tech startup transforming chronic care with AI and changing how people access health information. We build next-generation, AI-driven digital therapeutics for diabetes, PCOS, hypertension, asthma, pregnancy, obesity and more, eliminating the need for human support while significantly reducing costs, improving engagement and boosting outcomes. Tap Health's fully autonomous digital therapeutic for diabetes simplifies management by delivering real-time, daily guidance to optimize health outcomes at less than 10% of the cost of legacy products. Powered by adaptive AI and clinical protocols, it dynamically personalizes each user's care journey, delivering tailored insights, lifestyle interventions, motivational nudges, adherence support, and improved clinical outcomes. Beyond digital therapeutics, Tap Health's Health Assistant assists users in primary symptom diagnosis based on their inputs and provides instant health advice through a seamless, voice-first experience. www.tap.health

Role Overview:
We are hiring a Senior AI Engineer in Gurgaon to drive AI-driven healthcare innovations. The ideal candidate has 3+ years of AI/ML experience, 1+ year of GenAI production experience, and 1+ year of hands-on GenAI product development. You need to have a strong data science background and expertise in GenAI, Agentic AI deployments, causal inference, and Bayesian modelling, with a strong foundation in LLMs and traditional models. You will collaborate with the AI, Engineering, and Product teams to build scalable, consumer-focused healthcare solutions. You will be the go-to expert, the engineer others turn to when they hit roadblocks. You will mentor, collaborate, and enable high product velocity while fostering a culture of continuous learning and innovation.

Skills & Experience
The ideal candidate should have the following qualities: Strong understanding of fine-tuning, optimisation, and neural architectures. Hands-on experience with Python, PyTorch, and FastAI frameworks. Experience running production workloads on one or more hyperscalers (AWS, GCP, Azure, Oracle, DigitalOcean, etc.). In-depth knowledge of LLMs, how they work and their limitations. Ability to assess the advantages of fine-tuning, including dataset selection strategies. Understanding of Agentic AI frameworks, MCPs (Multi-Component Prompting), ACP (Adaptive Control Policies), and autonomous workflows. Familiarity with evaluation metrics for fine-tuned models and industry-specific public benchmarking standards in healthcare. Knowledge of advanced statistical models, reinforcement learning, and Bayesian inference methods. Experience in Causal Inference and Experimentation Science to improve product and marketing outcomes. Proficiency in querying and analysing diverse datasets from multiple sources to build custom ML and optimisation models. Comfortable with code reviews and standard coding practices using Python, Git, Cursor, and CodeRabbit.
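The Bayesian modelling skill mentioned above can be illustrated with the simplest possible case: a Beta-Binomial posterior for an adherence or conversion rate, computed in closed form with SciPy. The prior, counts, and threshold are made-up numbers for the example, not anything from Tap Health's products.

```python
# Closed-form Bayesian update for a rate: Beta prior + Binomial likelihood -> Beta posterior.
# Prior and observed counts are illustrative numbers only.
from scipy.stats import beta

prior_a, prior_b = 2, 2          # weak prior centred on 0.5
adhered, missed = 68, 32         # hypothetical: 68 of 100 users followed the plan

posterior = beta(prior_a + adhered, prior_b + missed)

print(f"posterior mean       : {posterior.mean():.3f}")
low, high = posterior.interval(0.95)
print(f"95% credible interval: [{low:.3f}, {high:.3f}]")
print(f"P(rate > 0.6)        : {1 - posterior.cdf(0.6):.3f}")
```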

Posted 1 month ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Role: Lead AI Engineer
Experience: 3+ years in AI/ML/Data Science
Location: Gurgaon, work from office

About Tap Health:
Tap Health is a deep-tech startup transforming chronic care with AI and changing how people access health information. We build next-generation, AI-driven digital therapeutics for diabetes, PCOS, hypertension, asthma, pregnancy, obesity and more, eliminating the need for human support while significantly reducing costs, improving engagement and boosting outcomes. Tap Health's fully autonomous digital therapeutic for diabetes simplifies management by delivering real-time, daily guidance to optimise health outcomes at less than 10% of the cost of legacy products. Powered by adaptive AI and clinical protocols, it dynamically personalises each user's care journey, delivering tailored insights, lifestyle interventions, motivational nudges, adherence support, and improved clinical outcomes. Beyond digital therapeutics, Tap Health's Health Assistant assists users in primary symptom diagnosis based on their inputs and provides instant health advice through a seamless, voice-first experience. www.tap.health

Role Overview: Lead AI Engineer, 3+ yrs exp (AI healthcare startup)
We are hiring a Lead AI Engineer in Gurgaon to drive AI-driven healthcare innovations. The ideal candidate has 3+ years of AI/ML/Data Science experience, with 1+ months of GenAI production experience and 1+ year of hands-on GenAI product development. You need to have expertise in Agentic AI deployments, causal inference, and Bayesian modelling, with a strong foundation in LLMs and traditional models. You will lead and collaborate with the AI, Engineering, and Product teams to build scalable, consumer-focused healthcare solutions. As an AI leader, you will be the go-to expert, the engineer others turn to when they hit roadblocks. You will mentor, collaborate and enable high product velocity while fostering a culture of continuous learning and innovation.

Skills & Experience
The ideal candidate should have the following qualities: Over 8 years of experience in AI/ML/Data Science. Strong understanding of fine-tuning, optimization, and neural architectures. Hands-on experience with Python, PyTorch, and FastAI frameworks. Experience running production workloads on one or more hyperscalers (AWS, GCP, Azure, Oracle, DigitalOcean, etc.). In-depth knowledge of LLMs, how they work and their limitations. Ability to assess the advantages of fine-tuning, including dataset selection strategies. Understanding of Agentic AI frameworks, MCPs (Multi-Component Prompting), ACP (Adaptive Control Policies), and autonomous workflows. Familiarity with evaluation metrics for fine-tuned models and industry-specific public benchmarking standards in the healthcare domain. Knowledge of advanced statistical models, reinforcement learning, and Bayesian inference methods. Experience in Causal Inference and Experimental Science to improve product and marketing outcomes. Proficiency in querying and analyzing diverse datasets from multiple sources to build custom ML and optimization models. Comfortable with code reviews and standard coding practices using Python, Git, Cursor, and CodeRabbit.

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

Applied Research Center [Emerging Areas]
Advanced AI [SLM, Inference Scaling, Synthetic Data, Distributed Learning, Agentic AI, ANI]
New Interaction Models [Spatial Computing, Mixed Reality, 3D Visualizations, New Experiences]
Platforms and Protocols [Architecting and engineering for performance, uptime, low latency, scalability, efficiency, data, interoperability and low cost; Beckn, CDPI]
Cybersecurity [Ethical Hacking, Threat Management, Supply Chain Security & Risk, Cyber Resilience]
Quantum [Quantum AI, Stack, Simulation & Optimization, Cryptography, Valued Use Cases]
Autonomous Machines [Humanoids, Industrial Robots, Drones, Smart Products]
Emerging Research [Brain, AGI, Space, Semicon]

1. Emerging Tech Trends Research - Research emerging tech trends, the ecosystem of players, use cases, and their applicability and impact on client businesses. Scan and curate startups, universities and technology partnerships, and create an innovation ecosystem. Rapidly design and develop PoCs in emerging tech areas. Share design specifications with other team members, get the components developed, integrate and test. Build reusable components and develop PoCs using relevant startups and open-source solutions.
2. Thought Leadership - Develop showcases that demonstrate how emerging technologies can be applied in a business context, and demo scenarios for the IP. Contribute towards patents, tier-1 publications, whitepapers and blogs in the relevant emerging tech area. Get certified on the emerging technology and frameworks.
3. Applied Research Center Activities - Contribute to high-level design, development, testing and implementation of new proofs of concept in emerging tech areas.
4. Problem Definition and Requirements - Understand technical requirements and define detailed designs. Analyze reusable components to map a given requirement to an existing implementation and identify needs for enhancements.
5. IP Development - Develop program-level designs and modular components to implement the proposed design. Design and develop reusable components. Ensure compliance with coding standards, secure coding and KM guidelines while developing the IP.
6. Innovation Consulting - Understand client requirements and implement first-of-a-kind solutions using emerging tech expertise. Customize and extend IP for client-specific features.
7. Talent Management - Mentor the team and help them acquire the identified emerging tech skills. Participate in demo sessions and hackathons.
8. Emerging Tech Startup Ecosystem - Work with startups to provide innovative solutions to client problems and augment Infosys offerings.

Technical Competencies: Advanced theoretical knowledge in a specific domain. Experimental design and methodology expertise. Data analysis and interpretation skills. Prototype development capabilities. Research tool proficiency relevant to the domain.

Soft Skills and Attributes: Collaborative mindset for cross-disciplinary research. Communication skills for knowledge dissemination. Creative problem-solving approach. Intellectual curiosity and innovation focus. Commercial awareness for translational research.

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Who We Are
Applied Materials is the global leader in materials engineering solutions used to produce virtually every new chip and advanced display in the world. We design, build and service cutting-edge equipment that helps our customers manufacture display and semiconductor chips – the brains of devices we use every day. As the foundation of the global electronics industry, Applied enables the exciting technologies that literally connect our world – like AI and IoT. If you want to work beyond the cutting edge, continuously pushing the boundaries of science and engineering to make possible the next generations of technology, join us to Make Possible® a Better Future.

What We Offer
Location: Bangalore, IND
At Applied, we prioritize the well-being of you and your family and encourage you to bring your best self to work. Your happiness, health, and resiliency are at the core of our benefits and wellness programs. Our robust total rewards package makes it easier to take care of your whole self and your whole family. We're committed to providing programs and support that encourage personal and professional growth and care for you at work, at home, or wherever you may go. Learn more about our benefits.
You'll also benefit from a supportive work culture that encourages you to learn, develop and grow your career as you take on challenges and drive innovative solutions for our customers. We empower our team to push the boundaries of what is possible while learning every day in a supportive, leading global company. Visit our Careers website to learn more about careers at Applied.

Applied Materials' Applied AI Systems Solutions (System to Materials) Business Unit is searching for a Software Engineer – AI Performance Architect to join our team! The Applied AI System to Materials team works on architecting differentiated AI systems leveraging Applied's fundamental innovations.

Role Details
Benchmark AI workloads (LLMs) in single- and multi-node high-performance GPU configurations.
Project and analyze system performance for LLMs using various parallelization techniques.
Develop methodologies to measure key performance metrics and understand bottlenecks to improve efficiency.

Requirements
Understanding of transformer-based model architectures and basic GEMM operations.
Strong programming skills in Python and C/C++.
Proficiency in system (CPU, GPU, memory, or network) architecture analysis and performance modelling.
Experience with parallel computing architectures, interconnect fabrics and AI workloads (fine-tuning/inference).
Experience with DL frameworks (PyTorch, TensorFlow), profiling tools (Nsight Systems, Nsight Compute, rocprof), and containerized environments (Docker).

Applied Materials is committed to diversity in its workforce including Equal Employment Opportunity for Minorities, Females, Protected Veterans and Individuals with Disabilities.

Additional Information
Time Type: Full time
Employee Type: Assignee / Regular
Travel: Not Specified
Relocation Eligible: Yes

Applied Materials is an Equal Opportunity Employer. Qualified applicants will receive consideration for employment without regard to race, color, national origin, citizenship, ancestry, religion, creed, sex, sexual orientation, gender identity, age, disability, veteran or military status, or any other basis prohibited by law.
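To illustrate the kind of LLM benchmarking the posting describes, here is a rough single-node throughput probe using Hugging Face Transformers; the model id, batch size and prompt are placeholders, and a real harness would sweep configurations, warm up the GPU and average over many runs.

```python
# Rough tokens/second probe for one generation batch (placeholder model/sizes).
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # stand-in; swap for the model under test
device = "cuda" if torch.cuda.is_available() else "cpu"

tok = AutoTokenizer.from_pretrained(model_id)
tok.pad_token = tok.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_id).to(device).eval()

inputs = tok(["Benchmarking prompt."] * 8, return_tensors="pt", padding=True).to(device)

with torch.no_grad():
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    out = model.generate(**inputs, max_new_tokens=128, do_sample=False,
                         pad_token_id=tok.eos_token_id)
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

new_tokens = (out.shape[-1] - inputs["input_ids"].shape[-1]) * out.shape[0]
print(f"{new_tokens / elapsed:.1f} new tokens/s over {elapsed:.2f}s")
```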

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Company Description
UAE-based ZySec AI provides cutting-edge cybersecurity solutions to help enterprises tackle evolving security challenges at scale. Utilizing an autonomous AI workforce, ZySec AI enhances operational efficiency by automating repetitive, resource-intensive tasks, enabling security teams to focus on strategic priorities. Our mission is to make AI more efficient, accessible, and private for security professionals. We're building the future of Autonomous Data Intelligence at CyberPod AI, and we're looking for a deeply technical, hands-on AI Engineer to push the boundaries of what's possible with Large Language Models (LLMs). This role is for someone who's already been in the trenches: fine-tuned foundation models, experimented with quantization and performance tuning, and knows PyTorch inside out. If you're passionate about optimizing LLMs, crafting efficient reasoning architectures, and contributing to open-source communities like Hugging Face, this is your playground.

Role Description
Fine-tune Large Language Models (LLMs) on custom datasets for specialized reasoning tasks.
Design and run benchmarking pipelines across accuracy, speed, token throughput, and energy efficiency.
Implement quantization, pruning, and distillation techniques for model compression and deployment readiness.
Evaluate and extend agentic RAG (Retrieval-Augmented Generation) pipelines and reasoning agents.
Contribute to SOTA model architectures for multi-hop, temporal, and multimodal reasoning.
Collaborate closely with the data engineering, infra, and applied research teams to bring ideas from paper to production.
Own and drive experiments, ablations, and performance dashboards end-to-end.

Requirements
Hands-on experience working with deep learning and large models, particularly LLMs.
Strong understanding of PyTorch internals: autograd, memory profiling, efficient dataloaders, mixed precision.
Proven track record in fine-tuning LLMs (e.g., LLaMA, Falcon, Mistral, OpenLLaMA, T5) on real-world use cases.
Benchmarking skills: can run standardized evals (e.g., MMLU, GSM8K, HELM, TruthfulQA) and interpret metrics.
Deep familiarity with quantization techniques: GPTQ, AWQ, QLoRA, bitsandbytes, and low-bit inference.
Working knowledge of the Hugging Face ecosystem (Transformers, Accelerate, Datasets, Evaluate).
Active Hugging Face profile with at least one public model/repo published.
Experience in training and optimizing multimodal models (vision-language/audio) is a big plus.
Published work (arXiv, GitHub, blogs) or open-source contributions preferred.

If you are passionate about AI and want to be part of a dynamic and innovative team, ZySec AI is the perfect place for you. Apply now and join us in shaping the future of artificial intelligence.
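As a small illustration of the low-bit inference skills listed above, the sketch below loads a causal LM in 4-bit with bitsandbytes through Transformers; the checkpoint name is a placeholder, and a CUDA GPU plus the bitsandbytes package are assumed.

```python
# Illustrative 4-bit (NF4) load of a causal LM for low-bit inference.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-v0.1"  # placeholder checkpoint

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 while weights stay 4-bit
)

tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

prompt = "List three indicators of a phishing email:"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```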

Posted 1 month ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Description
The role is based in Munich, Germany (this is not a remote opportunity). We offer immigration and relocation support.

The vision of the Ontology Product Knowledge Team is to provide a standardized, semantically rich, easily discoverable, extensible, and universally applicable body of product knowledge that can be consistently utilized across customer shopping experiences, selling partner listing experiences and internal enrichment of product data. We aim to make product knowledge compelling, easy to use, and feature rich. Our work to build comprehensive product knowledge allows us to semantically understand a customer's intent – whether that is a shopping mission or a seller offering products. We strive to make these experiences more intuitive for all customers.

As an Ontologist, you work on a global team of knowledge builders to deliver world-class, intuitive, and comprehensive taxonomy and ontology models to optimize product discovery for Amazon web and mobile experiences. You collaborate with business partners and engineering teams to deliver knowledge-based solutions to enable product discoverability for customers. In this role, you will directly impact the customer experience as well as the company's product knowledge foundation.

Tasks and Responsibilities
Develop logical, semantically rich, and extensible data models for Amazon's extensive product catalog. Ensure our ontologies provide comprehensive domain coverage and are available for both human and machine ingestion and inference. Create new schema using generative artificial intelligence (generative AI) models. Analyze website metrics and product discovery behaviors to make data-driven decisions on optimizing our knowledge graph data models globally. Expand and refine data retrieval techniques to utilize our extensive knowledge graph. Contribute to team goal setting and future state vision. Drive and coordinate cross-functional projects with a broad range of merchandisers, engineers, designers, and other groups, which may include architecting new data solutions. Develop team operational excellence programs, data quality initiatives and process simplifications. Evangelize ontology and semantic technologies within and across teams at Amazon. Develop and refine data governance and processes used by global Ontologists. Mentor and influence peers.

Inclusive Team Culture: Our team has a global presence: we celebrate diverse cultures and backgrounds within our team and our customer base. We are committed to furthering our culture of inclusion, offering continuous access to internal affinity groups as well as highlighting diversity programs.

Work/Life Harmony: Our team believes that striking the right balance between work and your outside life is key. Our work is not removed from everyday life, but instead is influenced by it. We offer flexibility in working hours and will work with you to facilitate your own balance between your work and personal life.

Career Growth: Our team cares about your career growth, from your initial company introduction and training sessions to continuous support throughout your entire career at Amazon. We recognize each team member as an individual, and we will build on your skills to help you grow. We have a broad mix of experience levels and tenures, and we are building an environment that celebrates knowledge sharing.

Perks: You will have the opportunity to support CX used by millions of customers daily and to work with data at a scale very few companies can offer. We have offices around the globe, and you will have the opportunity to be considered for global placement. You'll receive on-the-job training and group development opportunities.

Basic Qualifications
Degree in Library Science, Information Systems, Linguistics or equivalent professional experience. 5+ years of relevant work experience in ontology and/or taxonomy roles. Proven skills in data retrieval and data research techniques. Ability to quickly understand complex processes and communicate them in simple language. Experience creating and communicating technical requirements to engineering teams. Ability to communicate with senior leadership (Director and VP levels). Experience with generative AI (e.g. creating prompts). Knowledge of Semantic Web technologies (RDF/S, OWL), query languages (SPARQL) and validation/reasoning standards (SHACL, SPIN). Knowledge of open-source and commercial ontology engineering editors (e.g. Protégé, TopQuadrant products, PoolParty). Detail-oriented problem solver who is able to work in a fast-changing environment and manage ambiguity. Proven track record of strong communication and interpersonal skills. Proficient English language skills.

Preferred Qualifications
Master's degree in Library Science, Information Systems, Linguistics or other relevant fields. Experience building ontologies in the e-commerce and semantic search spaces. Experience working with schema-level constructs (e.g. higher-level classes, punning, property inheritance). Proficiency in SQL and SPARQL. Familiarity with the software engineering life cycle. Familiarity with ontology manipulation programming libraries. Exposure to data science and/or machine learning, including graph embeddings.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - Amazon Dev Center India - Hyderabad - A85
Job ID: A2837060
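For readers unfamiliar with the SPARQL tooling mentioned in the qualifications, a tiny hedged example follows: it loads a hypothetical product ontology with rdflib and walks subclass chains; the file name and IRIs are invented.

```python
# Query a small product ontology for all (transitive) subclasses of a root class.
from rdflib import Graph, Namespace

g = Graph()
g.parse("products.ttl", format="turtle")  # hypothetical ontology file

EX = Namespace("http://example.org/product/")

query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?cls ?label WHERE {
    ?cls rdfs:subClassOf+ ?root .
    OPTIONAL { ?cls rdfs:label ?label }
}
"""
for row in g.query(query, initBindings={"root": EX.Footwear}):
    print(row.cls, row.label)
```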

Posted 1 month ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in intelligent automation at PwC focus on conducting process mining, designing next-generation small- and large-scale automation solutions, and implementing intelligent process automation, robotic process automation and digital workflow solutions to help clients achieve operational efficiencies and reduce costs.

Focused on relationships, you are building meaningful client connections and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn't clear, you ask questions, and you use these moments as opportunities to grow.

Skills
Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:
Respond effectively to the diverse perspectives, needs, and feelings of others. Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems. Use critical thinking to break down complex concepts. Understand the broader objectives of your project or role and how your work fits into the overall strategy. Develop a deeper understanding of the business context and how it is changing. Use reflection to develop self-awareness, enhance strengths and address development areas. Interpret data to inform insights and recommendations. Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements.

Design, develop, and maintain data pipelines and ETL processes for GenAI projects. Collaborate with data scientists and software engineers to implement machine learning models and algorithms. Optimize data infrastructure and storage solutions to ensure efficient and scalable data processing. Implement event-driven architectures to enable real-time data processing and analysis. Utilize containerization technologies like Kubernetes and Docker for efficient deployment and scalability. Develop and maintain data lakes for storing and managing large volumes of structured and unstructured data. Implement and integrate LLM frameworks (LangChain, Semantic Kernel) for advanced language processing and analysis. Collaborate with cross-functional teams to design and implement solution architectures for GenAI projects. Utilize cloud computing platforms such as Azure or AWS for data processing, storage, and deployment. Monitor and troubleshoot data pipelines and systems to ensure smooth and uninterrupted data flow. Stay up to date with the latest advancements in GenAI technologies and recommend innovative solutions to enhance data engineering processes. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions. Document data engineering processes, methodologies, and best practices. Maintain solution architecture certificates and stay current with industry best practices.

Requirements
Python Proficiency: Minimum 3 years of hands-on experience building applications with Python.
Scalable System Design: Solid understanding of designing and architecting scalable Python applications, particularly for GenAI use cases, with a strong grasp of component and systems architecture patterns for building cohesive, decoupled, scalable applications.
Web Frameworks: Familiarity with Python web frameworks (Flask, FastAPI) for building web applications around AI models.
Modular Design & Security: Demonstrated ability to design applications with modularity, reusability, and security best practices in mind (session management, vulnerability prevention, etc.).
Cloud-Native Development: Familiarity with cloud-native development patterns and tools (e.g., REST APIs, microservices, serverless functions).
Cloud Deployments: Experience deploying and managing containerized applications on Azure/AWS (Azure Kubernetes Service, Azure Container Instances, or similar).
Version Control (Git): Strong proficiency in Git for effective code collaboration and management.
CI/CD: Knowledge of continuous integration and deployment (CI/CD) practices on cloud platforms.
3-5 years of relevant technical/technology experience, with a focus on GenAI projects. Strong programming skills in Python. Experience with data processing frameworks like Apache Spark or similar. Proficiency in SQL and database management systems.

Preferred Skills
GenAI Frameworks: Experience with LLM frameworks or tools for interacting with LLMs, such as LangChain, Semantic Kernel, or LlamaIndex.
Data Pipelines: Experience in setting up data pipelines for model training and real-time inference.

If you are passionate about GenAI technologies and have a proven track record in data engineering, join the PwC US Acceleration Center and be part of a dynamic team that is shaping the future of GenAI solutions. We offer a collaborative and innovative work environment where you can make a significant impact.
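As a sketch of the "web applications around AI models" requirement, here is a minimal FastAPI endpoint that wraps a model call; run_model is a stand-in for whatever LLM client or model the project actually uses.

```python
# Minimal FastAPI wrapper around a model invocation (the model call is stubbed).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    text: str
    max_tokens: int = 256

def run_model(text: str, max_tokens: int) -> str:
    # placeholder for a real LLM/model invocation (e.g. via LangChain or an SDK)
    return text.upper()[:max_tokens]

@app.post("/generate")
def generate(q: Query) -> dict:
    return {"answer": run_model(q.text, q.max_tokens)}

# local run (assumption): uvicorn main:app --reload
```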

Posted 1 month ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

A career within our Infrastructure practice will provide you with the opportunity to design, build, coordinate and maintain the IT environments clients use to run internal operations, collect data, monitor, develop and launch products. Infrastructure management consists of hardware, storage, compute, network and software layers. As part of our Infrastructure Engineering team, you will be responsible for maintaining critical IT systems, including build, run and maintenance, while providing technical support and training aligned with industry-leading practices.

To really stand out and make us fit for the future in a constantly changing world, each and every one of us at PwC needs to be a purpose-led and values-driven leader at every level. To help us achieve this we have the PwC Professional, our global leadership development framework. It gives us a single set of expectations across our lines, geographies and career paths, and provides transparency on the skills we need as individuals to be successful and progress in our careers, now and in the future.

Responsibilities
As a Senior Associate, you'll work as part of a team of problem solvers, helping to solve complex business issues from strategy to execution. PwC Professional skills and responsibilities for this management level include but are not limited to:
Use feedback and reflection to develop self-awareness, personal strengths and address development areas. Delegate to others to provide stretch opportunities, coaching them to deliver results. Demonstrate critical thinking and the ability to bring order to unstructured problems. Use a broad range of tools and techniques to extract insights from current industry or sector trends. Review your work and that of others for quality, accuracy and relevance. Know how and when to use the tools available for a given situation, and be able to explain the reasons for this choice. Seek and embrace opportunities that give exposure to different situations, environments and perspectives. Use straightforward communication, in a structured way, when influencing and connecting with others. Be able to read situations and modify behavior to build quality relationships. Uphold the firm's code of ethics and business conduct.

AI Engineer Overview
We are seeking an exceptional AI Engineer to drive the development, optimization, and deployment of cutting-edge generative AI solutions for our clients. This role is at the forefront of applying generative models to solve real-world business challenges, requiring deep expertise in both the theoretical underpinnings and practical applications of generative AI.

Core Qualifications
Advanced degree (MS/PhD) in Computer Science, Machine Learning, or a related field with a focus on generative models. 3+ years of hands-on experience developing and deploying AI models in production environments, with 1 year of experience developing generative AI pilots, proofs of concept, and prototypes. Deep understanding of state-of-the-art AI architectures (e.g., Transformers, VAEs, GANs, Diffusion Models). Expertise in PyTorch or TensorFlow, with a preference for experience in both. Proficiency in Python and software engineering best practices for AI systems.

Technical Skills Required
Demonstrated experience with large language models (LLMs) such as GPT, BERT, T5, etc. Practical understanding of generative AI frameworks (e.g., Hugging Face Transformers, OpenAI GPT, DALL-E). Familiarity with prompt engineering and few-shot learning techniques. Expertise in MLOps and LLMOps practices, including CI/CD for ML models. Strong knowledge of one or more cloud-based AI services (e.g., AWS SageMaker, Azure ML, Google Vertex AI).

Preferred
Proficiency in optimizing generative models for inference (quantization, pruning, distillation). Experience with distributed training of large-scale AI models. Experience with model serving technologies (e.g., TorchServe, TensorFlow Serving, Triton Inference Server).

Key Responsibilities
Architect and implement end-to-end generative AI solutions, from data preparation to production deployment. Develop custom AI models and fine-tune pre-trained models for specific client use cases. Optimize generative models for production, balancing performance, latency, and resource utilization. Design and implement efficient data pipelines for training and serving generative models. Develop strategies for effective prompt engineering and few-shot learning in production systems. Implement robust evaluation frameworks for generative AI outputs. Collaborate with cross-functional teams to integrate generative AI capabilities into existing systems. Address challenges related to bias, fairness, and ethical considerations in generative AI applications.

Project Delivery
Lead the technical aspects of generative AI projects from pilot to production. Develop proofs of concept and prototypes to demonstrate the potential of generative AI in solving client problems. Conduct technical feasibility studies for applying generative AI to novel use cases. Implement monitoring and observability solutions for deployed generative models. Troubleshoot and optimize generative AI systems in production environments.

Client Engagement
Provide expert technical guidance on generative AI capabilities and limitations to clients. Collaborate with solution architects to design generative AI-powered solutions that meet client needs. Present technical approaches and results to both technical and non-technical stakeholders. Assist in scoping and estimating generative AI projects.

Innovation and Knowledge Sharing
Stay at the forefront of generative AI research and industry trends. Contribute to the company's intellectual property through patents or research publications. Develop internal tools and frameworks to accelerate generative AI development. Mentor junior team members on generative AI technologies and best practices. Contribute to technical blog posts and whitepapers on generative AI applications.

The ideal candidate will have a proven track record of successfully deploying AI models in production environments, a deep understanding of the latest advancements in generative AI, and the ability to apply this knowledge to solve complex business problems. They should be passionate about pushing the boundaries of what's possible with generative AI and excited about the opportunity to shape the future of AI-driven solutions for our clients.
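To make the few-shot prompting point above concrete, here is a small sketch that assembles a few-shot classification prompt as plain text; the examples and task are invented, and the call to an actual LLM is left out.

```python
# Build a few-shot prompt for ticket classification; sending it to an LLM is
# out of scope here and depends on the chosen client/SDK.
FEW_SHOT_EXAMPLES = [
    {"ticket": "Card payment failed twice", "category": "payments"},
    {"ticket": "Cannot reset my password", "category": "account_access"},
]

def build_prompt(ticket: str) -> str:
    lines = ["Classify the support ticket into a category.", ""]
    for ex in FEW_SHOT_EXAMPLES:
        lines.append(f"Ticket: {ex['ticket']}\nCategory: {ex['category']}\n")
    lines.append(f"Ticket: {ticket}\nCategory:")
    return "\n".join(lines)

print(build_prompt("I was charged twice for the same order"))
```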

Posted 1 month ago

Apply

3.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description
Role Overview: Monitor, evaluate, and optimize AI/LLM workflows in production environments. Ensure reliable, efficient, and high-quality AI system performance by building out an LLM Ops platform that is self-serve for the engineering and data science departments.

Key Responsibilities
Collaborate with data scientists and software engineers to integrate an LLM Ops platform (Opik by CometML) into existing AI workflows. Identify valuable performance metrics (accuracy, quality, etc.) for AI workflows and create ongoing sampling evaluation processes using the LLM Ops platform that alert when metrics drop below thresholds. Collaborate across teams to create datasets and benchmarks for new AI workflows. Run experiments on datasets and optimize performance via model changes and prompt adjustments. Debug and troubleshoot AI workflow issues. Optimize inference costs and latency while maintaining accuracy and quality. Develop automations for LLM Ops platform integration to empower data scientists and software engineers to self-serve integration with the AI workflows they build.

Requirements
Strong Python programming skills. Experience with generative AI models and tools (OpenAI, Anthropic, Bedrock, etc.). Knowledge of fundamental statistical concepts and tools in data science, such as heuristic and non-heuristic measurements in NLP (BLEU, WER, sentiment analysis, LLM-as-judge, etc.), standard deviation, and sampling rate, plus a high-level understanding of how modern AI models work (knowledge cutoffs, context windows, temperature, etc.). Familiarity with AWS. Understanding of prompt engineering concepts. People skills: you will be expected to frequently collaborate with other teams to help them perfect their AI workflows.

Experience Level
3-5 years of experience in LLM/AI Ops, MLOps, Data Science, or MLE.

Pattern is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
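As an illustration of the sampling-based evaluation with threshold alerts described above, here is a small sketch using BLEU (one of the heuristic metrics the posting names) on a random sample of logged outputs; the sample rate, threshold and data are invented, and a real setup would live in the LLM Ops platform rather than a standalone script.

```python
# Sampled offline evaluation with a threshold alert, using sentence-level BLEU.
import random
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

SAMPLE_RATE = 0.1   # fraction of logged pairs to score
THRESHOLD = 0.35    # alert if the sampled average drops below this
smooth = SmoothingFunction().method1

def sampled_bleu(pairs):
    """pairs: list of (model_output, reference) strings from production logs."""
    sampled = [p for p in pairs if random.random() < SAMPLE_RATE]
    scores = [
        sentence_bleu([ref.split()], out.split(), smoothing_function=smooth)
        for out, ref in sampled
    ]
    return sum(scores) / len(scores) if scores else None

avg = sampled_bleu([("the cat sat on the mat", "a cat sat on the mat")] * 100)
if avg is not None and avg < THRESHOLD:
    print(f"ALERT: sampled BLEU {avg:.2f} below threshold {THRESHOLD}")
```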

Posted 1 month ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Requisition ID # 25WD85491

Position Overview
We are looking for an experienced Principal Software Engineer to join our platform team focusing on the AI/ML Platform (AMP). This team builds and maintains central components to fast-track the development of new ML/AI models, such as a model development studio, feature store, model serving and model observability. The ideal candidate has a background in MLOps, data engineering and DevOps, with experience building high-scale deployment architectures and observability. As an important contributor to our engineering team, you will help shape the future of our AI/ML capabilities, delivering solutions that inspire value for our organization. You will report directly to an Engineering Manager, and you will be based in Pune.

Responsibilities
System design: You will design, implement and manage software systems for the AI/ML Platform and orchestrate the full ML development lifecycle for partner teams.
Mentoring: Spread your knowledge, share best practices and do design reviews to step up expertise at the team level.
Multi-cloud architecture: Define components that leverage strengths from multiple cloud platforms (e.g., AWS, Azure) to optimize performance, cost, and scalability.
AI/ML observability: You will build systems for monitoring the performance of AI/ML models and surfacing insights on the underlying data, such as drift detection, data fairness/bias and anomalies.
ML solution deployment: You will develop tools for building and deploying ML artifacts in production environments, facilitating a smooth transition from development to deployment.
Big data management: Automate and orchestrate tasks related to managing big data transformation and processing, and build large-scale data stores for ML artifacts.
Scalable services: Design and implement low-latency, scalable prediction and inference services to support the diverse needs of our users.
Cross-functional collaboration: Collaborate across diverse teams, including machine learning researchers, developers, product managers, software architects, and operations, fostering a collaborative and cohesive work environment.
End-to-end ownership: You will take end-to-end ownership of components and work with other engineers in the team on design, architecture, implementation, rollout and onboarding support for partner teams, production on-call support, testing/verification, investigations, etc.

Minimum Qualifications
Educational background: Bachelor's degree in Computer Science or equivalent practical experience.
Experience: Over 8 years of experience in software development and engineering, delivering production systems and services.
Prior experience working on an MLOps team at the intersection of ML model deployments, DevOps and data engineering.
Hands-on skills: Ability to fluently translate designs into high-quality code in Go, Python or Java.
Knowledge of DevOps practices, containerization and orchestration tools such as CI/CD, Terraform, Docker, Kubernetes and GitOps.
Knowledge of distributed data processing frameworks, orchestrators, and data lake architectures using technologies such as Spark, Airflow, and Iceberg/Parquet formats.
Prior collaboration with data science teams to deploy their models and set up ML observability for inference-level monitoring.
Exposure to building RAG-based applications by collaborating with other product teams and data scientists/AI engineers.
Creative problem-solving skills with the ability to break down problems into manageable components.
Knowledge of Amazon AWS and/or Azure cloud for solutioning large-scale application deployments.
Excellent communication and collaboration skills, fostering teamwork and effective information exchange.

Preferred Qualifications
Experience integrating with third-party vendors.
Experience in latency optimization with the ability to diagnose, tune, and enhance the efficiency of serving systems.
Familiarity with tools and frameworks for monitoring and managing the performance of AI/ML models in production (e.g., MLflow, Kubeflow, TensorBoard).
Familiarity with distributed model training/inference pipelines (KubeRay or equivalent).
Exposure to leveraging GPU computing for AI/ML workloads, including experience with CUDA, OpenCL, or other GPU programming tools, to significantly enhance model training and inference performance.
Exposure to ML libraries such as PyTorch, TensorFlow, XGBoost, Pandas, and scikit-learn.

Learn More About Autodesk
Welcome to Autodesk! Amazing things are created every day with our software – from the greenest buildings and cleanest cars to the smartest factories and biggest hit movies. We help innovators turn their ideas into reality, transforming not only how things are made, but what can be made. We take great pride in our culture here at Autodesk – our Culture Code is at the core of everything we do. Our values and ways of working help our people thrive and realize their potential, which leads to even better outcomes for our customers. When you're an Autodesker, you can be your whole, authentic self and do meaningful work that helps build a better future for all. Ready to shape the world and your future? Join us!

Salary transparency
Salary is one part of Autodesk's competitive compensation package. Offers are based on the candidate's experience and geographic location. In addition to base salaries, we also have a significant emphasis on discretionary annual cash bonuses, commissions for sales roles, stock or long-term incentive cash grants, and a comprehensive benefits package.

Diversity & Belonging
We take pride in cultivating a culture of belonging and an equitable workplace where everyone can thrive. Learn more here: https://www.autodesk.com/company/diversity-and-belonging

Are you an existing contractor or consultant with Autodesk? Please search for open jobs and apply internally (not on this external site).
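As a toy illustration of the drift-detection responsibility above, the sketch below compares a production feature sample against its training distribution with a two-sample Kolmogorov-Smirnov test; the data and significance threshold are synthetic and illustrative.

```python
# Minimal feature-drift check: KS test between reference and recent samples.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=10_000)   # reference window
production_feature = rng.normal(loc=0.4, scale=1.0, size=2_000)  # recent traffic

stat, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:
    print(f"drift suspected: KS={stat:.3f}, p={p_value:.2e}")
else:
    print("no significant drift detected")
```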

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru North, Karnataka, India

Remote

Job Description
GalaxEye Space is a deep-tech space start-up spun off from IIT Madras and currently based in Bengaluru, Karnataka. We are dedicated to advancing the frontiers of space exploration. Our mission is to develop cutting-edge solutions that address the challenges of the modern space industry, specialising in a constellation of miniaturised, multi-sensor SAR+EO satellites. Our new-age technology enables all-time, all-weather imaging; by leveraging advanced processing and AI capabilities, we ensure near real-time data delivery. We have successfully demonstrated these imaging capabilities, a first of its kind in the world, across platforms such as drones and HAPS (High-Altitude Pseudo Satellites).

Responsibilities
Architect and maintain the build pipeline that converts R&D Python notebooks into immutable, versioned executables and libraries.
Optimize Python code to extract maximum GPU performance.
Define and enforce coding standards, branching strategy, semantic release tags, and the artifact-signing process.
Lead a team of full-stack developers to integrate Python inference services with the React-Electron UI via gRPC/REST contracts.
Stand up and maintain an offline replica environment (VM or bare-metal) that mirrors the forward-deployed system; gate releases through this environment in CI.
Own automated test suites: unit, contract, regression, performance, and security scanning.
Coordinate multi-iteration hand-offs with forward engineers; triage returned diffs, merge approved changes, and publish patched releases.
Mentor the team, conduct code and design reviews, and drive continuous-delivery best practices in an air-gap-constrained context.

Requirements
5+ years in software engineering with at least 2 years of technical-lead experience.
Deep Python expertise (packaging, virtualenv/venv, dependency pinning) and solid JavaScript/TypeScript skills for React-Electron.
CI/CD mastery (GitHub Actions, Jenkins, GitLab CI) with artifact repositories (Artifactory/Nexus) and infrastructure-as-code (Packer, Terraform, Ansible).
Strong grasp of cryptographic signing, checksum verification, and secure supply-chain principles.
Experience releasing software to constrained or disconnected environments.

Additional Skills
Knowledge of containerization (Docker/Podman) and offline image distribution.
Prior work on remote-sensing or geospatial analytics products.

Benefits
Acquire valuable opportunities for learning and development through close collaboration with the founding team. Contribute to impactful projects and initiatives that drive meaningful change. We provide a competitive salary package that aligns with your expertise and experience. Enjoy comprehensive health benefits, including medical, dental, and vision coverage, ensuring the well-being of you and your family. Work in a dynamic and innovative environment alongside a dedicated and passionate team.
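To illustrate the checksum-verification part of the artifact-signing process mentioned above, here is a minimal sketch that recomputes a release file's SHA-256 and compares it to a published digest; paths and digests are placeholders, and a real gate would also verify a detached signature.

```python
# Verify a release artifact's SHA-256 before promoting it (placeholder values).
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

artifact = Path("dist/inference_service-1.4.2.tar.gz")  # hypothetical artifact
expected = "aabbcc..."                                   # digest from the release manifest

if sha256_of(artifact) != expected:
    raise SystemExit("checksum mismatch: refusing to promote artifact")
print("checksum verified")
```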

Posted 1 month ago

Apply

0 years

0 Lacs

Bagalur, Karnataka, India

Remote

When you join Verizon
You want more out of a career. A place to share your ideas freely, even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love: driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the V Team Life.

What You'll Be Doing...
Designing and implementing ML model pipelines (batch and real-time) for efficient model training and serving/inference.
Implementing and analyzing the performance of advanced algorithms (specifically deep-learning-based ML models).
Solving model inferencing failures/fallouts.
Optimizing existing machine-learning model pipelines to ensure training/inferencing completes within the standard duration.
Collaborating effectively with cross-functional teams to understand business needs and deliver impactful solutions.
Contributing to developing robust and scalable distributed computing systems for large-scale data processing.
Designing, developing, and implementing innovative AI/ML solutions using Python, CI/CD, and public cloud platforms.
Implementing a model performance metrics pipeline for predictive models, covering different types of algorithms, to adhere to Responsible AI.

What we're looking for...
You'll need to have:
Bachelor's degree or four or more years of work experience.
Four or more years of relevant work experience.
Experience in batch model inferencing and real-time model serving.
Knowledge of frameworks such as BentoML, TensorFlow Serving (TFX) or Triton.
Solid expertise in GCP Cloud ML tech stacks such as BigQuery, Dataproc, Airflow, Cloud Functions, Spanner and Dataflow.
Very good experience with languages such as Python and PySpark.
Expertise in distributed computation and multi-node distributed model training.
Good understanding of GPU usage management.
Experience with Ray Core and Ray Serve (batch and real-time models).
Experience in CI/CD practices.

Even better if you have one or more of the following:
GCP certifications or any cloud certification in AI/ML or data.

If Verizon and this role sound like a fit for you, we encourage you to apply even if you don't meet every "even better" qualification listed above.

Where you'll be working
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Diversity and Inclusion
We're proud to be an equal opportunity employer. At Verizon, we know that diversity makes us stronger. We are committed to a collaborative, inclusive environment that encourages authenticity and fosters a sense of belonging. We strive for everyone to feel valued, connected, and empowered to reach their potential and contribute their best. Check out our diversity and inclusion page to learn more.

Locations: Bangalore, India; Hyderabad, India; Chennai, India
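For context on the Ray Serve requirement above, here is a hedged sketch of a real-time scoring deployment following Ray 2.x conventions; the model is stubbed out and all names are illustrative.

```python
# Illustrative Ray Serve deployment for real-time inference (model is a stub).
from ray import serve
from starlette.requests import Request

@serve.deployment(num_replicas=2)
class Scorer:
    def __init__(self):
        self.model = lambda xs: sum(xs)  # stand-in for a real loaded model

    async def __call__(self, request: Request):
        payload = await request.json()
        return {"score": self.model(payload["features"])}

serve.run(Scorer.bind())
# Keep the process alive (or launch via `serve run module:app`); then POST
# {"features": [1.0, 2.0]} to http://127.0.0.1:8000/ to get a score back.
```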

Posted 1 month ago

Apply

3.0 - 5.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Position: AI/ML Engineer (Python, AWS, REST APIs)
Experience: 3 to 5 years
Location: Indore, work from office

Job Summary
We are seeking a passionate AI/ML Engineer to join our team in building the core AI-driven functionality of an intelligent visual data encryption system. The role involves designing, training, and deploying AI models (e.g., CLIP, DCGANs, Decision Trees), integrating them into a secure backend, and operationalizing the solution via AWS cloud services and Python-based APIs.

Responsibilities
AI/ML Development: Design and train deep learning models for image classification and sensitivity tagging using CLIP, DCGANs, and Decision Trees. Build synthetic datasets using DCGANs for dataset balancing. Fine-tune pre-trained models for customized encryption logic. Implement explainable classification logic for model outputs. Validate model performance using custom metrics and datasets.
API Development: Design and develop Python RESTful APIs using FastAPI or Flask for image upload and classification, model inference endpoints, and encryption trigger calls. Integrate APIs with AWS Lambda and Amazon API Gateway.
AWS Integration: Deploy and manage AI models on Amazon SageMaker for training and real-time inference. Use AWS Lambda for serverless backend compute. Store encrypted image data on Amazon S3 and metadata on Amazon RDS (PostgreSQL). Use AWS Cognito for secure user authentication and KMS for key management. Monitor job status via CloudWatch and enable secure, scalable API access.

Required Skills & Experience (Must-Have)
3 to 5 years of experience in AI/ML, especially vision-based systems.
Strong experience with PyTorch or TensorFlow for model development.
Proficiency in Python with experience building RESTful APIs.
Hands-on experience with Amazon SageMaker, Lambda, API Gateway, and S3.
Knowledge of OpenSSL/PyCryptodome or basic cryptographic concepts.
Understanding of model deployment, serialization, and performance tuning.

Nice-to-Have
Experience with CLIP model fine-tuning.
Familiarity with Docker, GitHub Actions, or CI/CD pipelines.
Experience in data classification under compliance regimes (e.g., GDPR, HIPAA).
Familiarity with multi-tenant SaaS design patterns.

Tools & Technologies: Python, PyTorch, TensorFlow, FastAPI, Flask; AWS: SageMaker, Lambda, S3, RDS, Cognito, API Gateway, KMS; Git, Docker, Postgres, OpenCV, OpenSSL.

Note: This role is for the I-VDES project. Excellent communication and interpersonal skills and the ability to work with tight deadlines are expected. Kindly share your resume at hr@advantal.net.
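As a sketch of the CLIP-based image classification described above, the snippet below scores an image against a few candidate sensitivity labels using the public Hugging Face CLIP checkpoint; the labels and image path are placeholders, not the project's actual taxonomy.

```python
# Zero-shot image tagging with CLIP (placeholder labels and image path).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a medical document", "an identity card", "a generic landscape photo"]
image = Image.open("sample.jpg")  # hypothetical input image

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)[0]

for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.2f}")
```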

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies