
1480 MLflow Jobs - Page 8

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

5.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

About Apexon:
Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. We have been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Apexon brings together distinct core competencies – in AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering and UX, and our deep expertise in BFSI, healthcare, and life sciences – to help businesses capitalize on the unlimited opportunities digital offers. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving clients' toughest technology problems, and a commitment to continuous improvement. Backed by Goldman Sachs Asset Management and Everstone Capital, Apexon now has a global presence of 15 offices (and 10 delivery centers) across four continents. We enable #HumanFirstDigital

Key Responsibilities:
Design, develop, and maintain CI/CD pipelines for ML models and data workflows. Collaborate with data science teams to productionize models using tools like MLflow, Kubeflow, or SageMaker. Automate training, validation, testing, and deployment of machine learning models. Monitor model performance, drift, and retraining needs. Ensure version control of datasets, code, and model artifacts. Implement model governance, audit trails, and reproducibility. Optimize model serving infrastructure (REST APIs, batch/streaming inference). Integrate ML solutions with cloud services (AWS, Azure, GCP). Ensure security, compliance, and reliability of ML systems.

Required Skills and Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or related field. 5+ years of experience in MLOps, DevOps, or ML engineering roles. Strong experience with ML pipeline tools (MLflow, Kubeflow, TFX, SageMaker Pipelines). Proficiency in containerization and orchestration tools (Docker, Kubernetes, Airflow). Strong Python coding skills and familiarity with ML libraries (scikit-learn, TensorFlow, PyTorch). Experience with cloud platforms (AWS, Azure, GCP) and their ML services. Knowledge of CI/CD tools (GitLab CI/CD, Jenkins, GitHub Actions). Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK, Sentry). Understanding of data versioning (DVC, LakeFS) and feature stores (Feast, Tecton). Strong grasp of model testing, validation, and monitoring in production environments.

Our Commitment to Diversity & Inclusion:
Did you know that Apexon has been Certified™ by Great Place To Work®, the global authority on workplace culture, in each of the three regions in which it operates: USA (for the fourth time in 2023), India (seven consecutive certifications as of 2023), and the UK. Apexon is committed to being an equal opportunity employer and promoting diversity in the workplace. We take affirmative action to ensure equal employment opportunity for all qualified individuals. Apexon strictly prohibits discrimination and harassment of any kind and provides equal employment opportunities to employees and applicants without regard to gender, race, color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law.
You can read about our Job Applicant Privacy policy here: Job Applicant Privacy Policy (apexon.com)

Our Perks and Benefits:
Our benefits and rewards program has been thoughtfully designed to recognize your skills and contributions, elevate your learning/upskilling experience and provide care and support for you and your loved ones. As an Apexon Associate, you get continuous skill-based development, opportunities for career advancement, and access to comprehensive health and well-being benefits and assistance. We also offer:
o Group Health Insurance covering family of 4
o Term Insurance and Accident Insurance
o Paid Holidays & Earned Leaves
o Paid Parental Leave
o Learning & Career Development
o Employee Wellness
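For context on the MLflow/Kubeflow-based productionization this role describes, here is a minimal, illustrative MLflow sketch of tracking a training run and registering the resulting model; the experiment name, hyperparameters, and registered model name are assumptions, not part of the posting.

```python
# Minimal MLflow sketch: track a training run and register the model.
# Assumes a scikit-learn model and a reachable MLflow tracking server.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-churn-model")  # hypothetical experiment name

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    mlflow.log_params(params)                      # version the hyperparameters
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_metric("accuracy", acc)             # track evaluation metrics
    mlflow.sklearn.log_model(                      # store and register the artifact
        model,
        artifact_path="model",
        registered_model_name="demo-churn-model",  # hypothetical registry name
    )
```

A CI/CD pipeline of the kind described above would typically promote the registered model between stages and redeploy the serving layer once a new version passes validation.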

Posted 1 week ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

About Motadata
Motadata is a renowned IT monitoring and management software company that has been transforming how businesses manage their ITOps since its inception. Our vision is to revolutionize the way organizations extract valuable insights from their IT networks. Bootstrapped since inception, Motadata has built up a formidable product suite comprising cutting-edge solutions, empowering enterprises to make informed decisions and optimize their IT infrastructure. As a market leader, we take pride in our ability to collect and analyze data from various sources, in any format, providing a unified view of IT monitoring data.

Position Overview:
We are seeking a Senior Machine Learning Engineer to join our team, focused on enhancing our AIOps and IT Service Management (ITSM) product through the integration of cutting-edge AI/ML features and functionality. As part of our innovative approach to revolutionizing the IT industry, you will play a pivotal role in leveraging data analysis techniques and advanced machine learning algorithms to drive meaningful insights and optimize our product's performance. With a particular emphasis on end-to-end machine learning lifecycle management and MLOps, you will collaborate with cross-functional teams to develop, deploy, and continuously improve AI-driven solutions tailored to our customers' needs. From semantic search and AI chatbots to root cause analysis based on metrics, logs, and traces, you will have the opportunity to tackle diverse challenges and shape the future of intelligent IT operations.

Role & Responsibility:
• Lead the end-to-end machine learning lifecycle: understand the business problem statement, convert it into an ML problem statement, and cover data acquisition, exploration, feature engineering, model selection, training, evaluation, deployment, and monitoring (MLOps).
• Lead a team of ML Engineers to solve the business problem, get it implemented in the product and QA-validated, and improve it based on customer feedback.
• Collaborate with product managers to understand business needs and translate them into technical requirements for AI/ML solutions.
• Design, develop, and implement machine learning algorithms and models, including but not limited to statistics, regression, classification, clustering, and transformer-based architectures.
• Preprocess and analyze large datasets to extract meaningful insights and prepare data for model training.
• Build and optimize machine learning pipelines for model training and inference using relevant frameworks.
• Fine-tune existing models and/or train custom models to address specific use cases.
• Enhance the accuracy and performance of existing AI/ML models through monitoring, iterative refinement, and optimization techniques.
• Collaborate closely with cross-functional teams to integrate AI/ML features seamlessly into our product, ensuring scalability, reliability, and maintainability.
• Document your work clearly and concisely for future reference and knowledge sharing within the team.
• Stay ahead of the latest developments in machine learning research and technology and evaluate their potential applicability to our product roadmap.

Skills and Qualifications:
• Bachelor's or higher degree in Computer Science, Engineering, Mathematics, or related field.
• Minimum 5+ years of experience as a Machine Learning Engineer or similar role.
• Proficiency in data analysis techniques and tools to derive actionable insights from complex datasets.
• Solid understanding and practical experience with machine learning algorithms and techniques, including statistics, regression, classification, clustering, and transformer-based models.
• Hands-on experience with end-to-end machine learning lifecycle management and MLOps practices.
• Proficiency in programming languages such as Python and familiarity with at least one of the following: Java, Golang, .NET, Rust.
• Experience with machine learning frameworks/libraries (e.g., TensorFlow, PyTorch, scikit-learn) and MLOps tools (e.g., MLflow, Kubeflow).
• Experience with ML.NET and other machine learning frameworks.
• Familiarity with natural language processing (NLP) techniques and tools.
• Excellent communication and teamwork skills, with the ability to effectively convey complex technical concepts to diverse audiences.
• Proven track record of delivering high-quality, scalable machine learning solutions in a production environment.

Posted 1 week ago

Apply

3.0 years

20 - 25 Lacs

Bengaluru, Karnataka, India

On-site

What You Need To Succeed
Master's degree or equivalent experience in Machine Learning. 3+ years of industry experience in ML, software engineering, and data engineering. Proficiency in Python, PyTorch, TensorFlow, and Scikit-learn. Strong programming skills in Python and JavaScript. Hands-on experience with MLOps practices. Ability to work with research and product teams. Excellent problem-solving skills and a track record of innovation. Passion for learning and applying the latest technological advancements.

Position Overview
As an MLE-2, you will design, implement, and optimize AI solutions while ensuring model success. You will lead the ML lifecycle from development to deployment, collaborate with cross-functional teams, and enhance AI capabilities to drive innovation and impact.

Key Responsibilities
Design and implement AI product features. Maintain and optimize existing AI systems. Train, evaluate, deploy, and monitor ML models. Design ML pipelines for experiment, model, and feature management. Implement A/B testing and scalable model inferencing APIs. Optimize GPU architectures and parallel training, and fine-tune models for improved performance. Deploy LLM solutions tailored to specific use cases. Ensure DevOps and LLMOps best practices using Kubernetes, Docker, and orchestration frameworks.

Technical Requirements
LLM & ML: Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, Llama
LLMOps: MLflow, LangChain, LangGraph, LangFlow, Langfuse, LlamaIndex, SageMaker, AWS Bedrock, Azure AI
Databases: MongoDB, PostgreSQL, Pinecone, ChromaDB
Cloud: AWS, Azure
DevOps: Kubernetes, Docker
Languages: Python, SQL, JavaScript
Certifications (Bonus): AWS Professional Solution Architect, AWS Machine Learning Specialty, Azure Solutions Architect Expert

What You'll Do
Collaborate with cross-functional teams to design and build scalable ML solutions. Implement state-of-the-art ML techniques, including NLP, Generative AI, RAG, and Transformer architectures. Deploy and monitor ML models for high performance and reliability. Innovate through research, staying ahead of industry trends. Build scalable data pipelines following best practices. Present key insights and drive decision-making.

Skills: scikit-learn, NLP, Generative AI, Kubernetes, Python, MLOps, PyTorch, data engineering, TensorFlow, Docker, AWS, Azure, machine learning, JavaScript
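As a rough illustration of the "scalable model inferencing APIs" this listing mentions, a minimal FastAPI serving sketch is shown below; the model file, request schema, and endpoint name are assumptions for illustration, not part of the posting.

```python
# Minimal FastAPI model-serving sketch; the model artifact and schema are hypothetical.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="demo-inference-api")
model = joblib.load("model.joblib")  # hypothetical artifact produced by a training pipeline


class Features(BaseModel):
    values: list[float]  # flat feature vector; a real schema would name each field


@app.post("/predict")
def predict(features: Features) -> dict:
    X = np.asarray(features.values).reshape(1, -1)
    prediction = model.predict(X)[0]
    return {"prediction": float(prediction)}

# Run locally with:  uvicorn app:app --reload   (assuming this file is app.py)
```

In practice such an endpoint would sit behind an A/B routing layer and be containerized for Kubernetes, as the responsibilities above describe.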

Posted 1 week ago

Apply

6.0 years

20 - 25 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Machine Learning Engineer – 2
Location: Onsite – Bengaluru, Karnataka, India
Experience Required: 3 – 6 Years
Compensation: ₹20 – ₹25 LPA
Employment Type: Full-Time
Work Mode: Onsite Only (No Remote)

About the Company:
A fast-growing Y Combinator-backed SaaS startup is revolutionizing underwriting in the insurance space through AI and Generative AI. Their platform empowers insurance carriers in the U.S. to make faster, more accurate decisions by automating key processes and enhancing risk assessment. As they expand their AI capabilities, they're seeking a Machine Learning Engineer – 2 to build scalable ML solutions using NLP, Computer Vision, and LLM technologies.

Role Overview:
As a Machine Learning Engineer – 2, you'll take ownership of designing, developing, and deploying ML systems that power critical features across the platform. You'll lead end-to-end ML workflows, working with cross-functional teams to deliver real-world AI solutions that directly impact business outcomes.

Key Responsibilities:
Design and develop robust AI product features aligned with user and business needs
Maintain and enhance existing ML/AI systems
Build and manage ML pipelines for training, deployment, monitoring, and experimentation
Deploy scalable inference APIs and conduct A/B testing
Optimize GPU architectures and fine-tune transformer/LLM models
Build and deploy LLM applications tailored to real-world use cases
Implement DevOps/MLOps best practices with tools like Docker and Kubernetes

Tech Stack & Tools
Machine Learning & LLMs: GPT, LLaMA, Gemini, Claude, Hugging Face Transformers; PyTorch, TensorFlow, Scikit-learn
LLMOps & MLOps: LangChain, LangGraph, LangFlow, Langfuse; MLflow, SageMaker, LlamaIndex, AWS Bedrock, Azure AI
Cloud & Infrastructure: AWS, Azure; Kubernetes, Docker
Databases: MongoDB, PostgreSQL, Pinecone, ChromaDB
Languages: Python, SQL, JavaScript

What You'll Do
Collaborate with product, research, and engineering teams to build scalable AI solutions
Implement advanced NLP and Generative AI models (e.g., RAG, Transformers)
Monitor and optimize model performance and deployment pipelines
Build efficient, scalable data and feature pipelines
Stay updated on industry trends and contribute to internal innovation
Present key insights and ML solutions to technical and business stakeholders

Requirements (Must-Have):
3–6 years of experience in Machine Learning and software/data engineering
Master's degree (or equivalent) in ML, AI, or related technical fields
Strong hands-on experience with Python, PyTorch/TensorFlow, and Scikit-learn
Familiarity with MLOps, model deployment, and production pipelines
Experience working with LLMs and modern NLP techniques
Ability to work collaboratively in a fast-paced, product-driven environment
Strong problem-solving and communication skills

Bonus Certifications:
AWS Machine Learning Specialty
AWS Solution Architect – Professional
Azure Solutions Architect Expert

Why Apply
Work directly with a high-caliber founding team
Help shape the future of AI in the insurance space
Gain ownership and visibility in a product-focused engineering role
Opportunity to innovate with state-of-the-art AI/LLM tech
Be part of a fast-moving team with real market traction

📍 Note: This is an onsite-only role based in Bengaluru. Remote work is not available.
Skills: MLOps, software/data engineering, TensorFlow, MongoDB, LLMs, Docker, machine learning, NLP, computer vision, Python, Azure, Kubernetes, modern NLP techniques, AI, SQL, PyTorch, Scikit-learn, PostgreSQL, JavaScript, AWS

Posted 1 week ago

Apply

8.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote

Location: Preferred: Ahmedabad, Gandhinagar, Hybrid (Can consider Remote case to case) Department: COE Experience: 8+ Years (with hands-on AI/ML architecture experience) Education: Ph.D. or Master's in Computer Science, Data Science, Artificial Intelligence, or related fields Job Summary: We are seeking an experienced AI/ML Architect with a strong academic background and industry experience to lead the design and implementation of AI/ML solutions across diverse industry domains. The ideal candidate will act as a trusted advisor to clients, understanding their business problems, and crafting scalable AI/ML strategies and solutions aligned to their vision. Key Responsibilities: Engage with enterprise customers and stakeholders to gather business requirements, problem statements, and aspirations. Translate business challenges into scalable and effective AI/ML-driven solutions and architectures. Develop AI/ML adoption strategies tailored to customer maturity, use cases, and ROI potential. Design end-to-end ML pipelines and architecture (data ingestion, processing, model training, deployment, and monitoring). Collaborate with data engineers, scientists, and business SMEs to build and operationalize AI/ML solutions. Present technical and strategic insights to both technical and non-technical audiences, including executives. Lead POCs, pilots, and full-scale implementations. Stay updated on the latest research, technologies, tools, and trends in AI/ML and integrate them into customer solutions. Contribute to proposal development, technical documentation, and pre-sales engagements. Required Qualifications: Ph.D. or Master’s degree in Computer Science, Data Science, Artificial Intelligence, Machine Learning, or related field. 8+ years of experience in the AI/ML field, with a strong background in solution architecture. Deep knowledge of machine learning algorithms, NLP, computer vision, deep learning frameworks (TensorFlow, PyTorch, etc.). Experience with cloud AI/ML services (AWS SageMaker, Azure ML, GCP Vertex AI, etc.). Strong communication and stakeholder management skills. Proven track record of working directly with clients to understand business needs and deliver AI solutions. Familiarity with MLOps practices and tools (Kubeflow, MLflow, Airflow, etc.). Preferred Skills: Experience in building GenAI or Agentic AI applications. Knowledge of data governance, ethics in AI, and explainable AI. Ability to lead cross-functional teams and mentor junior data scientists/engineers. Publications or contributions to AI/ML research communities (preferred but not mandatory).

Posted 1 week ago

Apply

0.0 - 4.0 years

0 Lacs

Gurugram, Haryana

On-site

Job Description
Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company. A leader in the convenience store and fuel space, it has a footprint across 31 countries and territories. The Circle K India Data & Analytics team is an integral part of ACT's Global Data & Analytics Team, and the Data Scientist/Senior Data Scientist will be a key player on this team that will help grow analytics globally at ACT. The hired candidate will partner with multiple departments, including Global Marketing, Merchandising, Global Technology, and Business Units.

Department: Data & Analytics
Location: Cyber Hub, Gurugram, Haryana (5 days in office)
Job Type: Permanent, Full-Time (40 Hours)
Reports To: Senior Manager Data Science & Analytics

About the role
The incumbent will be responsible for delivering advanced analytics projects that drive business results, including interpreting business needs, selecting the appropriate methodology, data cleaning, exploratory data analysis, model building, and creation of polished deliverables.

Roles & Responsibilities

Analytics & Strategy
Analyse large-scale structured and unstructured data; develop deep-dive analyses and machine learning models in retail, marketing, merchandising, and other areas of the business
Utilize data mining, statistical and machine learning techniques to derive business value from store, product, operations, financial, and customer transactional data
Apply multiple algorithms or architectures and recommend the best model with in-depth description to evangelize data-driven business decisions
Utilize cloud setup to extract processed data for statistical modelling and big data analysis, and visualization tools to represent large sets of time series/cross-sectional data

Operational Excellence
Follow industry standards in coding solutions and follow the programming life cycle to ensure standard practices across the project
Structure hypotheses, build thoughtful analyses, develop underlying data models and bring clarity to previously undefined problems
Partner with Data Engineering to build, design and maintain core data infrastructure, pipelines and data workflows to automate dashboards and analyses

Stakeholder Engagement
Work collaboratively across multiple sets of stakeholders – business functions, Data Engineers, Data Visualization experts – to deliver on project deliverables
Articulate complex data science models to business teams and present the insights in easily understandable and innovative formats

Job Requirements

Education
Bachelor's degree required, preferably with a quantitative focus (Statistics, Business Analytics, Data Science, Math, Economics, etc.)
Master's degree preferred (MBA/MS Computer Science/M.Tech Computer Science, etc.)

Relevant Experience
3 - 4 years for Data Scientist
Relevant working experience in a data science/advanced analytics role

Behavioural Skills
Delivery Excellence
Business disposition
Social intelligence
Innovation and agility

Knowledge
Functional Analytics (Supply chain analytics, Marketing Analytics, Customer Analytics, etc.)
Statistical modelling using analytical tools (R, Python, KNIME, etc.)
Knowledge of statistics and experimental design (A/B testing, hypothesis testing, causal inference)
Practical experience building scalable ML models, feature engineering, model evaluation metrics, and statistical inference
Practical experience deploying models using MLOps tools and practices (e.g., MLflow, DVC, Docker, etc.)
Strong coding proficiency in Python (Pandas, Scikit-learn, PyTorch/TensorFlow, etc.)
Big data technologies & frameworks (AWS, Azure, GCP, Hadoop, Spark, etc.)
Enterprise reporting systems, relational (MySQL, Microsoft SQL Server, etc.) and non-relational (MongoDB, DynamoDB) database management systems, and data engineering tools
Business intelligence & reporting (Power BI, Tableau, Alteryx, etc.)
Microsoft Office applications (MS Excel, etc.)
#LI-DS1
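As a small illustration of the A/B testing and hypothesis-testing knowledge listed above, here is a sketch of a two-sample proportion test on conversion counts; the numbers and the choice of a z-test are illustrative assumptions.

```python
# Two-proportion z-test for a hypothetical A/B test on conversion rates.
from statsmodels.stats.proportion import proportions_ztest

conversions = [530, 590]     # converted users in control / variant (made-up numbers)
visitors = [10_000, 10_000]  # users exposed to each variant

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# A small p-value (e.g. < 0.05) would suggest the variant's conversion rate differs
# from control; a real analysis would also check power, duration, and guardrail metrics.
```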

Posted 1 week ago

Apply

4.0 - 9.0 years

0 Lacs

Andhra Pradesh, India

On-site

At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in data science and machine learning engineering at PwC will focus on leveraging advanced analytics and machine learning techniques to extract insights from large datasets and drive data-driven decision making. You will work on developing predictive models, conducting statistical analysis, and creating data visualisations to solve complex business problems.

The Opportunity
When you join PwC Acceleration Centers (ACs), you step into a pivotal role focused on actively supporting various Acceleration Center services, from Advisory to Assurance, Tax and Business Services. In our innovative hubs, you'll engage in challenging projects and provide distinctive services to support client engagements through enhanced quality and innovation. You'll also participate in dynamic and digitally enabled training that is designed to grow your technical and professional skills. As part of the Data Science team you will design and deliver scalable AI applications that drive business transformation. As a Senior Associate you will analyze complex problems, mentor junior team members, and build meaningful client connections while navigating the evolving landscape of AI and machine learning. This role offers the chance to work on innovative technologies, collaborate with cross-functional teams, and contribute to creative solutions that shape the future of the industry.

Responsibilities
Design and implement scalable AI applications to facilitate business transformation
Analyze intricate problems and propose practical solutions
Mentor junior team members to enhance their skills and knowledge
Establish and nurture meaningful relationships with clients
Navigate the dynamic landscape of AI and machine learning
Collaborate with cross-functional teams to drive innovative solutions
Utilize advanced technologies to improve project outcomes
Contribute to the overall strategy of the Data Science team

What You Must Have
Bachelor's Degree in Computer Science, Engineering, or equivalent technical discipline
4-9 years of experience in Data Science/ML/AI roles
Oral and written proficiency in English required

What Sets You Apart
Proficiency in Python and data science libraries
Hands-on experience with Generative AI and prompt engineering
Familiarity with cloud platforms like Azure, AWS, GCP
Understanding of production-level AI systems and CI/CD
Experience with Docker, Kubernetes for ML workloads
Knowledge of MLOps tooling and pipelines
Demonstrated track record of delivering AI-driven solutions

Preferred Knowledge/Skills
Please reference the skill categories below for job description details.

About PwC CTIO – AI Engineering
PwC's Commercial Technology and Innovation Office (CTIO) is at the forefront of emerging technology, focused on building transformative AI-powered products and driving enterprise innovation. The AI Engineering team within CTIO is dedicated to researching, developing, and operationalizing cutting-edge technologies such as Generative AI, Large Language Models (LLMs), AI Agents, and more. Our mission is to continuously explore what's next—enabling business transformation through scalable AI/ML solutions while remaining grounded in research, experimentation, and engineering excellence.
Role Overview
We are seeking a Senior Associate – Data Science/ML/DL/GenAI to join our high-impact, entrepreneurial team. This individual will play a key role in designing and delivering scalable AI applications, conducting applied research in GenAI and deep learning, and contributing to the team's innovation agenda. This is a hands-on, technical role ideal for professionals passionate about AI-driven transformation.

Key Responsibilities
Design, develop, and deploy machine learning, deep learning, and Generative AI solutions tailored to business use cases.
Build scalable pipelines using Python (and frameworks such as Flask/FastAPI) to operationalize data science models in production environments.
Prototype and implement solutions using state-of-the-art LLM frameworks such as LangChain, LlamaIndex, LangGraph, or similar, and develop applications in Streamlit/Chainlit for demo purposes.
Design advanced prompts and develop agentic LLM applications that autonomously interact with tools and APIs.
Fine-tune and pre-train LLMs (Hugging Face and similar libraries) to align with business objectives.
Collaborate in a cross-functional setup with ML engineers, architects, and product teams to co-develop AI solutions.
Conduct R&D in NLP, CV, and multi-modal tasks, and evaluate model performance with production-grade metrics.
Stay current with AI research and industry trends; continuously upskill to integrate the latest tools and methods into the team's work.

Required Skills & Experience
4 to 9 years of experience in Data Science/ML/AI roles.
Bachelor's degree in Computer Science, Engineering, or equivalent technical discipline (BE/BTech/MCA).
Proficiency in Python and related data science libraries: Pandas, NumPy, SciPy, Scikit-learn, TensorFlow, PyTorch, Keras, etc.
Hands-on experience with Generative AI, including prompt engineering, LLM fine-tuning, and deployment.
Experience with agentic LLMs and task orchestration using tools like LangGraph or AutoGPT-like flows.
Strong knowledge of NLP techniques, transformer architectures, and text analysis.
Proven experience working with cloud platforms (preferably Azure; AWS/GCP also considered).
Understanding of production-level AI systems, including CI/CD, model monitoring, and cloud-native architecture (need not develop from scratch).
Familiarity with ML algorithms: XGBoost, GBM, k-NN, SVM, Decision Forests, Naive Bayes, Neural Networks, etc.
Exposure to deploying AI models via APIs and integration into larger data ecosystems.
Strong understanding of model operationalization and lifecycle management.
Experience with Docker, Kubernetes, and containerized deployments for ML workloads.
Use of MLOps tooling and pipelines (e.g., MLflow, Azure ML, SageMaker, etc.).
Experience in full-stack AI applications, including visualization (e.g., Power BI, D3.js).
Demonstrated track record of delivering AI-driven solutions as part of large-scale systems.
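As a small illustration of the prompt-engineering work this role describes, here is a minimal chat-completion call using the OpenAI Python SDK; the model name, system prompt, and document text are assumptions, and frameworks like LangChain or LlamaIndex would wrap a similar call.

```python
# Minimal prompt-engineering sketch using the OpenAI Python SDK (v1.x client).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

system_prompt = (
    "You are an assistant that summarizes business documents. "
    "Answer in at most three bullet points and cite the section you used."
)
document = "Q3 revenue grew 12% year over year, driven by the healthcare vertical..."  # made-up text

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"Summarize the following:\n\n{document}"},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```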

Posted 1 week ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

We are seeking a skilled and innovative Machine Learning Engineer with over 5 years of experience to design, develop, and deploy scalable ML solutions. You will work closely with data scientists, software engineers, and product teams to solve real-world problems using state-of-the-art machine learning and deep learning techniques. Key Responsibilities Design, build, and optimize machine learning models and pipelines for classification, regression, clustering, recommendation, and forecasting Implement and fine-tune deep learning models using frameworks like TensorFlow, PyTorch, or Keras Collaborate with cross-functional teams to understand business problems and convert them into technical solutions Develop data preprocessing, feature engineering, and model evaluation strategies Build and deploy models into production using CI/CD practices and MLOps tools Monitor model performance and retrain as necessary to ensure accuracy and reliability Create and maintain technical documentation, and provide knowledge sharing within the team Stay updated on the latest research, tools, and techniques in machine learning and AI Required Skills & Experience 5+ years of experience in Machine Learning Engineering or Applied Data Science Proficiency in Python and ML libraries such as scikit-learn, pandas, NumPy, TensorFlow, or PyTorch Solid understanding of mathematics, statistics, and ML/DL algorithms Experience with end-to-end ML lifecycle from data collection and cleaning to model deployment and monitoring Strong knowledge of SQL and working with large datasets Experience deploying ML models on cloud platforms (e.g., AWS, Azure, GCP) Familiarity with Docker, Kubernetes, MLflow, or other MLOps tools Good understanding of REST APIs, microservices, and backend integration Nice To Have Exposure to NLP, Computer Vision, or Generative AI techniques Experience with big data technologies like Spark, Hadoop, or Hive Working knowledge of data labeling, AutoML, or active learning Experience with feature stores, model registries, or streaming data (Kafka, Flink) Educational Qualification Bachelors or Masters degree in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field Additional certifications in AI/ML are a plus (ref:hirist.tech)

Posted 1 week ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Role: AI Engineer
Experience: 3 to 6 years
Work Mode: WFO / Hybrid / Remote if applicable
Immediate joiners preferred.

Job Description
The ideal candidate will have relevant experience, as we are building an AI-powered workforce intelligence platform that helps businesses optimize talent strategies, enhance decision making, and drive operational efficiency. Our software leverages cutting-edge AI, NLP, and data science to extract meaningful insights from vast amounts of structured and unstructured workforce data. As part of our new AI team, you will have the opportunity to work on real-world AI applications, contribute to innovative NLP solutions, and gain hands-on experience in building AI-driven products from the ground up.

Required Skills & Qualifications
Strong experience in Python programming.
3+ years of experience in Data Science/NLP (freshers with strong NLP projects are welcome).
Proficiency in Python, PyTorch, Scikit-learn, and NLP libraries (NLTK, Hugging Face).
Basic knowledge of cloud platforms (AWS, GCP, or Azure).
Familiarity with MLOps tools like Airflow, MLflow, or similar.
Experience with big data processing (Spark, Pandas, or Dask).
Experience with SQL for data manipulation and analysis.
Assist in designing, training, and optimizing ML/NLP models using PyTorch, NLTK, Scikit-learn, and Transformer models (BERT, GPT, etc.).
Experience with GenAI tech stacks, including foundational models (GPT-4, Claude, Gemini), frameworks (LangChain, LlamaIndex), and deployment tools (Hugging Face, AWS Bedrock, Vertex AI, vector DBs like FAISS/Pinecone).
Help deploy AI/ML solutions on AWS, GCP, or Azure.
Collaborate with engineers to integrate AI models into production systems.
Expertise in using SQL and Python to clean, preprocess, and analyze large datasets.

Learn & Innovate
Stay updated with the latest advancements in NLP, AI, and ML frameworks.
Strong analytical and problem-solving skills.
Willingness to learn, experiment, and take ownership in a fast-paced startup environment.

Nice-to-Have Qualities
Desire to grow within the company.
Team player and quick learner.
Performance-driven.
Strong networking and outreach skills.
An exploratory aptitude and a go-getter attitude.
Ability to communicate and collaborate with the team at ease.
Drive to get results and not let anything get in your way.
Critical and analytical thinking skills, with keen attention to detail.
Demonstrated ownership and a drive for excellence in everything you do.
A high level of curiosity, keeping abreast of the latest technologies and tools.
Ability to pick up new software easily, represent yourself well among peers, and coordinate during meetings with customers.

What We Offer
We offer a market-leading salary along with a comprehensive benefits package to support your well-being. Enjoy a hybrid or remote work setup that prioritizes work-life balance and personal wellbeing. We invest in your career through continuous learning and internal growth opportunities. Be part of a dynamic, inclusive, and vibrant workplace where your contributions are recognized and rewarded. We believe in straightforward policies, open communication, and a supportive work environment where everyone thrives.

(ref:hirist.tech)
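To illustrate the embedding and vector-database work this listing mentions (FAISS/Pinecone, semantic search), here is a minimal FAISS similarity-search sketch; the embedding model, corpus, and query are illustrative assumptions only.

```python
# Minimal semantic-search sketch: embed a small corpus and query it with FAISS.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Employee onboarding checklist and policies",
    "Quarterly workforce attrition analysis",
    "Guide to performance review cycles",
]  # made-up corpus

model = SentenceTransformer("all-MiniLM-L6-v2")  # common embedding model; any encoder works
doc_vecs = model.encode(documents, convert_to_numpy=True).astype("float32")
faiss.normalize_L2(doc_vecs)                     # normalize so inner product = cosine similarity

index = faiss.IndexFlatIP(doc_vecs.shape[1])
index.add(doc_vecs)

query = model.encode(["why are people leaving the company?"], convert_to_numpy=True).astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, 2)             # top-2 nearest documents
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {documents[i]}")
```

A managed vector database such as Pinecone exposes an equivalent upsert/query flow; FAISS is shown here only because it runs locally.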

Posted 1 week ago

Apply

4.0 - 6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About Us At Digilytics, we build and deliver easy to use AI products to the secured lending and consumer industry sectors. In an ever-crowded world of clever technology solutions looking for a problem to solve, our solutions start with a keen understanding of what creates and what destroys value in our clients business. Founded by Arindom Basu, the leadership of Digilytics is deeply rooted in leveraging disruptive technology to drive profitable business growth. With over 50 years of combined experience in technology-enabled change, the Digilytics leadership is focused on building a values-first firm that will stand the test of time. We are currently focussed on developing a product, Digilytics RevEL, to revolutionise loan origination for secured lending covering mortgages, motor and business lending. The product leverages the latest AI techniques to process loan application and loan documents to deliver improved customer and colleague experience, while improving productivity and throughput and reducing processing costs. About The Role Digilytics is pioneering the development of intelligent mortgage solutions in International and Indian markets. We are looking for Data Scientist who has strong NLP and computer vision expertise. We are looking for experienced data scientists, who have the aspirations and appetite for working in a start-up environment, and with relevant industry experience to make a significant contribution to our DigilyticsTM platform and solutions. Primary focus would be to apply machine learning techniques for data extraction from documents from variety of formats including scans and handwritten documents. Responsibilities Develop a learning model for high accuracy extraction and validation of documents, e.g. in mortgage industry Work with state-of-the-art language modelling approaches such as transformer-based architectures while integrating capabilities across NLP, computer vision, and machine learning to build robust multi-modal AI solutions Understand the DigilyticsTM vision and help in creating and maintaining a development roadmap Interact with clients and other team members to understand client-specific requirements of the platform Contribute to platform development team and deliver platform releases in a timely manner Liaise with multiple stakeholders and coordinate with our onshore and offshore entities Evaluate and compile the required training datasets from internal and public sources and contribute to the data pre-processing phase. Expected And Desired Skills Either of the following Deep learning frameworks PyTorch (preferred) or Tensorflow Good understanding in designing, developing, and optimizing Large Language Models (LLMs), with hands-on experience in leveraging cutting-edge advancements in NLP and generative AI Skilled in customizing LLMs for domain-specific applications through advanced fine-tuning, prompt engineering, and optimization strategies such as LoRA, quantization, and distillation. Knowledge of model versioning, serving, and monitoring using tools like MLflow, FastAPI, Docker, vLLM. 
Python used for analytics applications, including data pre-processing, EDA, statistical analysis, machine learning model performance evaluation and benchmarking
Good scripting and programming skills to integrate with other external applications
Good interpersonal skills and the ability to communicate and explain models
Ability to work in unfamiliar business areas and to use your skills to create solutions
Ability to both work in and lead a team, and to deliver and accept peer review
Flexible approach to working environment and hours

Experience
Between 4-6 years of relevant experience
Hands-on experience with Python and/or R, Machine Learning, and Deep Learning (desirable)
End-to-end development of a deep learning based model, covering model selection, data preparation, training, hyper-parameter optimization, evaluation, and performance reporting
Proven experience working in both smaller and larger organisations with multicultural exposure
Domain and industry experience serving customers in one or more of these industries: Financial Services, Professional Services, other Consumer Industries

Education Background
A Bachelor's degree in fields of study such as Computer Science, Mathematics, Statistics, or Data Science with strong programming content from a leading institute
An advanced degree such as a Master's or PhD is an advantage

(ref:hirist.tech)
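As a rough sketch of the LoRA-style, parameter-efficient fine-tuning mentioned in the skills above, here is a configuration example using the Hugging Face transformers and peft libraries; the base model and target module names are assumptions and vary by architecture.

```python
# Parameter-efficient fine-tuning sketch with LoRA adapters (peft + transformers).
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # hypothetical choice of base LLM
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

lora_cfg = LoraConfig(
    r=8,                                  # low-rank adapter dimension
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections; names depend on the model
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the small adapter weights are trainable
# Training would then proceed with a standard Trainer or custom loop over domain documents.
```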

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

The ideal candidate for this position in Ahmedabad should be a graduate with at least 3 years of experience. At Bytes Technolab, we strive to create a cutting-edge workplace infrastructure that empowers our employees and clients. Our focus on utilizing the latest technologies enables our development team to deliver high-quality software solutions for a variety of businesses.

You will be responsible for leveraging your 3+ years of experience in Machine Learning and Artificial Intelligence to contribute to our projects. Proficiency in Python programming and relevant libraries such as NumPy, Pandas, and scikit-learn is essential. Hands-on experience with frameworks like PyTorch, TensorFlow, Keras, Facenet, and OpenCV will be key in your role. Your work will involve GPU acceleration for deep learning model development using CUDA and cuDNN. A strong understanding of neural networks, computer vision, and other AI technologies will be crucial. Experience with Large Language Models (LLMs) like GPT, BERT, and LLaMA, and familiarity with frameworks such as LangChain, AutoGPT, and BabyAGI are preferred.

You should be able to translate business requirements into ML/AI solutions and deploy models on cloud platforms like AWS SageMaker, Azure ML, and Google AI Platform. Proficiency in ETL pipelines, data preprocessing, and feature engineering is required, along with experience in MLOps tools like MLflow, Kubeflow, or TensorFlow Extended (TFX). Expertise in optimizing ML/AI models for performance and scalability across different hardware architectures is necessary. Knowledge of Natural Language Processing (NLP), Reinforcement Learning, and data versioning tools like DVC or Delta Lake is a plus. Skills in containerization tools like Docker and orchestration tools like Kubernetes will be beneficial for scalable deployments.

You should have experience in model evaluation, A/B testing, and establishing continuous training pipelines, and be comfortable working in Agile/Scrum environments with cross-functional teams. An understanding of ethical AI principles, model fairness, and bias mitigation techniques is important. Familiarity with CI/CD pipelines for machine learning workflows and the ability to communicate complex concepts to technical and non-technical stakeholders will be valuable.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Maharashtra

On-site

At PwC, our data and analytics team focuses on utilizing data to drive insights and support informed business decisions. We leverage advanced analytics techniques to assist clients in optimizing their operations and achieving strategic goals. As a data analysis professional at PwC, your role will involve utilizing advanced analytical methods to extract insights from large datasets, enabling data-driven decision-making. Your expertise in data manipulation, visualization, and statistical modeling will be pivotal in helping clients solve complex business challenges.

PwC US - Acceleration Center is currently seeking a highly skilled MLOps/LLMOps Engineer to play a critical role in deploying, scaling, and maintaining Generative AI models. This position requires close collaboration with data scientists, ML/GenAI engineers, and DevOps teams to ensure the seamless integration and operation of GenAI models within production environments at PwC and for our clients. The ideal candidate will possess a strong background in MLOps practices and a keen interest in Generative AI technologies. With a preference for candidates with 4+ years of hands-on experience, core qualifications for this role include:
- 3+ years of experience developing and deploying AI models in production environments, alongside 1 year of working on proofs of concept and prototypes.
- Proficiency in software development, including building and maintaining scalable, distributed systems.
- Strong programming skills in languages such as Python and familiarity with ML frameworks like TensorFlow and PyTorch.
- Knowledge of containerization and orchestration tools like Docker and Kubernetes.
- Understanding of cloud platforms such as AWS, GCP, and Azure, including their ML/AI service offerings.
- Experience with continuous integration and delivery tools like Jenkins, GitLab CI/CD, or CircleCI.
- Familiarity with infrastructure as code tools like Terraform or CloudFormation.

Key Responsibilities:
- Develop and implement MLOps strategies tailored for Generative AI models to ensure robustness, scalability, and reliability.
- Design and manage CI/CD pipelines specialized for ML workflows, including deploying generative models like GANs, VAEs, and Transformers.
- Monitor and optimize AI model performance in production, utilizing tools for continuous validation, retraining, and A/B testing.
- Collaborate with data scientists and ML researchers to translate model requirements into scalable operational frameworks.
- Implement best practices for version control, containerization, and orchestration using industry-standard tools.
- Ensure compliance with data privacy regulations and company policies during model deployment.
- Troubleshoot and resolve issues related to ML model serving, data anomalies, and infrastructure performance.
- Stay updated with the latest MLOps and Generative AI developments to enhance AI capabilities.

Project Delivery:
- Design and implement scalable deployment pipelines for ML/GenAI models to transition them from development to production environments.
- Oversee the setup of cloud infrastructure and automated data ingestion pipelines to meet GenAI workload requirements.
- Create detailed documentation for deployment pipelines, monitoring setups, and operational procedures.

Client Engagement:
- Collaborate with clients to understand their business needs and design ML/LLMOps solutions.
- Present technical approaches and results to technical and non-technical stakeholders.
- Conduct training sessions and workshops for client teams.
- Create comprehensive documentation and user guides for clients.

Innovation and Knowledge Sharing:
- Stay updated with the latest trends in MLOps/LLMOps and Generative AI.
- Develop internal tools and frameworks to accelerate model development and deployment.
- Mentor junior team members and contribute to technical publications.

Professional and Educational Background:
- Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's Degree / MBA

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

As a Python Engineer with 2-4 years of experience, you will be responsible for building, deploying, and scaling Python applications along with AI/ML solutions. Your strong programming skills will be put to use in developing intelligent solutions and collaborating closely with clients and software engineers to implement machine learning models.

You should be an expert in Python, with advanced knowledge of Flask/FastAPI and server programming to implement complex business logic. Understanding fundamental design principles behind scalable applications is crucial. Independently designing, developing, and deploying machine learning models and AI algorithms tailored to business requirements will be a key aspect of your role.

Your responsibilities will include solving complex technical challenges through innovative AI/ML solutions, building and maintaining integrations (e.g., APIs) for machine learning models, conducting data preprocessing and feature engineering, and optimizing datasets for model training and inference. Monitoring and continuously improving model performance in production environments, focusing on scalability and efficiency, will also be part of your tasks. Managing model deployment, monitoring, and scaling using tools like Docker, Kubernetes, and cloud services will be essential. You will need to develop integration strategies for smooth communication between APIs and troubleshoot integration issues. Creating and maintaining comprehensive documentation for AI/ML projects will be necessary, along with staying updated on emerging trends and technologies in AI/ML.

Key Skills Required:
- Proficiency in Python, R, or similar languages commonly used in ML/AI development
- Hands-on experience with TensorFlow, PyTorch, scikit-learn, or similar ML libraries
- Strong knowledge of data preprocessing, data cleaning, and feature engineering
- Familiarity with model deployment using Docker, Kubernetes, or cloud platforms
- Understanding of statistical methods, probability, and data-driven decision-making processes
- Proficient in querying databases for ML projects
- Experience with ML lifecycle management tools like MLflow, Kubeflow
- Familiarity with NLP frameworks for language-based AI solutions
- Exposure to computer vision techniques
- Experience with managed ML services like AWS SageMaker, Azure Machine Learning, or Google Cloud AI Platform
- Familiarity with agile workflows and DevOps or CI/CD pipelines

Good-to-Have Skills:
- Exposure to big data processing tools like Spark, Hadoop
- Experience with agile development methodologies

The job location for this role is Ahmedabad/Pune, and the required educational qualifications include a UG degree in BE/BTech or a PG degree in ME/M.Tech/MCA/MSc-IT/Data Science, AI, Machine Learning, or a related field.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Kochi, Kerala

On-site

As a highly skilled Senior Machine Learning Engineer, you will leverage your expertise in Deep Learning, Large Language Models (LLMs), and MLOps/LLMOps to design, optimize, and deploy cutting-edge AI solutions. Your responsibilities will include developing and scaling deep learning models, fine-tuning LLMs (e.g., GPT, Llama), and implementing robust deployment pipelines for production environments.

You will be responsible for designing, training, fine-tuning, and optimizing deep learning models (CNNs, RNNs, Transformers) for various applications such as NLP, computer vision, or multimodal tasks. Additionally, you will fine-tune and adapt LLMs for domain-specific tasks like text generation, summarization, and semantic similarity. Experimenting with RLHF (Reinforcement Learning from Human Feedback) and alignment techniques will also be part of your role.

In the realm of Deployment & Scalability (MLOps/LLMOps), you will build and maintain end-to-end ML pipelines for training, evaluation, and deployment. Deploying LLMs and deep learning models in production environments using frameworks like FastAPI, vLLM, or TensorRT is crucial. You will optimize models for low-latency, high-throughput inference and implement CI/CD workflows for ML systems using tools like MLflow and Kubeflow.

Monitoring & Optimization will involve setting up logging, monitoring, and alerting for model performance metrics such as drift, latency, and accuracy. Collaborating with DevOps teams to ensure scalability, security, and cost-efficiency of deployed models will also be part of your responsibilities.

The ideal candidate will possess 5-7 years of hands-on experience in Deep Learning, NLP, and LLMs. Strong proficiency in Python, PyTorch, TensorFlow, Hugging Face Transformers, and LLM frameworks is essential. Experience with model deployment tools like Docker, Kubernetes, and FastAPI, along with knowledge of MLOps/LLMOps best practices and familiarity with cloud platforms (AWS, GCP, Azure), are required qualifications. Preferred qualifications include contributions to open-source LLM projects, showcasing your commitment to advancing the field of machine learning.

Posted 1 week ago

Apply

4.0 - 5.0 years

0 Lacs

Sholinganallur, Tamil Nadu, India

On-site

About Us
For over 20 years, Smart Data Solutions has been partnering with leading payer organizations to provide automation and technology solutions enabling data standardization and workflow automation. The company brings a comprehensive set of turn-key services to handle all claims and claims-related information regardless of format (paper, fax, electronic), digitizing and normalizing for seamless use by payer clients. Solutions include intelligent data capture, conversion and digitization, mailroom management, comprehensive clearinghouse services and proprietary workflow offerings. SDS' headquarters are just outside of St. Paul, MN, and the company leverages dedicated onshore and offshore resources as part of its service delivery model. The company counts over 420 healthcare organizations as clients, including multiple Blue Cross Blue Shield state plans, large regional health plans and leading independent TPAs, handling over 500 million transactions of varying types annually with a 98%+ customer retention rate. SDS has also invested meaningfully in automation and machine learning capabilities across its tech-enabled processes to drive scalability and greater internal operating efficiency while also improving client results. SDS recently partnered with a leading growth-oriented investment firm, Parthenon Capital, to further accelerate expansion and product innovation.

Location: 6th Floor, Block 4A, Millenia Business Park, Phase II, MGR Salai, Kandanchavadi, Perungudi, Chennai 600096, India.

Smart Data Solutions is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, age, marital status, pregnancy, genetic information, or other legally protected status. To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed above are representative of the knowledge, skill and/or ability required. Reasonable accommodation may be made to enable individuals with disabilities to perform essential job functions. Due to access to Protected Healthcare Information, employees in this role must be free of felony convictions on a background check report.

Responsibilities
Duties and responsibilities include but are not limited to:
Design and build ML pipelines for OCR extraction, document image processing, and text classification tasks.
Fine-tune or prompt large language models (LLMs) (e.g., Qwen, GPT, LLaMA, Mistral) for domain-specific use cases.
Develop systems to extract structured data from scanned or unstructured documents (PDFs, images, TIFs).
Integrate OCR engines (Tesseract, EasyOCR, AWS Textract, etc.) and improve their accuracy via pre-/post-processing.
Handle natural language processing (NLP) tasks such as named entity recognition (NER), summarization, classification, and semantic similarity.
Collaborate with product managers, data engineers, and backend teams to productionize ML models.
Evaluate models using metrics like precision, recall, F1-score, and confusion matrix, and improve model robustness and generalizability.
Maintain proper versioning, reproducibility, and monitoring of ML models in production.
The duties set forth above are essential job functions for the role. Reasonable accommodations may be made to enable individuals with disabilities to perform essential job functions.

Skills and Qualifications
4-5 years of experience in machine learning, NLP, or AI roles.
Proficiency with Python and ML libraries such as PyTorch, TensorFlow, scikit-learn, and Hugging Face Transformers.
Experience with LLMs (open-source or proprietary), including fine-tuning or prompt engineering.
Solid experience in OCR tools (Tesseract, PaddleOCR, etc.) and document parsing.
Strong background in text classification, tokenization, and vectorization techniques (TF-IDF, embeddings, etc.).
Knowledge of handling unstructured data (text, scanned images, forms).
Familiarity with MLOps tools: MLflow, Docker, Git, and model serving frameworks.
Ability to write clean, modular, and production-ready code.
Experience working with medical, legal, or financial document processing.
Exposure to vector databases (e.g., FAISS, Pinecone, Weaviate) and semantic search.
Understanding of document layout analysis (e.g., LayoutLM, Donut, DocTR).
Familiarity with cloud platforms (AWS, GCP, Azure) and deploying models at scale.
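As a rough sketch of the OCR-plus-extraction pipeline described above, here is a minimal example using Tesseract via pytesseract with simple regex post-processing; the input file and field pattern are made-up assumptions, and a production system would add layout analysis and an NER model.

```python
# OCR sketch: extract raw text from a scanned page, then pull one structured field.
import re

import pytesseract
from PIL import Image

page = Image.open("claim_page_1.png")          # hypothetical scanned claim document
raw_text = pytesseract.image_to_string(page)   # requires the tesseract binary to be installed

# Naive post-processing: look for a claim number like "CLM-2024-000123" (made-up format).
match = re.search(r"CLM-\d{4}-\d{6}", raw_text)
claim_number = match.group(0) if match else None
print({"claim_number": claim_number, "chars_extracted": len(raw_text)})
```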

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Responsibilities Lead 4-8 data scientists to deliver ML capabilities within a Databricks-Azure platform Guide delivery of complex ML systems that align with product and platform goals Balance scientific rigor with practical engineering Define model lifecycle, tooling, and architectural direction Requirements Skills & Experience Advanced ML: Supervised/unsupervised modeling, time-series, interpretability, MLflow, Spark, TensorFlow/PyTorch Engineering: Feature pipelines, model serving, CI/CD, production deployment Leadership: Mentorship, architectural alignment across subsystems, experimentation strategy Communication: Translate ML results into business impact Benefits What you get Best in class salary: We hire only the best, and we pay accordingly Proximity Talks: Meet other designers, engineers, and product geeks — and learn from experts in the field Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day This is a contract role based in Abu Dhabi. If relocation from India is required, the company will cover travel and accommodation expenses in addition to your salary About Us Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media and Entertainment companies in the world! We're headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, with a total net worth of $45.7 billion among our client companies. We are Proximity — a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company's success will be huge. You'll have the chance to work with experienced leaders who have built and led multiple tech, product and design teams. Here's a quick guide to getting to know us better: Watch our CEO, Hardik Jagda, tell you all about Proximity Read about Proximity's values and meet some of our Proxonauts here Explore our website, blog, and the design wing — Studio Proximity Get behind-the-scenes with us on Instagram! Follow @ProxWrks and @H.Jagda

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Responsibilities
Act as both a hands-on tech lead and product manager
Deliver data/ML platforms and pipelines in a Databricks-Azure environment
Lead a small delivery team and coordinate with enabling teams for product, architecture, and data science
Translate business needs into product strategy and technical delivery with a platform-first mindset

Requirements
Skills & Experience
Technical: Python, SQL, Databricks, Delta Lake, MLflow, Terraform, medallion architecture, data mesh/fabric, Azure (a Delta Lake medallion sketch follows this listing)
Product: Agile delivery, discovery cycles, outcome-focused planning, trunk-based development
Collaboration: Able to coach engineers, work with cross-functional teams, and drive self-service platforms
Communication: Clear in articulating decisions, roadmap, and priorities

Benefits
What you get:
Best in class salary: We hire only the best, and we pay accordingly
Proximity Talks: Meet other designers, engineers, and product geeks — and learn from experts in the field
Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day
This is a contract role based in Abu Dhabi. If relocation from India is required, the company will cover travel and accommodation expenses in addition to your salary.

About Us
Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media and Entertainment companies in the world! We're headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, with a total net worth of $45.7 billion among our client companies.
We are Proximity — a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company's success will be huge. You'll have the chance to work with experienced leaders who have built and led multiple tech, product and design teams.
Here's a quick guide to getting to know us better:
Watch our CEO, Hardik Jagda, tell you all about Proximity
Read about Proximity's values and meet some of our Proxonauts here
Explore our website, blog, and the design wing — Studio Proximity
Get behind-the-scenes with us on Instagram! Follow @ProxWrks and @H.Jagda
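The requirements above name Delta Lake and a medallion architecture. As a rough, hedged sketch only (paths, schema, and columns are hypothetical, and it assumes a Spark session with the Delta Lake extension available, e.g., on Databricks), a bronze-to-silver refinement step can look like this:

```python
# Hedged sketch: bronze -> silver step in a medallion layout on Delta Lake.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: raw events landed as-is (hypothetical path)
bronze = spark.read.format("delta").load("/lakehouse/bronze/events")

# Silver: cleaned, deduplicated, typed
silver = (
    bronze
    .dropDuplicates(["event_id"])
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .filter(F.col("event_id").isNotNull())
)

silver.write.format("delta").mode("overwrite").save("/lakehouse/silver/events")
```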

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Company Description
Vyva Consulting Inc. is a trusted partner in Sales Performance Management (SPM) and Incentive Compensation Management (ICM), specializing in delivering top-tier software consulting solutions. We help organizations optimize their sales operations, boost revenue, and maximize value. Our seasoned experts work with leading products such as Xactly, Varicent, and SPIFF, offering comprehensive implementation and post-implementation services. We focus on enhancing sales compensation strategies to drive business success.

Role Description
This is a full-time, on-site role for an Artificial Intelligence Intern, located in Hyderabad. We are seeking a motivated AI Engineer Intern to join our team and contribute to cutting-edge AI/ML projects. This internship offers hands-on experience with large language models, generative AI, and modern AI frameworks while working on real-world applications that impact our business objectives.

What You'll Do
Core Responsibilities
LLM Integration & Development: Build and prototype LLM-powered features using frameworks like LangChain, OpenAI SDK, or similar tools for content automation and intelligent workflows.
RAG System Implementation: Design and optimize Retrieval-Augmented Generation systems, including document ingestion, chunking strategies, embedding generation, and vector database integration.
Data Pipeline Development: Create robust data pipelines for AI/ML workflows, including data collection, cleaning, preprocessing, and annotation of large datasets.
Model Experimentation: Conduct experiments to evaluate, fine-tune, and optimize AI models for accuracy, performance, and scalability across different use cases.
Vector Database Operations: Implement similarity search solutions using vector databases (FAISS, Pinecone, Chroma) for intelligent Q&A, content recommendation, and context-aware responses (a FAISS sketch follows this listing).
Prompt Engineering: Experiment with advanced prompt engineering techniques to optimize outputs from generative models and ensure content quality.
Research & Innovation: Stay current with the latest AI/ML advancements, research new architectures and techniques, and build proof-of-concept implementations.

Technical Implementation
Deploy AI microservices and agents using containerization (Docker) and orchestration tools.
Collaborate with cross-functional teams (product, design, engineering) to align AI features with business requirements.
Create comprehensive documentation including system diagrams, API specifications, and implementation guides.
Analyze model performance metrics, document findings, and propose data-driven improvements.
Participate in code reviews and contribute to best practices for AI/ML development.

Required Qualifications
Education & Experience
Currently pursuing or recently completed a Bachelor's/Master's degree in Computer Science, Data Science, AI/ML, or a related field.
6+ months of hands-on experience with AI/ML projects (academic, personal, or professional).
Demonstrable portfolio of AI/ML projects via GitHub repositories, Jupyter notebooks, or deployed applications.

Technical Skills
Programming: Strong Python proficiency with experience in AI/ML libraries (NumPy, Pandas, Scikit-learn).
LLM Experience: Practical experience with large language models (OpenAI GPT, Claude, open-source models), including API integration and fine-tuning.
AI Frameworks: Familiarity with at least one of LangChain, OpenAI Agents SDK, AutoGen, or similar agentic AI frameworks.
RAG Architecture: Understanding of RAG system components and prior implementation experience (even in academic projects).
Vector Databases: Experience with vector similarity search using FAISS, Chroma, Pinecone, or similar tools.
Deep Learning: Familiarity with PyTorch or TensorFlow for model development and fine-tuning.

Screening Criteria
To effectively evaluate candidates, we will assess:
Portfolio Quality: Live demos or well-documented projects showing AI/ML implementation.
Technical Depth: Ability to explain RAG architecture, vector embeddings, and LLM fine-tuning concepts.
Problem-Solving: Approach to handling real-world AI challenges like hallucination, context management, and model evaluation.
Code Quality: Clean, documented Python code with proper version control practices.

Preferred Qualifications
Additional Technical Skills
Full-Stack Development: Experience building web applications with AI/ML backends.
Data Analytics: Proficiency in data manipulation (Pandas/SQL), visualization (Matplotlib/Seaborn), and statistical analysis.
MLOps/DevOps: Experience with Docker, Kubernetes, MLflow, or CI/CD pipelines for ML models.
Cloud Platforms: Familiarity with AWS, Azure, or GCP AI/ML services.
Databases: Experience with both SQL (PostgreSQL) and NoSQL (Elasticsearch, MongoDB) databases.

Soft Skills & Attributes
Analytical Mindset: Strong problem-solving skills with attention to detail in model outputs and data quality.
Communication: Ability to explain complex AI concepts clearly to both technical and non-technical stakeholders.
Collaboration: Proven ability to work effectively in cross-functional teams.
Learning Agility: Demonstrated ability to quickly adapt to new technologies and frameworks.
Initiative: Self-motivated with the ability to work independently and drive projects forward.

What We Offer
Professional Growth
Mentorship: Work directly with senior AI engineers and receive structured guidance.
Real Impact: Contribute to production AI systems used by real customers.
Learning Opportunities: Access to the latest AI tools, frameworks, and industry conferences.
Full-Time Conversion: Potential for a full-time offer based on performance and business needs.

Work Environment
Employee-First Culture: Flexible work arrangements with an emphasis on results.
Innovation Focus: Opportunity to work on cutting-edge AI applications.
Collaborative Team: Supportive environment that values diverse perspectives and ideas.
Competitive Compensation: Market-competitive internship stipend.

Application Requirements
Portfolio Submission
Please include the following in your application:
GitHub Repository: Link to your best AI/ML projects with detailed README files.
Project Demo: Video walkthrough or live demo of your most impressive AI application.
Technical Blog/Documentation: Any technical writing about AI/ML concepts or implementations.
Resume: Highlighting relevant coursework, projects, and any AI/ML experience.

Technical Assessment
Qualified candidates will complete a technical assessment covering:
Python programming and AI/ML libraries
LLM integration and prompt engineering
RAG system design and implementation
Vector database operations and similarity search
Model evaluation and optimization techniques

Ready to shape the future of AI? Apply now and join our team of innovative engineers building next-generation AI solutions.
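This internship describes similarity search over embeddings as one building block of a RAG system. Purely as an illustration (not Vyva's actual pipeline), here is a minimal FAISS sketch in which random vectors stand in for real document embeddings; the dimension and top-k value are arbitrary.

```python
# Hedged sketch: exact L2 similarity search with FAISS over placeholder embeddings.
import faiss
import numpy as np

dim = 384                     # e.g., the output size of a sentence-embedding model
rng = np.random.default_rng(0)
doc_vectors = rng.random((1000, dim), dtype="float32")   # stand-in document embeddings
query = rng.random((1, dim), dtype="float32")            # stand-in query embedding

index = faiss.IndexFlatL2(dim)   # brute-force index; fine for small corpora
index.add(doc_vectors)

distances, ids = index.search(query, 5)   # top-5 nearest documents
print(ids[0], distances[0])
```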

Posted 1 week ago

Apply

0.0 - 5.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Company Name: Bilight Solutions
Location: Chennai, Tamil Nadu (Nungambakkam)
Job Type: Full-time
Experience Level: Mid-level (2-5 years)

About the Role
We are looking for a skilled and motivated Data Scientist to join our team in Chennai. This is a work-from-office position located in the Nungambakkam area. You will be responsible for leveraging data to solve complex business challenges, driving strategic decisions, and creating innovative solutions. You will work on the full lifecycle of data science projects, from problem formulation and data collection to model development, deployment, and monitoring. We are looking for candidates who can join immediately and have a strong ability to work collaboratively and lead.

Key Responsibilities
Team Leadership: Take on a leadership role within the data science team, guiding junior members and ensuring project success.
Problem Solving: Collaborate with stakeholders to understand business problems and formulate data-driven solutions.
Data Analysis: Collect, clean, and analyze large and complex datasets to identify trends, patterns, and insights.
Model Development: Design, develop, and implement statistical models, machine learning algorithms, and predictive systems.
ETL & Data Pipelines: Be well-versed in the ETL process to build and maintain data pipelines for efficient data flow.
Data Visualization: Utilize Tableau Prep, Desktop, and Server to create compelling data visualizations and interactive dashboards.
Communication: Present findings and recommendations to both technical and non-technical audiences, telling a clear and compelling story with data.
Collaboration: Work closely with data engineers, software developers, product managers, and business analysts.
Continuous Learning: Stay up-to-date with the latest advancements in data science, machine learning, and AI, and apply them to business problems.

Required Qualifications
Education: Bachelor’s, Master’s, or Ph.D. in a quantitative field such as Computer Science, Statistics, Mathematics, Physics, Engineering, or a related discipline.
Experience: 2-5 years of professional experience as a Data Scientist or in a similar role, with a proven track record of team leadership.
Technical Skills: Strong proficiency in SQL for data querying and manipulation. Strong proficiency in Python and its data science libraries (e.g., Pandas, NumPy, Scikit-learn). Proven experience with the ETL process and building data pipelines.
Data Visualization: In-depth knowledge of and hands-on experience with the Tableau suite, including Tableau Prep, Desktop, and Server.
Machine Learning: Deep knowledge of machine learning concepts and algorithms (e.g., regression, classification, clustering, time series analysis).
Statistical Analysis: Strong foundation in statistical concepts, including hypothesis testing, experimental design, and predictive modeling (a hypothesis-testing sketch follows this listing).

Preferred Skills & Experience
Experience with big data technologies (e.g., Spark, Hadoop).
Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud) and their data science services.
Knowledge of MLOps tools and practices (e.g., Docker, Kubernetes, MLflow, Airflow).
Experience with Large Language Models (LLM) and Natural Language Processing (NLP).
Proven experience in a team leadership or mentorship role.

Job Types: Full-time, Permanent
Pay: From ₹450,000.00 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift, Monday to Friday
Work Location: In person
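The posting above lists hypothesis testing among the required statistical foundations. As a generic, hedged illustration only (the two samples are synthetic; in practice they would come from SQL extracts or experiment logs), a two-sample comparison might look like this:

```python
# Hedged sketch: Welch's two-sample t-test of the kind used in A/B or cohort analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=100.0, scale=15.0, size=500)    # e.g., baseline metric values
variant = rng.normal(loc=103.0, scale=15.0, size=500)    # e.g., metric after a change

t_stat, p_value = stats.ttest_ind(control, variant, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected.")
```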

Posted 1 week ago

Apply

9.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Company Overview
Founded in 2010, we've been recognized as a "Best Places to Work" company and have offices in the US (Boulder), UK (London), and India (Chennai). However, we are a remote-first company with employees across the globe! Today, we are a leading B2B marketing provider that offers two distinct solutions:
Integrate: a lead management and data governance SaaS platform for marketing operations and demand marketers. The Integrate platform makes every lead clean, compliant, and actionable, freeing enterprise B2B marketers from bad data and operational headaches so they can focus on what matters: generating revenue.
Pipeline360: media solutions that combine three powerful demand generation tools: targeted display, content syndication, and a comprehensive marketplace model. Pipeline360 ensures that marketers achieve 100% compliant and marketable leads by effectively engaging with audiences much earlier in the buying cycle, connecting with buyers at every stage of the process, and optimizing programs to drive performance.

Our Mission
Integrate: exists to make your lead data marketable so you can drive pipeline.
Pipeline360: exists to make the unpredictable predictable.

Why us? We are an organization of integrity, talent, passion, and vision with a long track record of growth, customer success, and a commitment to driving leading innovation and delivering world-class customer experience.

The Role: Integrate's data is treated as a critical corporate asset and is seen as a competitive advantage in our business. As a Lead Data Engineer you will be working in one of the world's largest cloud-based data lakes. You should be skilled in the architecture of enterprise data warehouse solutions using multiple platforms (EMR, RDBMS, columnar stores, cloud, Snowflake). You should have extensive experience in the design, creation, management, and business use of extremely large datasets. You should have excellent business and communication skills, enabling you to work with business owners to develop and define key business questions and to build data sets that answer those questions. Above all, you should be passionate about working with huge data sets and love bringing datasets together to answer business questions and drive change.

Responsibilities:
Design and develop workflows, programs, and ETL to support data ingestion, curation, and provisioning of fragmented data for Data Analytics, Product Analytics, and AI.
Work closely with Data Scientists, Software Engineers, Product Managers, Product Analysts, and other key stakeholders to gather and define requirements for Integrate's data needs.
Use Scala, SQL, Snowflake, and BI tools to deliver data to customers.
Understand MongoDB/PostgreSQL and transactional data workflows.
Design data models and build data architecture that enables reporting, analytics, advanced AI/ML, and Generative AI solutions.
Develop an understanding of the data and build business acumen.
Develop and maintain the data warehouse and data marts in the cloud using Snowflake (a query sketch follows this listing).
Create reporting dashboards for internal and client stakeholders.
Understand the business use cases and customer value behind large sets of data and develop meaningful analytic solutions.

Basic Qualifications:
Advanced degree in Statistics, Computer Science, or a related technical/scientific field.
9+ years of experience in a Data Engineer development role.
Advanced knowledge of SQL, Python, and data processing workflows.
Nice to have: Spark/Scala, MLflow, and AWS experience.
Strong experience and advanced technical skills writing APIs.
Extensive knowledge of data warehousing, ETL, and BI architectures, concepts, and frameworks, along with strength in metadata definition, data migration, and integration, with emphasis on both high-end OLTP and business intelligence solutions.
Develop complex stored procedures and queries to support applications as well as reporting solutions.
Optimize slow-running queries and improve query performance.
Create optimized queries and data migration scripts.
Leadership skills to mentor and train junior team members and stakeholders.
Capable of creating a long-term and short-term data architecture vision and a tactical roadmap to achieve it, starting from the current state.
Strong data management abilities (i.e., understanding data reconciliations).
Capable of facilitating data discovery sessions involving business subject matter experts.
Strong communication/partnership skills to gain the trust of stakeholders.
Knowledge of professional software engineering practices and best practices for the full software development lifecycle, including coding standards, code reviews, source control management, build processes, testing, and operations.

Preferred Qualifications:
Industry experience as a Data Engineer or in a related specialty (e.g., Software Engineer, Business Intelligence Engineer, Data Scientist) with a track record of manipulating, processing, and extracting value from large datasets.
Experience building data products incrementally and integrating and managing datasets from multiple sources.
Query performance tuning skills using Unix profiling tools and SQL.
Experience leading large-scale data warehousing and analytics projects, including using AWS technologies – Snowflake, Redshift, S3, EC2, Data Pipeline, and other big data technologies.

Integrate in the News:
Best Tech Startups in Arizona (2018-2021)
Integrate Acquires Akkroo
Integrate Acquires ListenLoop
Why Four MarTech CEO's Bet Big on Integrate
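The role above centers on delivering data from Snowflake to downstream consumers. As a rough, hedged sketch only, assuming the snowflake-connector-python package, here is how a simple query against a warehouse might be issued from Python; the account settings, schema, and table name are placeholders, not anything from this posting.

```python
# Hedged sketch: querying a Snowflake warehouse from Python.
# All connection details and the table name are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder
    user="my_user",            # placeholder
    password="***",            # placeholder; prefer key-pair auth or SSO in practice
    warehouse="ANALYTICS_WH",  # placeholder
    database="ANALYTICS",
    schema="MARTS",
)
try:
    cur = conn.cursor()
    cur.execute(
        "SELECT client_id, COUNT(*) AS txn_count "
        "FROM transactions GROUP BY client_id ORDER BY txn_count DESC LIMIT 10"
    )
    for client_id, txn_count in cur.fetchall():
        print(client_id, txn_count)
finally:
    conn.close()
```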

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Who You'll Work With
Driving lasting impact and building long-term capabilities with our clients is not easy work. You are the kind of person who thrives in a high performance/high reward culture - doing hard things, picking yourself up when you stumble, and having the resilience to try another way forward. In return for your drive, determination, and curiosity, we'll provide the resources, mentorship, and opportunities you need to become a stronger leader faster than you ever thought possible. Your colleagues—at all levels—will invest deeply in your development, just as much as they invest in delivering exceptional results for clients. Every day, you’ll receive apprenticeship, coaching, and exposure that will accelerate your growth in ways you won’t find anywhere else.

When you join us, you will have:
Continuous learning: Our learning and apprenticeship culture, backed by structured programs, is all about helping you grow while creating an environment where feedback is clear, actionable, and focused on your development. The real magic happens when you take the input from others to heart and embrace the fast-paced learning experience, owning your journey.
A voice that matters: From day one, we value your ideas and contributions. You’ll make a tangible impact by offering innovative ideas and practical solutions. We not only encourage diverse perspectives, but they are critical in driving us toward the best possible outcomes.
Global community: With colleagues across 65+ countries and over 100 different nationalities, our firm’s diversity fuels creativity and helps us come up with the best solutions for our clients. Plus, you’ll have the opportunity to learn from exceptional colleagues with diverse backgrounds and experiences.
World-class benefits: On top of a competitive salary (based on your location, experience, and skills), we provide a comprehensive benefits package, which includes medical, dental, mental health, and vision coverage for you, your spouse/partner, and children.

Your Impact
You will work in multi-disciplinary, global, Life Sciences focused environments, harnessing data to provide real-world impact for organizations globally. Our Life Sciences practice focuses on helping clients bring life-saving medicines and medical treatments to patients. This practice is one of the fastest growing practices and is comprised of a tight-knit community of consultants, research, solution, data, and practice operations colleagues across the firm. It is also one of the most globally connected sector practices, offering ample global exposure. The LifeSciences.AI (LS.AI) team is the practice’s assetization arm, focused on creating reusable digital and analytics assets to support our client work. LS.AI builds and operates tools that support senior executives in pharma and device manufacturers, for whom evidence-based decision-making and competitive intelligence are paramount. The team works directly with clients across Research & Development (R&D), Operations, Real World Evidence (RWE), Clinical Trials, and Commercial to build and scale digital and analytical approaches to addressing their most persistent priorities.

What you’ll learn:
How to apply data and machine learning engineering, as well as product development expertise, to address complex client challenges through part-time staffing on client engagements.
How to support the manager of data and machine learning engineering in developing a roadmap for data and machine learning engineering assets across cell-level initiatives.
How to productionalize AI prototypes and create deployment-ready solutions.
How to translate engineering concepts and explain design/architecture trade-offs and decisions to senior stakeholders.
How to write optimized code to enhance our AI Toolbox and codify methodologies for future deployment.
How to collaborate effectively within a multi-disciplinary team.
How to leverage new technologies and apply problem-solving skills in a multicultural and creative environment.

You will work on the frameworks and libraries that our teams of Data Scientists and Data Engineers use to progress from data to impact. You will guide global companies through analytics solutions to transform their businesses and enhance performance across industries including life sciences, global energy and materials (GEM), and advanced industries (AI) practices.

Real-World Impact – We provide unique learning and development opportunities internationally.
Fusing Tech & Leadership – We work with the latest technologies and methodologies and offer first-class learning programs at all levels.
Multidisciplinary Teamwork – Our teams include data scientists, engineers, project managers, and UX and visual designers who work collaboratively to enhance performance.
Innovative Work Culture – Creativity, insight, and passion come from being balanced. We cultivate a modern work environment through an emphasis on wellness, insightful talks, and training sessions.
Striving for Diversity – With colleagues from over 40 nationalities, we recognize the benefits of working with people from all walks of life.

Your Qualifications and Skills
Bachelor's degree in computer science or a related field; a master's degree is a plus.
3+ years of relevant work experience.
Experience with at least one of the following technologies: Python, Scala, Java, C++, with the ability to write production-quality, object-oriented code.
Strong, proven experience with distributed processing frameworks (Spark, Hadoop, EMR) and SQL/NoSQL is expected.
Commercial client-facing project experience is helpful, including working in close-knit teams.
Additional expertise with Python testing frameworks, data validation and data quality frameworks, feature engineering, chunking and document ingestion (a chunking sketch follows this listing), graph data structures (e.g., Neo4j), basic K8s (manifests, debugging, Docker, Argo Workflows), MLflow deployment and usage, generative AI frameworks (LangChain), and GPUs is a plus.
Ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets.
Proven ability to clearly communicate complex solutions; strong attention to detail.
Understanding of information security principles to ensure compliant handling and management of client data.
Experience and interest in cloud platforms such as AWS, Azure, Google Cloud Platform, or Databricks.
Experience with cloud development platforms such as AWS, Azure, or Google (and appropriate Bash/shell scripting).
Good to have: experience with CI/CD using GitHub Actions, CircleCI, or another CI/CD stack, plus end-to-end pipeline development including application deployment.
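The qualifications above mention chunking and document ingestion as a plus. As a plain, hedged illustration only (chunk size and overlap are arbitrary, and production systems often chunk by tokens or layout-aware sections instead), a fixed-size overlapping chunker can be as simple as:

```python
# Hedged sketch: fixed-size, overlapping word chunks for document ingestion.
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 40) -> list[str]:
    words = text.split()
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(words), step):
        chunk = " ".join(words[start:start + chunk_size])
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(words):
            break
    return chunks

if __name__ == "__main__":
    sample = "word " * 500   # stand-in for an ingested document
    pieces = chunk_text(sample)
    print(len(pieces), "chunks;", len(pieces[0].split()), "words in the first chunk")
```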

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Data Scientist
Location: Bengaluru, Gurugram
Experience: 5+ Years

About the Role:
We are looking for an experienced Data Scientist to develop and deploy AI/ML and GenAI solutions across large datasets using modern frameworks and cloud infrastructure. You’ll work closely with cross-functional teams to translate business requirements into impactful data products.

Key Responsibilities:
Collaborate with software engineers, stakeholders, and domain experts to define data-driven solutions.
Develop, implement, and deploy AI/ML, NLP/NLU, and deep learning models.
Preprocess and analyze large datasets and derive actionable insights.
Evaluate and optimize models for performance, efficiency, and scalability.
Deploy solutions to cloud platforms such as Azure (preferred), AWS, or GCP.
Monitor production models and iterate for continuous improvement.
Document processes, results, and best practices.

Must-Have Skills:
Bachelor's/Master’s in Computer Science, Data Science, Engineering, or a related field.
Strong programming in Python and SQL.
Experience with Scikit-learn, TensorFlow, PyTorch, etc.
Knowledge of ETL tools like Azure Data Factory, Databricks, and Data Lake.
Solid foundation in mathematics, probability, and statistics.
Exposure to GenAI, vector databases, and LLMs.
Experience working with cloud infrastructure (Azure preferred).

Good-to-Have Skills:
Experience with Flask, Django, or Streamlit (a Flask serving sketch follows this listing).
Knowledge of MLOps tools: MLflow, Kubeflow, CI/CD.
Familiarity with Docker and Kubernetes for model/container deployment.
#teceze
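The posting above lists Flask and Streamlit as good-to-have for exposing models. As a generic, hedged sketch only (the pickled model path, feature layout, and port are hypothetical), a minimal Flask inference endpoint could look like this:

```python
# Hedged sketch: a minimal Flask endpoint serving a pickled scikit-learn model.
# The model path and expected feature list are hypothetical.
import pickle

import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)
with open("model.pkl", "rb") as f:   # hypothetical artifact produced at training time
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()
    features = np.array(payload["features"], dtype=float).reshape(1, -1)
    prediction = model.predict(features)
    return jsonify({"prediction": prediction.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```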

Posted 1 week ago

Apply

0 years

0 Lacs

Sadar, Uttar Pradesh, India

On-site

Summary:
We are seeking a talented and motivated AI Engineer to join our team and focus on building cutting-edge Generative AI applications. The ideal candidate will possess a strong background in data science, machine learning, and deep learning, with specific experience in developing and fine-tuning Large Language Models (LLMs) and Small Language Models (SLMs). You should be comfortable managing the full lifecycle of AI projects, from initial design and data handling to deployment and production monitoring. A foundational understanding of software engineering principles is also required to collaborate effectively with engineering teams and ensure robust deployments.

Responsibilities:
Design, develop, and implement Generative AI solutions, including applications leveraging Retrieval-Augmented Generation (RAG) techniques.
Fine-tune existing Large Language Models (LLMs) and potentially develop smaller, specialized language models (SLMs) for specific tasks.
Manage the end-to-end lifecycle of AI model development, including data curation, feature extraction, model training, validation, deployment, and monitoring.
Research and experiment with state-of-the-art AI/ML/DL techniques to enhance model performance and capabilities.
Build and maintain scalable production pipelines for AI models.
Collaborate with data engineering and IT teams to define deployment roadmaps and integrate AI solutions into existing systems.
Develop AI-powered tools to solve business problems, such as summarization, chatbots, recommendation systems, or code assistance.
Stay updated with the latest advancements in Generative AI, machine learning, and deep learning.

Qualifications:
Proven experience as a Data Scientist, Machine Learning Engineer, or AI Engineer with a focus on LLMs and Generative AI.
Strong experience with Generative AI techniques and frameworks (e.g., RAG, fine-tuning, LangChain, LlamaIndex, PEFT, LoRA); a LoRA fine-tuning sketch follows this listing.
Solid foundation in machine learning (e.g., regression, classification, clustering, XGBoost, SVM) and deep learning (e.g., ANN, LSTM, RNN, CNN) concepts and applications.
Proficiency in Python and relevant libraries (e.g., Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch).
Experience with data science principles, including statistics, hypothesis testing, and A/B testing.
Experience deploying and managing models in production environments (e.g., using platforms like AWS, Databricks, MLflow).
Familiarity with data handling and processing tools (e.g., SQL, Spark/PySpark).
Basic understanding of software engineering practices, including version control (Git) and containerization (Docker).
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related quantitative field.

Preferred Skills:
Experience building RAG-based chatbots or similar applications.
Experience developing custom SLMs.
Experience with MLOps principles and tools (e.g., MLflow, Airflow).
Experience migrating ML workflows between cloud platforms.
Familiarity with vector databases and indexing techniques.
Experience with Python web frameworks (e.g., Django, Flask).
Experience building and integrating APIs (e.g., RESTful APIs).
Basic experience with front-end development or UI building for showcasing AI applications.

Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, or a related discipline.
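Because this posting highlights parameter-efficient fine-tuning with PEFT/LoRA, here is a hedged sketch of attaching a LoRA adapter to a small causal language model with Hugging Face peft. The base checkpoint (gpt2) and target modules are illustrative only; real choices depend on the model architecture and task.

```python
# Hedged sketch: wrapping a causal LM with a LoRA adapter via peft.
# The base checkpoint and target_modules are illustrative and vary by model family.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base_name = "gpt2"  # small stand-in checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(base_name)
base_model = AutoModelForCausalLM.from_pretrained(base_name)

lora_config = LoraConfig(
    r=8,                        # low-rank dimension
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection name in GPT-2; differs per model
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```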

Posted 1 week ago

Apply

8.0 years

0 Lacs

India

Remote

NOTE: Timings: 4-6 hours overlap with GMT (Arabian Standard Time)

Position 1: MLOps Engineer
Experience: 5–8 years
Contract Duration: 6 months (extendable)
Budget: 1 Lakh / Month fixed
Location: Remote

Must-Have Skills (5–8 Years):
Strong experience with MLOps tools and frameworks (e.g., MLflow, Kubeflow, SageMaker)
Proficiency in CI/CD pipeline creation for ML workflows
Hands-on experience with containerization tools such as Docker and orchestration using Kubernetes
Good understanding of cloud platforms (AWS/GCP/Azure) for deploying and managing ML models
Expertise in monitoring, logging, and model versioning
Familiarity with data pipeline orchestration tools (Airflow, Prefect, etc.); an Airflow retraining sketch follows this listing
Strong Python programming skills and experience with ML libraries (scikit-learn, TensorFlow, PyTorch, etc.)
Knowledge of model performance tuning and retraining strategies
Ability to collaborate closely with Data Scientists, DevOps, and Engineering teams
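The role above pairs workflow orchestration (e.g., Airflow) with retraining strategies. Purely as a generic illustration (task bodies are placeholders, the DAG name is hypothetical, and the `schedule` argument assumes Airflow 2.4 or newer), a weekly retraining pipeline could be sketched like this:

```python
# Hedged sketch: a weekly retraining DAG in Apache Airflow (2.4+ assumed).
# Task bodies are placeholders; a real pipeline would pull data, train, evaluate,
# and register the model (e.g., in MLflow) inside these functions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_features():
    print("pulling and validating training data...")        # placeholder

def train_model():
    print("training and logging the candidate model...")    # placeholder

def evaluate_and_register():
    print("comparing against the current model and promoting if better...")  # placeholder

with DAG(
    dag_id="weekly_model_retraining",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@weekly",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_features", python_callable=extract_features)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    register = PythonOperator(task_id="evaluate_and_register", python_callable=evaluate_and_register)

    extract >> train >> register
```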

Posted 1 week ago

Apply

3.0 years

0 Lacs

New Delhi, Delhi, India

On-site

We are looking for a skilled and driven Machine Learning Engineer with 3+ years of experience to join our core AI development team. The role involves building and optimising machine learning models, developing data pipelines, and contributing to AI-powered features used in financial crime detection, document intelligence, and compliance analytics. This is a high-impact role for someone passionate about applying machine learning in real-world applications and eager to grow in a fast-paced environment.

Responsibilities
Build, test, and deploy machine learning models for various data-driven use cases
Develop scalable data preprocessing and feature engineering pipelines (a pipeline sketch follows this listing)
Collaborate with product and engineering teams to integrate AI functionalities into production systems
Analyze model performance, run experiments, and fine-tune algorithms for accuracy and speed
Contribute to maintaining version-controlled ML workflows and reusable modules
Assist in preparing training datasets from structured and unstructured sources
Support model monitoring, feedback loop integration, and continuous improvements

Technical Skills Required
Strong foundation in Python and applied machine learning
Experience with one or more ML frameworks such as scikit-learn, TensorFlow, or PyTorch
Understanding of supervised, unsupervised, and basic NLP techniques
Familiarity with building data pipelines and working with large datasets (Pandas, NumPy, SQL)
Exposure to model deployment concepts, APIs, and inference
Basic knowledge of Git, Docker, and cloud environments (AWS/GCP/Azure)

Preferred (Nice to Have)
Experience with document data (PDFs, scanned files) or OCR tools
Exposure to transformer-based models or LLMs
Familiarity with ML lifecycle tools (MLflow, DVC, Weights & Biases)
Prior experience in fintech, risk analysis, or fraud detection systems

Qualifications
Bachelor’s or Master’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field
3+ years of hands-on experience in ML/AI development roles

Compensation & Growth
Competitive compensation aligned with industry benchmarks
ESOPs and performance bonuses for high performers
Exposure to real-world AI use cases with domain experts in fintech, compliance, and GenAI
Opportunity to grow into senior research or MLOps roles based on performance and interest
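The responsibilities above emphasize preprocessing and feature engineering pipelines. As a hedged scikit-learn sketch only (the column names and toy data are invented; they are not tied to this employer's domain), preprocessing and a classifier can be bundled into one reusable pipeline like this:

```python
# Hedged sketch: preprocessing + model bundled in a single scikit-learn Pipeline.
# Column names and data are invented for illustration.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({
    "amount": [120.5, 899.0, 35.2, 5400.0],
    "channel": ["web", "branch", "web", "mobile"],
    "is_flagged": [0, 1, 0, 1],
})

numeric_cols = ["amount"]
categorical_cols = ["channel"]

preprocess = ColumnTransformer([
    ("num", StandardScaler(), numeric_cols),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_cols),
])

pipeline = Pipeline([
    ("preprocess", preprocess),
    ("clf", LogisticRegression(max_iter=1000)),
])

pipeline.fit(df[numeric_cols + categorical_cols], df["is_flagged"])
print(pipeline.predict(df[numeric_cols + categorical_cols]))
```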

Posted 1 week ago

Apply