
1726 MLflow Jobs - Page 22

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

2.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: Artificial Intelligence Engineer Location: Noida (In-Office) Employment Type: Full-time, Permanent Job ID: AI-007 About Us We are a cybersecurity product company building next-generation platforms for threat detection, malware sandboxing, SIEM, and real-time telemetry analytics. As we scale the intelligence layer across our ecosystem, we are looking for an AI Developer to lead the design, development, and integration of AI-driven services into our products. Key Responsibilities Design and develop AI/ML-based services for cybersecurity applications such as threat detection, malware classification, behavior analysis, log correlation, and anomaly detection. Seamlessly integrate AI solutions into existing platforms including our sandbox system, SIEM, and T-Sense telemetry engine. Build, deploy, and maintain AI microservices/APIs using tools like FastAPI or Flask for scalable inference and automation. Fine-tune or train models using real-world security datasets; optimize performance for production environments. Leverage LLMs, vector search, or prompt-engineered workflows where applicable to enhance product capabilities. Collaborate closely with product managers, backend developers, and security researchers to map real use-cases into applied AI features. Stay current with advancements in AI/ML, especially in the context of security and adversarial defense. Required Skills Proficiency in Python and core AI/ML libraries: TensorFlow, PyTorch, scikit-learn, HuggingFace, etc. Experience building, training, and serving machine learning models in production. Solid understanding of cybersecurity concepts, logs, behavioral analysis, or threat classification. Hands-on experience with LLM APIs, embeddings, and integration of vector databases (e.g., FAISS, Weaviate, Pinecone). Ability to build AI solutions with Docker and REST APIs, and deploy them in real-time environments. Comfortable converting AI prototypes into modular, testable, and scalable backend services.
Nice-to-Have Past experience developing custom AI models for niche or production use-cases. Familiarity with SIEM platforms, malware datasets, log pipelines, or SOC workflows. Experience with MLOps tools like MLflow, DVC, or Kubeflow. Open-source contributions or public AI/security projects are a big plus. Preferred Experience 2+ years in applied AI/ML development, preferably with exposure to cybersecurity products. Bachelor's or Master's degree in Computer Science, Data Science, AI, or a related field. Portfolio, GitHub, or case studies demonstrating applied AI projects or systems. Why Join Us Work on cutting-edge AI systems in a cybersecurity-first environment. Opportunity to influence and own AI components across multiple products. Solve complex, real-world threats using AI at scale. In-office collaboration with a focused product-engineering team based in Noida. Competitive salary, learning budget, product ownership, and growth opportunities. How to Apply Fill in the application form to apply: https://forms.gle/yFSoycUj46SDFjuq8
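The embedding and vector-database skills this posting lists (FAISS, Weaviate, Pinecone) come down to nearest-neighbour search over embedding vectors. Below is a toy, dependency-free sketch of that retrieval step; the document names and vectors are invented for illustration, and a real system would obtain embeddings from a model and store them in an actual vector database rather than hand-written lists:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query, index, k=2):
    # Rank stored (doc_id, vector) pairs by similarity to the query vector.
    scored = sorted(index, key=lambda item: cosine(query, item[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Toy "embeddings" -- in practice these come from an embedding model.
index = [
    ("benign_log", [0.9, 0.1, 0.0]),
    ("malware_sample", [0.1, 0.9, 0.2]),
    ("phishing_mail", [0.2, 0.8, 0.1]),
]
print(top_k([0.15, 0.85, 0.15], index))  # → ['malware_sample', 'phishing_mail']
```

A production setup would swap the linear scan for an approximate-nearest-neighbour index, which is exactly what stores like FAISS provide.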

Posted 2 weeks ago

Apply

5.0 - 10.0 years

25 - 30 Lacs

Chennai

Work from Office

Job Summary We are seeking a strategic and innovative Senior Data Scientist to join our high-performing Data Science team. In this role, you will lead the design, development, and deployment of advanced analytics and machine learning solutions that directly impact business outcomes. You will collaborate cross-functionally with product, engineering, and business teams to translate complex data into actionable insights and data products. Key Responsibilities Lead and execute end-to-end data science projects, encompassing problem definition, data exploration, model creation, assessment, and deployment. Develop and deploy predictive models, optimization techniques, and statistical analyses to address tangible business needs. Articulate complex findings through clear and persuasive storytelling for both technical experts and non-technical stakeholders. Spearhead experimentation methodologies, such as A/B testing, to enhance product features and overall business outcomes. Partner with data engineering teams to establish dependable and scalable data infrastructure and production-ready models. Guide and mentor junior data scientists, while also fostering team best practices and contributing to research endeavors. Required Qualifications & Skills: Master's or PhD in Computer Science, Statistics, Mathematics, or a related field. 5+ years of practical experience in data science, including deploying models to production. Expertise in Python and SQL. Solid background in ML frameworks such as scikit-learn, TensorFlow, and PyTorch. Competence in data visualization tools like Tableau, Power BI, and matplotlib. Comprehensive knowledge of statistics, machine learning principles, and experimental design. Experience with cloud platforms (AWS, GCP, or Azure) and Git for version control. Exposure to MLOps tools and methodologies (e.g., MLflow, Kubeflow, Docker, CI/CD).
Familiarity with NLP, time series forecasting, or recommendation systems is a plus. Knowledge of big data technologies (Spark, Hive, Presto) is desirable. Timings: 1:00 pm to 10:00 pm (IST). Work Mode: WFO (Mon-Fri)
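On the A/B testing responsibility above: the core computation behind a simple conversion-rate comparison is a two-proportion z-test. A minimal sketch with invented numbers (2,000 users per variant, 10% vs 13% conversion); real experiments also need pre-registered sample sizes and guardrail metrics:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Z-statistic for the difference in conversion rates of two variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = two_proportion_ztest(conv_a=200, n_a=2000, conv_b=260, n_b=2000)
print(round(z, 2))  # → 2.97
```

A |z| above 1.96 corresponds to significance at the 5% level for a two-sided test, so this invented uplift would be declared significant.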

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Key Responsibilities Develop and manage end-to-end ML pipelines from training to production. Automate model training, validation, and deployment using CI/CD. Ensure scalability and reliability of AI systems with Docker & Kubernetes. Optimize ML model performance for low latency and high availability. Design and maintain cloud-based/hybrid ML infrastructure. Implement monitoring, logging, and alerting for deployed models. Ensure security, compliance, and governance in AI model deployments. Collaborate with Data Scientists, Engineers, and Product Managers. Define and enforce MLOps best practices (versioning, reproducibility, rollback). Maintain model registry and conduct periodic pipeline reviews. Experience & Skills 5+ years in MLOps, DevOps, or Cloud Engineering. Strong experience in ML model deployment, automation, and monitoring. Proficiency in Kubernetes, Docker, Terraform, and cloud platforms (AWS, Azure, GCP). Hands-on with CI/CD tools (GitHub Actions, Jenkins). Expertise in ML frameworks (TensorFlow, PyTorch, MLflow, Kubeflow). Understanding of APIs, microservices, and infrastructure-as-code. Experience with monitoring tools (Prometheus, Grafana, ELK, Datadog). Strong analytical and debugging skills. Preferred: Cloud or MLOps certifications, real-time ML, ETL, and AI ethics knowledge. Tools & Technologies Cloud & Infrastructure: AWS, GCP, Azure, Terraform, Kubernetes, Docker. MLOps & Model Management: MLflow, Kubeflow, TFX, SageMaker. CI/CD & Automation: GitHub Actions, Jenkins, ArgoCD, Airflow. Monitoring & Logging: Prometheus, Grafana, ELK Stack, Datadog. Collaboration & Documentation: Slack, Confluence, JIRA, Notion. Why Join Yavar? Join us at an exciting growth phase, working with cutting-edge AI technology and talented teams. This role offers competitive compensation, equity participation, and the opportunity to shape a world-class engineering organization. 
Your leadership will be crucial in turning our innovative vision into reality, creating products that reshape how enterprises harness artificial intelligence. *At Yavar, talent knows no boundaries. While experience matters, we value your drive for excellence and ability to execute. Ready to build the future of enterprise AI? We're eager to start a conversation.* To apply, please contact: digital@yavar.ai Location: Chennai / Coimbatore.
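The MLOps best practices this posting names (versioning, reproducibility, rollback, maintaining a model registry) can be illustrated with a deliberately simplified in-memory registry. `ModelRegistry` and its methods are invented for this sketch; real registries such as MLflow's persist artifacts, metadata, and stage transitions rather than holding callables in memory:

```python
class ModelRegistry:
    """Minimal in-memory sketch of model versioning with promote/rollback."""

    def __init__(self):
        self._versions = []   # append-only list of models (version N = index N-1)
        self._current = None  # index of the version currently serving

    def register(self, model):
        # Registering never mutates old versions, so history is reproducible.
        self._versions.append(model)
        return len(self._versions)  # 1-based version number

    def promote(self, version):
        self._current = version - 1

    def rollback(self):
        # Step back one version; no-op if already at the first version.
        if self._current and self._current > 0:
            self._current -= 1

    def serve(self, x):
        return self._versions[self._current](x)

reg = ModelRegistry()
reg.promote(reg.register(lambda x: x * 2))  # v1 goes live
reg.promote(reg.register(lambda x: x * 3))  # v2 goes live
reg.rollback()                              # v2 misbehaves: back to v1
print(reg.serve(10))  # → 20
```

The append-only version list is the design point: a rollback is just a pointer move, never a redeploy of mutated state.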

Posted 2 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

FEQ326R365 As Manager, Field Engineering (Data & AI) for Strategic Accounts, you will lead a team of Solutions Architects (Data & AI) focusing on large enterprise and strategic customers across Europe, with a presence in India. This is a unique role where you will help to build and lead our technical pre-sales team across India that is dedicated to supporting customers headquartered in Europe and the UK. Leading a team in India will require significant collaboration and partnership with teams in the UK, France, Germany and the rest of Europe. Your experience partnering with the sales organisation will help close revenue on opportunities of $1M+ ARR with the right approach, whilst coaching new sales and pre-sales team members to work together. You will guide and get involved to enhance your team's effectiveness; be an expert at positioning and articulating business-value focused solutions to our customers and prospects; support various stages of the sales cycles; and build relationships with key stakeholders in large corporations. The Impact You Will Have Manage hiring, building the Pre-Sales team consisting of Solutions Architects in the Data & AI domain. Rapidly scale the designated Field Engineering segment organisation without sacrificing calibre. Build a collaborative culture within a rapid-growth team. Embody and promote Databricks' customer-obsessed, team-oriented and diverse culture. Support increasing Return on Investment of SA involvement in sales cycles by 2-3x over 18 months. Promote a solution and value-based selling field-engineering organisation. Coach and mentor the Solutions Architect team to understand our customers’ business needs and identify revenue potential in their accounts. Interface with leadership & C-suite stakeholders at strategic customers in the assigned region to position the strength of Databricks, the comprehensive solutions strategy, and build trust and credibility in the account.
Build Databricks' brand in India in partnership with the Marketing and Sales teams. Bring the experience, priorities, and takeaways of the field engineering team to the planning and strategy roadmap of the organisation. What We Look For Proven experience in successfully building and managing a presales team. Technical or consulting background in Data Engineering, database technologies, or Data Science. Proven experience in driving strategic planning and accurately forecasting sales trends in a consumption-driven business. Ability to partner and collaborate with Sales and other cross-functional leaders. Success in instituting processes for technical field members to drive efficiency and data-driven innovation. Passion for the data and AI market and cloud software models, and the ability to deliver a strong POV on the value of Databricks solutions. About Databricks Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks. Our Commitment to Diversity and Inclusion At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.
Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics. Compliance If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

GAQ326R278 We are looking for an experienced Cash Application Analyst to manage and optimize the cash application process at our tech company, which operates on a usage-based billing model. The ideal candidate will ensure the accurate and timely application of customer payments to invoices, contribute to efficient cash flow management, and collaborate with key stakeholders to enhance processes. This role requires a meticulous professional with strong analytical skills, experience in high-volume cash application environments, and an ability to navigate a fast-paced environment. This position will be catering to the US (EST) timezone. The Impact You Will Have Oversee the daily cash application process, ensuring customer payments are accurately and promptly applied to invoices. Monitor and reconcile incoming payments across multiple payment channels, including ACH, wire transfers, credit card transactions, and checks. Collaborate closely with the billing and collections teams to resolve discrepancies and support seamless end-to-end cash management. Develop and implement cash application policies and procedures that align with the unique aspects of a usage-based billing system. Identify and address payment discrepancies, customer account issues, and unapplied cash, facilitating timely resolutions. Maintain and update comprehensive documentation of cash application processes and customer payment records. Generate reports on cash application metrics, providing actionable insights to senior finance leadership. Participate in system enhancements and software implementations to improve cash application automation and efficiency. Liaise with the customer support team to handle customer inquiries related to payments and account reconciliations. Drive continuous process improvements and leverage technology to enhance accuracy, reduce processing times, and streamline operations. Ensure compliance with company policies and relevant financial regulations. What We Look For Bachelor’s
degree in Finance, Accounting, Business Administration, or a related field preferred. Minimum of 4 years of experience in cash application, accounts receivable, or related financial operations. Strong understanding of usage-based billing models and associated financial processes. Must have in-depth NetSuite experience and advanced Excel skills. SaaS cash application experience preferred. Excellent attention to detail and problem-solving skills. Proficiency with ERP systems and payment processing platforms. Strong analytical skills with the ability to interpret data and generate reports. Effective communication and interpersonal skills for collaboration with cross-functional teams. Ability to operate in a fast-paced environment with tight deadlines. Experience in handling complex reconciliations and certifications. This role will be in EST hours - 6pm IST onwards.

Posted 2 weeks ago

Apply

5.0 years

15 - 20 Lacs

Thiruvananthapuram Taluk, India

Remote

Are you passionate about building AI systems that create real-world impact? We are hiring a Senior AI Engineer with 5+ years of experience to design, develop, and deploy cutting-edge AI/ML solutions. 📍 Location: [Trivandrum / Kochi / Remote – customize based on your need] 💼 Experience: 5+ years 💰 Salary: ₹15–20 LPA 🚀 Immediate Joiners Preferred 🔧 What You’ll Do Design and implement ML/DL models for real business problems Build data pipelines and perform preprocessing for large datasets Use advanced techniques like NLP, computer vision, reinforcement learning Deploy AI models using MLOps best practices Collaborate with data scientists, developers & product teams Stay ahead of the curve with the latest research and tools ✅ What We’re Looking For 5+ years of hands-on AI/ML development experience Strong in Python, with experience in TensorFlow, PyTorch, Scikit-learn, Hugging Face Knowledge of NLP, CV, DL architectures (CNNs, RNNs, Transformers) Experience with cloud platforms (AWS/GCP/Azure) and AI services Solid grasp of MLOps, model versioning, deployment, monitoring Strong problem-solving, communication, and mentoring skills 💻 Tech Stack You’ll Work With Languages: Python, SQL Libraries: TensorFlow, PyTorch, Keras, Transformers, Scikit-learn Tools: Git, Docker, Kubernetes, MLflow, Airflow Platforms: AWS, GCP, Azure, Vertex AI, SageMaker Skills: cloud platforms (AWS, GCP, Azure), Docker, computer vision, Git, PyTorch, Airflow, Hugging Face, NLP, ML, AI, deep learning, Kubernetes, MLflow, MLOps, TensorFlow, scikit-learn, Python, machine learning

Posted 2 weeks ago

Apply

4.0 - 9.0 years

0 - 1 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Hi, please find the JD below and send me your updated resume. Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field. 3+ years of experience in MLOps, DevOps, or ML Engineering roles. Strong experience with containerization (Docker) and orchestration (Kubernetes). Proficiency in Python and experience working with ML libraries like TensorFlow, PyTorch, or scikit-learn. Familiarity with ML pipeline tools such as MLflow, Kubeflow, TFX, Airflow, or SageMaker Pipelines. Hands-on experience with cloud platforms (AWS, GCP, Azure) and infrastructure-as-code tools (Terraform, CloudFormation). Solid understanding of CI/CD principles, especially as applied to machine learning workflows. Nice-to-Have Experience with feature stores, model registries, and metadata tracking. Familiarity with data versioning tools like DVC or LakeFS. Exposure to data observability and monitoring tools. Knowledge of responsible AI practices including fairness, bias detection, and explainability.

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Job Title: Python Developer (AI/ML Projects – 5–7 Years Experience) Location: Onsite – Indore, India Job Type: Full-time Experience Required: 5 to 7 years Notice Period: Immediate Joiners Preferred About the Role: We are seeking an experienced Python Developer with 5–7 years of professional experience, including hands-on project work in Artificial Intelligence (AI) and Machine Learning (ML). The ideal candidate should have strong backend development skills along with a solid foundation in AI/ML, capable of designing scalable solutions and deploying intelligent systems. Key Responsibilities: Design, develop, and maintain backend applications using Python. Build and integrate RESTful APIs and third-party services. Work on AI/ML projects including model development, training, deployment, and performance tuning. Collaborate with Data Scientists and ML Engineers to implement and productionize machine learning models. Manage data pipelines and model lifecycle using tools like MLflow or similar. Write clean, testable, and efficient code using Python best practices. Work with relational and NoSQL databases such as PostgreSQL, MySQL, MongoDB, etc. Participate in code reviews, architecture discussions, and agile ceremonies. Required Skills & Experience: 5–7 years of hands-on Python development experience. Strong experience with frameworks such as Django, Flask, or FastAPI. Proven track record of working on AI/ML projects (end-to-end model lifecycle). Good understanding of machine learning libraries like Scikit-learn, TensorFlow, Keras, PyTorch, etc. Experience with data preprocessing, model training, evaluation, and deployment. Familiarity with data handling tools: Pandas, NumPy, etc. Working knowledge of REST API development and integration. Experience with Docker, Git, CI/CD, and cloud platforms (AWS/GCP/Azure). Familiarity with databases – SQL and NoSQL. Experience with model tracking tools like MLflow or DVC is a plus.
Preferred Qualifications: Bachelor's or Master’s degree in Computer Science, Engineering, or a related field. Experience with cloud-based AI/ML services (AWS SageMaker, Azure ML, GCP AI Platform). Exposure to MLOps practices and tools is highly desirable. Understanding of NLP, Computer Vision, or Generative AI concepts is a plus.

Posted 2 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

About Adsparkx Adsparkx is a leading Global Performance Marketing Agency headquartered in India. We have been empowering brands since 2014, helping them acquire high-quality and engaged users globally via data-driven decisions. We are innovators, hustlers and ad-tech experts who function with the belief of catalyzing a disruptive change in the industry by providing empowered and customized digital experiences to consumers/brands. Adsparkx unlocks the full potential of your business with its diligent workforce, catering to worldwide clients in their time zones. We operate globally and have offices in Gurgaon, Chandigarh, Singapore and the US. We value partnerships and have maintained sustainable relationships with reputed brands, shaping their success stories through services like Affiliate Marketing, Branding, E-commerce, Lead Generation, and Programmatic Media Buying. We have helped navigate over 200 brands to success. Our clientele includes names like Assurance IQ, Inc, Booking.com, Groupon, etc. If you wish to change the game of your brand, visit us here: https://adsparkx.com/ Job Title: AI Engineer Location: Gurugram, Haryana Employment Type: Full-Time Experience Required: 3-5 Years Objective Of The Role We are seeking a highly skilled AI Engineer who will be responsible for building, testing, and maintaining robust, scalable, and secure web applications. The ideal candidate will have strong expertise in Python and Django, with additional exposure to machine learning, generative AI frameworks, and modern deep learning architectures. This role involves optimizing performance, ensuring security, working with APIs, and collaborating closely with cross-functional teams to deliver high-quality backend solutions. Key Responsibilities 3-6 years of hands-on experience in Python. Design, develop, and maintain backend services and RESTful APIs using Django or Django REST Framework.
Work with third-party APIs and external services to ensure smooth data integration. Optimize application performance and implement robust security practices. Design scalable and efficient data models; work with relational and NoSQL databases. Implement and maintain CI/CD pipelines using tools like Docker, Git, Jenkins, or GitHub Actions. Collaborate with front-end developers, DevOps engineers, and product managers to deliver end-to-end solutions. Integrate and deploy ML models and AI features in production environments (a strong plus). Write clean, modular, and testable code following best practices. Troubleshoot, debug, and upgrade existing systems. Required Skills And Qualifications Strong proficiency in Python and the Django framework. Experience with PostgreSQL, MongoDB, or MySQL. Familiarity with Docker, Gunicorn, Nginx, and CI/CD pipelines. Experience with machine learning and deep learning concepts. Exposure to Generative AI, Transformers, Agentic Frameworks, and Fine-Tuning techniques. Hands-on experience with PyTorch or TensorFlow (PyTorch preferred). Ability to translate ML/AI solutions into production-ready APIs or services. Strong problem-solving and debugging skills. Nice To Have Knowledge of FastAPI or Flask. Experience deploying models via TorchServe or ONNX. Familiarity with MLOps practices and tools like MLflow, DVC, or SageMaker. If you're passionate about backend development and excited to work at the intersection of software engineering and AI innovation, we’d love to hear from you.
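On "translate ML/AI solutions into production-ready APIs or services": in this stack that would typically be a Django REST Framework or FastAPI view wrapping a loaded model. As a framework-free sketch using only the Python standard library, with `predict` as an invented stub standing in for a real trained model:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    # Stand-in for a real model; a production service would load a
    # serialized model once at startup and call it here.
    return {"score": sum(features) / len(features)}

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        result = predict(json.loads(body)["features"])
        payload = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

# Bind an ephemeral port and serve from a background thread.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}",
    data=json.dumps({"features": [1, 2, 3]}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    response = json.loads(resp.read())
server.shutdown()
print(response)  # → {'score': 2.0}
```

The same request/response contract carries over unchanged when the handler is replaced by a DRF or FastAPI view behind Gunicorn and Nginx.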

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu

Remote

Title: Senior AI Developer Years of Experience: 8+ years *Location: The selected candidate is required to work onsite at our Chennai location for the initial three-month project training and execution period. After the three months, the candidate will be offered remote opportunities.* Job Description The Senior AI Developer will be responsible for designing, building, training, and deploying advanced artificial intelligence and machine learning models to solve complex business challenges across industries. This role demands a strategic thinker and hands-on practitioner who can work at the intersection of data science, software engineering, and innovation. The candidate will contribute to scalable production-grade AI pipelines and mentor junior AI engineers within the Center of Excellence (CoE). Key responsibilities · Design, train, and fine-tune deep learning models (NLP, CV, LLMs, GANs) for high-value applications · Architect AI model pipelines and implement scalable inference engines in cloud-native environments · Collaborate with data scientists, engineers, and solution architects to productionize ML prototypes · Evaluate and integrate pre-trained models like GPT-4o, Gemini, Claude, and fine-tune based on domain needs · Optimize algorithms for real-time performance, efficiency, and fairness · Write modular, maintainable code and perform rigorous unit testing and validation · Contribute to AI codebase management, CI/CD, and automated retraining infrastructure · Research emerging AI trends and propose innovative applications aligned with business objectives Technical Skills · Expert in Python, PyTorch, TensorFlow, Scikit-learn, Hugging Face Transformers · LLM deployment & tuning: OpenAI (GPT), Google Gemini, Claude, Falcon, Mistral · Experience with RESTful APIs, Flask/FastAPI for AI service exposure · Proficient in Azure Machine Learning, Databricks, MLflow, Docker, Kubernetes · Hands-on experience with vector databases, prompt engineering, and
retrieval-augmented generation (RAG) · Knowledge of Responsible AI frameworks (bias detection, fairness, explainability) Qualification · Master’s in Artificial Intelligence, Machine Learning, Data Science, or Computer Engineering · Certifications in AI/ML (e.g., Microsoft Azure AI Engineer, Google Professional ML Engineer) preferred · Demonstrated success in building scalable AI applications in production environments · Publications or contributions to open-source AI/ML projects are a plus Job Types: Full-time, Permanent Work Location: Hybrid remote in Chennai, Tamil Nadu Expected Start Date: 14/07/2025

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Education: Bachelor's degree in Computer Science, Data Science, Artificial Intelligence, or a related field, with certifications or experience in NLP & CV. Years of Experience: Minimum 2 years of experience in Deep Learning, NLP, CV, MLOps and related technologies. Responsibilities: - Design, develop, and deploy state-of-the-art NLP and CV models and algorithms - Collaborate with cross-functional teams to understand requirements and develop customised NLP & CV solutions, with experience in building Python backends using Flask / Django. - Database integration, preferably using MongoDB or other vector databases. - Maintain and improve the performance, accuracy, and efficiency of existing AI/ML models and their deployment on Cloud platforms (AWS), and monitor their performance using MLOps tools such as MLflow and DVC. - Experience in building end-to-end data pipelines - Stay updated with emerging AI/ML technologies, LLMs, RAG - Conduct regular performance evaluations of AI/ML models using production-grade MLOps solutions. - Troubleshoot and resolve any issues arising from the implementation of NLP & CV models. - Develop and monitor the code that runs in production environments using MLOps practices. Requirements: - Strong experience with Deep Learning and NLP frameworks such as TensorFlow or other open-source machine learning frameworks. - Experience using both the TensorFlow and PyTorch frameworks - Proficient in programming languages such as Python or Java, and experience with AI/ML libraries. - Familiarity with the integration of APIs, such as REST APIs and the OpenAI API, for implementing advanced AI-driven features. - Solid understanding of Machine Learning and Deep Learning algorithms, concepts, and best practices in a production environment using MLOps. - Experience with big data technologies, such as Hadoop and Spark, is a plus. - Strong problem-solving skills.
- Excellent communication and teamwork skills, with the ability to collaborate effectively with team members from various disciplines. - Eagerness to learn and adapt to new technologies and industry trends.
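The production-monitoring duties this posting lists can be reduced to a toy rolling-drift check: flag when a model's recent outputs drift from a training-time baseline. `DriftMonitor`, its window, and the threshold are all invented for illustration; a real deployment would export such signals to monitoring tooling rather than return a boolean:

```python
from collections import deque

class DriftMonitor:
    """Toy sketch of model monitoring: alert when the rolling mean of a
    model's output drifts beyond a tolerance from a baseline."""

    def __init__(self, baseline_mean, window=100, tolerance=0.1):
        self.baseline = baseline_mean
        self.window = deque(maxlen=window)  # keeps only the last N outputs
        self.tolerance = tolerance

    def observe(self, value):
        self.window.append(value)
        rolling = sum(self.window) / len(self.window)
        return abs(rolling - self.baseline) > self.tolerance  # True = alert

mon = DriftMonitor(baseline_mean=0.5, window=5, tolerance=0.1)
alerts = [mon.observe(v) for v in [0.5, 0.52, 0.48, 0.92, 0.95]]
print(alerts)  # → [False, False, False, True, True]
```

The rolling window is what makes the check react to a sustained shift rather than a single outlier; production systems layer more robust statistics (e.g. population stability index) on the same idea.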

Posted 2 weeks ago

Apply

0.0 - 6.0 years

0 Lacs

Gurugram, Haryana

On-site

About Adsparkx: Adsparkx is a leading Global Performance Marketing Agency headquartered in India. We have been empowering brands since 2014, helping them acquire high-quality and engaged users globally via data-driven decisions. We are innovators, hustlers and ad-tech experts who function with the belief of catalyzing a disruptive change in the industry by providing empowered and customized digital experiences to consumers/brands. Adsparkx unlocks the full potential of your business with its diligent workforce, catering to worldwide clients in their time zones. We operate globally and have offices in Gurgaon, Chandigarh, Singapore and the US. We value partnerships and have maintained sustainable relationships with reputed brands, shaping their success stories through services like Affiliate Marketing, Branding, E-commerce, Lead Generation, and Programmatic Media Buying. We have helped navigate over 200 brands to success. Our clientele includes names like Assurance IQ, Inc, Booking.com, Groupon, etc. If you wish to change the game of your brand, visit us here: https://adsparkx.com Job Title: AI Engineer Location: Gurugram, Haryana Employment Type: Full-Time Experience Required: 3-5 Years Objective of the Role: We are seeking a highly skilled AI Engineer who will be responsible for building, testing, and maintaining robust, scalable, and secure web applications. The ideal candidate will have strong expertise in Python and Django, with additional exposure to machine learning, generative AI frameworks, and modern deep learning architectures. This role involves optimizing performance, ensuring security, working with APIs, and collaborating closely with cross-functional teams to deliver high-quality backend solutions. Key Responsibilities: 3-6 years of hands-on experience in Python. Design, develop, and maintain backend services and RESTful APIs using Django or Django REST Framework.
Work with third-party APIs and external services to ensure smooth data integration. Optimize application performance and implement robust security practices. Design scalable and efficient data models; work with relational and NoSQL databases. Implement and maintain CI/CD pipelines using tools like Docker, Git, Jenkins, or GitHub Actions. Collaborate with front-end developers, DevOps engineers, and product managers to deliver end-to-end solutions. Integrate and deploy ML models and AI features in production environments (a strong plus). Write clean, modular, and testable code following best practices. Troubleshoot, debug, and upgrade existing systems.

Required Skills and Qualifications: Strong proficiency in Python and the Django framework. Experience with PostgreSQL, MongoDB, or MySQL. Familiarity with Docker, Gunicorn, Nginx, and CI/CD pipelines. Experience with machine learning and deep learning concepts. Exposure to Generative AI, Transformers, Agentic Frameworks, and Fine-Tuning techniques. Hands-on experience with PyTorch or TensorFlow (PyTorch preferred). Ability to translate ML/AI solutions into production-ready APIs or services. Strong problem-solving and debugging skills.

Nice to Have: Knowledge of FastAPI or Flask. Experience deploying models via TorchServe or ONNX. Familiarity with MLOps practices and tools like MLflow, DVC, or SageMaker.

If you're passionate about backend development and excited to work at the intersection of software engineering and AI innovation, we'd love to hear from you.
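The posting above asks for ML/AI solutions translated into modular, testable, production-ready APIs. One way to read that requirement: keep the model logic behind a plain function that a Django/DRF (or Flask) view merely delegates to. The sketch below is framework-agnostic and purely illustrative; the feature names, weights, and `predict_handler` name are invented, and a real service would load an actual trained model from disk rather than hard-code one.

```python
import json
import math

# Hypothetical stand-in for a real trained model loaded from disk
# (e.g. via joblib or torch.load); feature names and weights are invented.
WEIGHTS = {"failed_logins": 1.2, "bytes_out": 0.8}
BIAS = -1.0

def score(features):
    """Linear score squashed through a sigmoid into a probability-like value."""
    z = BIAS + sum(w * features.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def predict_handler(request_body: str) -> str:
    """JSON request in, JSON response out. A web-framework view would handle
    the HTTP layer and delegate here, keeping the model logic modular and
    unit-testable without spinning up the framework itself."""
    try:
        features = json.loads(request_body)
    except json.JSONDecodeError:
        return json.dumps({"error": "invalid JSON"})
    return json.dumps({"probability": round(score(features), 4)})
```

Because the handler is a pure function of the request body, it can be unit-tested directly, which is the "clean, modular, and testable" property the posting calls for.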

Posted 2 weeks ago

Apply

0 years

0 Lacs

Greater Kolkata Area

On-site

Job Summary We are seeking a forward-thinking AI Architect to design, lead, and scale enterprise-grade AI systems and solutions across domains. This role demands deep expertise in machine learning, generative AI, data engineering, cloud-native architecture, and orchestration frameworks. You will collaborate with cross-functional teams to translate business requirements into intelligent, production-ready AI solutions. Key Responsibilities Architecture & Strategy : Design end-to-end AI architectures that include data pipelines, model development, MLOps, and inference serving. Create scalable, reusable, and modular AI components for different use cases (vision, NLP, time series, etc.). Drive architecture decisions across AI solutions, including multi-modal models, LLMs, and agentic workflows. Ensure interoperability of AI systems across cloud (AWS/GCP/Azure), edge, and hybrid environments. Technical Leadership Guide teams in selecting appropriate models (traditional ML, deep learning, transformers, etc.) and technologies. Lead architectural reviews and ensure compliance with security, performance, and governance policies. Mentor engineering and data science teams in best practices for AI/ML, GenAI, and MLOps. Model Lifecycle & Engineering Oversee implementation of model lifecycle using CI/CD for ML (MLOps) and/or LLMOps workflows. Define architecture for Retrieval Augmented Generation (RAG), vector databases, embeddings, prompt engineering, etc. Design pipelines for fine-tuning, evaluation, monitoring, and retraining of models. Data & Infrastructure Collaborate with data engineers to ensure data quality, feature pipelines, and scalable data stores. Architect systems for synthetic data generation, augmentation, and real-time streaming inputs. Define solutions leveraging data lakes, data warehouses, and graph databases. Client Engagement / Product Integration Interface with business/product stakeholders to align AI strategy with KPIs. 
Collaborate with DevOps teams to integrate models into products via APIs/microservices. Required Skills & Experience Core Skills : Strong foundation in AI/ML/DL (Scikit-learn, TensorFlow, PyTorch, Transformers, Langchain, etc.) Advanced knowledge of Generative AI (LLMs, diffusion models, multimodal models, etc.) Proficiency in cloud-native architectures (AWS/GCP/Azure) and containerization (Docker, Kubernetes) Experience with orchestration frameworks (Airflow, Ray, LangGraph, or similar) Familiarity with vector databases (Weaviate, Pinecone, FAISS), LLMOps platforms, and RAG design Architecture & Programming Solid experience in architectural patterns (microservices, event-driven, serverless) Proficient in Python and optionally Java/Go Knowledge of APIs (REST, GraphQL), streaming (Kafka), and observability tooling (Prometheus, ELK, Grafana) Tools & Platforms ML lifecycle tools: MLflow, Kubeflow, Vertex AI, Sagemaker, Hugging Face, etc. Prompt orchestration tools: LangChain, CrewAI, Semantic Kernel, DSPy (nice to have) Knowledge of security, privacy, and compliance (GDPR, SOC2, HIPAA, etc.) (ref:hirist.tech)
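The Retrieval Augmented Generation (RAG) responsibilities above reduce, at their core, to embedding documents and a query, ranking by similarity, and stuffing the top matches into a prompt. A toy sketch of that retrieval step follows; production systems use learned embeddings and a vector database (FAISS, Pinecone, Weaviate), whereas here the tiny 3-dimensional vectors and documents are made up purely for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy corpus: (document, hand-made embedding) pairs. Real embeddings come
# from a model and are stored in a vector database.
CORPUS = [
    ("Reset your password from the account page.", [0.9, 0.1, 0.0]),
    ("Invoices are emailed on the 1st of each month.", [0.1, 0.9, 0.1]),
    ("Enable MFA under security settings.", [0.8, 0.0, 0.3]),
]

def retrieve(query_vec, k=2):
    """Rank documents by cosine similarity to the query embedding, take top-k."""
    ranked = sorted(CORPUS, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def build_prompt(question, query_vec):
    """Stuff the retrieved chunks into the LLM prompt as grounding context."""
    context = "\n".join(retrieve(query_vec))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
```

The same shape scales up: swap the list for a vector index, the hand-made vectors for model embeddings, and send `build_prompt`'s output to an LLM.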

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

kolkata, west bengal

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY-Consulting AI Enabled Automation Developer Staff - Python. We are looking to hire people with strong AI-enabled automation skills who are interested in learning new technologies in the process automation space: Azure, GenAI, large language models (LLMs), RAG, vector databases, graph databases, Python.

Responsibilities: Development and implementation of AI-enabled automation solutions, ensuring alignment with business objectives. Design and deploy Proof of Concepts (POCs) and Points of View (POVs) across various industry verticals, demonstrating the potential of AI-enabled automation applications. Ensure seamless integration of optimized solutions into the overall product or system. Collaborate with cross-functional teams to understand requirements, integrate solutions into cloud environments (Azure, GCP, AWS, etc.)
and ensure they align with business goals and user needs. Educate the team on best practices and keep updated on the latest tech advancements to bring innovative solutions to the project.

Requirements: 2 to 3 years of relevant professional experience. Expertise in Python programming, including experience with AI/machine learning frameworks like TensorFlow, PyTorch, Keras, Langchain, MLflow, Promptflow (good to have). 1-2 years of working knowledge of NLP and LLMs like BERT, GPT-3/4, T5, etc., including how these models work and how to fine-tune them. Expertise in prompt engineering principles and techniques like chain of thought, in-context learning, tree of thought, etc. Knowledge of retrieval augmented generation (RAG). Knowledge of Knowledge Graph RAG. Strong analytical and problem-solving skills with the ability to think critically and troubleshoot issues. Excellent communication skills, both verbal and written, in English.

What we look for: A team of people with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment. An opportunity to be part of a market-leading, multi-disciplinary team of 1,400+ professionals, in the only integrated global transaction business worldwide. Opportunities to work with EY Advisory practices globally with leading businesses across a range of industries.

What working at EY offers: At EY, we're dedicated to helping our clients, from startups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange.
Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around. Opportunities to develop new skills and progress your career. The freedom and flexibility to handle your role in a way that's right for you.

EY | Building a better working world. EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
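The EY requirements above name specific prompt-engineering techniques: chain of thought and in-context learning. Both ultimately come down to prompt construction, which can be sketched with plain string templating. The few-shot examples and function name below are invented for illustration; in practice the assembled prompt would be sent to an LLM API.

```python
# Hypothetical few-shot examples for in-context learning; in a real system
# these would be curated, domain-specific demonstrations.
FEW_SHOT_EXAMPLES = [
    ("Classify: 'login failed 50 times in 1 min'", "suspicious"),
    ("Classify: 'user updated profile photo'", "benign"),
]

def build_cot_prompt(task: str, examples=FEW_SHOT_EXAMPLES) -> str:
    """In-context learning: prepend worked Q/A examples so the model infers
    the task format. Chain of thought: end with a cue asking the model to
    reason step by step before committing to an answer."""
    lines = []
    for question, answer in examples:
        lines.append(f"Q: {question}\nA: {answer}")
    lines.append(f"Q: {task}\nA: Let's think step by step.")
    return "\n\n".join(lines)
```

Techniques like tree of thought extend this idea by branching over several reasoning chains and scoring them, but the prompt-assembly mechanics are the same.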

Posted 2 weeks ago

Apply

16.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities: Business Knowledge: Capable of understanding the requirements for the entire project (not just own features) Capable of working closely with PMG during the design phase to drill down into detailed nuances of the requirements Has the ability and confidence to question the motivation behind certain requirements and work with PMG to refine them Design: Can design and implement machine learning models and algorithms Can articulate and evaluate pros/cons of different AI/ML approaches Can generate cost estimates for model training and deployment Coding/Testing: Builds and optimizes machine learning pipelines. Knows & brings in external ML frameworks and libraries. Consistently avoids common pitfalls in model development and deployment.
Quality: Solves cross-functional problems using data-driven approaches Identifies impacts/side effects of models outside of immediate scope of work Identifies cross-module issues related to data integration and model performance Identifies problems predictively using data analysis Productivity: Capable of working on multiple AI/ML projects simultaneously and context switching between them Process: Enforces process standards for model development and deployment Independence: Acts independently to determine methods and procedures on new or special assignments Prioritizes large tasks and projects effectively Agility: Release Planning: Works with the PO to do high-level release commitment and estimation Works with PO on defining stories of appropriate size for model development Agile Maturity: Able to drive the team to achieve a high level of accomplishment on the committed stories for each iteration Shows Agile leadership qualities and leads by example Team Work: Capable of working with development teams and identifying the right division of technical responsibility based on skill sets Capable of working with external teams (e.g., Support, PO, etc.)
that have significantly different technical skill sets and managing the discussions based on their needs Initiative: Capable of creating innovative AI/ML solutions that may include changes to requirements to create a better solution Capable of thinking outside-the-box to view the system as it should be rather than only how it is Proactively generates a continual stream of ideas and pushes to review and advance ideas if they make sense Takes initiative to learn how AI/ML technology is evolving outside the organization Takes initiative to learn how the system can be improved for the customer Should make problems open new doors for innovations Communication: Communicates complex AI/ML concepts internally with ease Accountability: Well versed in all areas of the AI/ML stack (data preprocessing, model training, evaluation, deployment, etc.) and aware of all components in play Leadership: Disagree without being disagreeable Use conflict as a way to drill deeper and arrive at better decisions Frequent mentorship Builds ad-hoc cross-department teams for specific projects or problems Can achieve broad scope 'buy in' across project teams and across departments Takes calculated risks Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). 
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications B.E/B.Tech/MCA/MSc/MTech (Minimum 16 years of formal education, Correspondence courses are not relevant) 8+ years of experience working on multiple layers of technology Experience deploying and maintaining ML models in production Experience in Agile teams Working experience or good knowledge of cloud platforms (e.g., Azure, AWS, OCI) Experience with one or more data-oriented workflow orchestration frameworks (Airflow, KubeFlow etc.) Design, implement, and maintain CI/CD pipelines for MLOps and DevOps function Familiarity with traditional software monitoring, scaling, and quality management (QMS) Knowledge of model versioning and deployment using tools like MLflow, DVC, or similar platforms Familiarity with data versioning tools (Delta Lake, DVC, LakeFS, etc.) Demonstrate hands-on knowledge of OpenSource adoption and use cases Good understanding of Data/Information security Proficient in Data Structures, ML Algorithms, and ML lifecycle Product/Project/Program Related Tech Stack: Machine Learning Frameworks: Scikit-learn, TensorFlow, PyTorch Programming Languages: Python, R, Java Data Processing: Pandas, NumPy, Spark Visualization: Matplotlib, Seaborn, Plotly Familiarity with model versioning tools (MLFlow, etc.) Cloud Services: Azure ML, AWS SageMaker, Google Cloud AI GenAI: OpenAI, Langchain, RAG etc. Demonstrate good knowledge in Engineering Practices Demonstrates excellent problem-solving skills Proven excellent verbal, written, and interpersonal communication skills At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. 
Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
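The Optum qualifications above repeatedly call for model and data versioning with tools like MLflow, DVC, or LakeFS. The core idea those tools share is content addressing: an artifact's version identifier is derived from a hash of its bytes, so identical content always resolves to the same version. The sketch below illustrates only that idea; it is a simplification with invented class and method names, not any tool's actual API.

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Version id = truncated SHA-256 of the artifact's bytes, so the same
    content always maps to the same version (the content-addressing idea
    behind DVC-style data/model versioning)."""
    return hashlib.sha256(data).hexdigest()[:12]

class ArtifactStore:
    """Toy content-addressed store: push returns a version id; pull by id.
    Real tools persist blobs to object storage and track ids in git."""
    def __init__(self):
        self._blobs = {}

    def push(self, name: str, data: bytes) -> str:
        version = content_hash(data)
        self._blobs[(name, version)] = data
        return version

    def pull(self, name: str, version: str) -> bytes:
        return self._blobs[(name, version)]
```

Because versions are derived from content, re-pushing unchanged model weights is a no-op, and any consumer pinning a version id gets byte-identical artifacts back.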

Posted 2 weeks ago

Apply

0 years

0 Lacs

Budaun Sadar, Uttar Pradesh, India

On-site

MinutestoSeconds is a dynamic organization specializing in outsourcing services, digital marketing, IT recruitment, and custom IT projects. We partner with SMEs, mid-sized companies, and niche professionals to deliver tailored solutions. We would love the opportunity to work with YOU!! Requirements JD: About the Role: We are looking for a highly motivated and innovative AI/ML Engineer to join our growing team. You will play a key role in designing, developing, and deploying machine learning models and AI-driven solutions that solve real-world business problems. This is a hands-on role requiring a deep understanding of ML algorithms, data preprocessing, model optimization, and scalable deployment. Key Responsibilities: Design and implement scalable ML solutions for classification, regression, clustering, and recommendation use cases Collaborate with data scientists, engineers, and product teams to translate business requirements into ML use cases Preprocess large datasets using Python, SQL, and modern ETL tools Train, validate, and optimize machine learning and deep learning models Deploy models using MLOps best practices (CI/CD, model monitoring, versioning) Continuously improve model performance and integrate feedback loops Research and experiment with the latest in AI/ML trends, including GenAI, LLMs, and transformers Document models and solutions for reproducibility and compliance Required Skills: Strong proficiency in Python, with hands-on experience in NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch, etc. 
Solid understanding of supervised and unsupervised learning, NLP, and time-series forecasting Experience with cloud platforms such as AWS, GCP, or Azure (preferred: SageMaker, Vertex AI, or Azure ML Studio) Familiarity with Docker, Kubernetes, and MLOps practices Proficient in writing efficient and production-grade code Excellent problem-solving and critical-thinking skills Good to Have: Experience with LLMs, Generative AI, or OpenAI APIs Exposure to big data frameworks like Spark or Hadoop Knowledge of feature stores, data versioning tools (like DVC or MLflow) Published work, research papers, or contributions to open-source ML projects
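The responsibilities above center on training, validating, and optimizing supervised models. Stripped of frameworks, that loop is just gradient descent on a loss; the from-scratch sketch below fits a one-feature logistic classifier on an invented toy dataset to show the mechanics. Real work would of course use scikit-learn or PyTorch rather than hand-rolled updates.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.5, epochs=200):
    """Per-sample gradient descent on binary cross-entropy for one feature
    plus a bias. For sigmoid + cross-entropy, dLoss/dz simplifies to p - y."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            grad = p - y
            w -= lr * grad * x
            b -= lr * grad
    return w, b

def predict(w, b, x):
    return 1 if sigmoid(w * x + b) >= 0.5 else 0

# Invented, linearly separable toy data: label 1 roughly when x > 2.
xs = [0.0, 1.0, 1.5, 2.5, 3.0, 4.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)
```

The validation step the posting mentions would evaluate `predict` on held-out data rather than the training set, and "optimize" would sweep `lr` and `epochs` (or regularization) against that held-out metric.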

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Job Title: AI/ML Architect / Senior AI/ML Engineer (8+ Years Experience) Location: [Onsite/Remote/Hybrid – Customize as per your need] Employment Type: Full-time

🔍 About the Role: We are seeking a seasoned AI/ML Architect / Senior Engineer with 8+ years of hands-on experience in Artificial Intelligence, Machine Learning, and Data Science. The ideal candidate will have worked across various industries (e.g., healthcare, finance, retail, manufacturing) and demonstrated a deep understanding of the end-to-end ML lifecycle, from data ingestion to model deployment and monitoring. You'll play a strategic and technical leadership role in designing and scaling intelligent systems while staying ahead of evolving market trends in AI, ML, and GenAI.

🎯 Key Responsibilities: Architect, design, and implement scalable AI/ML solutions across multiple domains. Translate business problems into technical solutions using data-driven methodologies. Lead model development, deployment, and operationalization using MLOps best practices. Evaluate and incorporate emerging trends such as Generative AI (e.g., LLMs), AutoML, Federated Learning, and Responsible AI. Mentor and guide junior engineers and data scientists. Collaborate with product managers, data engineers, and stakeholders for end-to-end delivery. Establish best practices in experimentation, model validation, reproducibility, and monitoring. Work with the modern data stack and cloud ecosystems (AWS, Azure, GCP).

🧠 Required Skills and Experience: 8+ years of experience in AI/ML, Data Science, or related roles. Proficient in Python, R, SQL, and key libraries (TensorFlow, PyTorch, Scikit-learn, XGBoost, etc.). Strong experience with MLOps tools (MLflow, Kubeflow, SageMaker, Vertex AI, etc.). Expertise in developing, tuning, and deploying ML/DL models in production environments. Experience in NLP, Computer Vision, Time-Series Forecasting, and/or GenAI.
Familiar with model explainability (SHAP, LIME), fairness, and bias mitigation techniques. Solid knowledge of cloud-based architectures (Azure, AWS, or GCP). Experience across domains such as fintech, healthcare, e-commerce, logistics, or manufacturing. 🌐 Preferred Qualifications: Master's or Ph.D. in Computer Science, Data Science, Statistics, or a related field. Experience integrating AI with business applications (e.g., ERP, CRM, RPA platforms). Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines. Familiarity with data governance, privacy-preserving AI, and compliance standards (GDPR, HIPAA). 🌟 Why Join Us? Work with cross-functional, forward-thinking teams on impactful projects. Opportunity to lead initiatives in cutting-edge AI and industry 4.0 innovations . Flexible work culture with continuous learning and growth opportunities. Access to the latest tools, cloud infrastructure, and high-compute environments.
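The architect posting above asks for familiarity with model explainability (SHAP, LIME). The simplest relative of those techniques is permutation importance: permute one feature column and measure how much accuracy drops; a big drop means the model relies on that feature. The sketch below illustrates only that idea with an invented toy model and data; SHAP and LIME produce finer-grained, per-prediction attributions, and a real implementation would shuffle the column randomly (here it is rotated by one position as a deterministic stand-in, to keep the example reproducible).

```python
def accuracy(model, rows, labels):
    """Fraction of rows the model labels correctly."""
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(model, rows, labels, feature_idx):
    """Accuracy drop after permuting one feature column across rows.
    Rotating the column stands in for a random shuffle so the demo is
    deterministic; the interpretation is the same."""
    base = accuracy(model, rows, labels)
    column = [r[feature_idx] for r in rows]
    column = column[-1:] + column[:-1]  # rotate by one position
    permuted = [r[:feature_idx] + [v] + r[feature_idx + 1:]
                for r, v in zip(rows, column)]
    return base - accuracy(model, permuted, labels)

# Invented toy model that only looks at feature 0; feature 1 is pure noise.
model = lambda row: 1 if row[0] > 0.5 else 0
rows = [[0.1, 9.0], [0.9, 1.0], [0.2, 5.0], [0.8, 2.0], [0.3, 7.0], [0.7, 3.0]]
labels = [model(r) for r in rows]
```

Permuting feature 0 destroys the model's accuracy while permuting feature 1 changes nothing, which is exactly the signal an explainability report surfaces.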

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Indore, Madhya Pradesh, India

Remote

Experience : 6.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Netskope) What do you need for this opportunity? Must have skills required: Airflow, LLMs, MLOps, Generative AI, Python Netskope is Looking for: About The Role Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault tolerant solutions to their ever growing data needs. This is a hands-on, impactful role that will help lead development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments. 
What's In It For You You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics Your contributions will have a major impact on our global customer-base and across the industry through our market-leading products You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills. What You Will Be Doing Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security. Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments. Apply MLOps best practices to deploy and monitor machine learning models in production. Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP. Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications. Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats. Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards. Drive innovation by integrating the latest AI/ML techniques into security products and services. Mentor junior engineers and provide technical leadership across projects. Required Skills And Experience AI/ML Expertise Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection. Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn. Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow). Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases. 
Data Engineering Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing. Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink). Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery. Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems. Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake. Cloud and Security Knowledge Strong understanding of cloud platforms (AWS, Azure, GCP) and their services. Experience with network security concepts, extended detection and response, and threat modeling. Software Engineering Proficiency in Python, Java, or Scala for data and ML solution development. Expertise in scalable system design and performance optimization for high-throughput applications. Leadership and Collaboration Proven ability to lead cross-functional teams and mentor engineers. Strong communication skills to present complex technical concepts to stakeholders. Education BSCS Or Equivalent Required, MSCS Or Equivalent Strongly Preferred How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
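The Netskope role above centers on anomaly detection over logs and telemetry. Before reaching for the neural approaches the posting lists, the baseline idea is statistical: flag observations far from the series mean in standard-deviation units. The sketch below applies that z-score baseline to invented per-minute failed-login counts; the 2.5 threshold is an arbitrary illustrative choice, and production systems would use learned models over far richer features.

```python
import statistics

def zscore_anomalies(counts, threshold=2.5):
    """Indices whose value lies more than `threshold` population standard
    deviations from the mean: the simplest telemetry anomaly detector,
    standing in for the learned models a production system would use.
    The threshold is a tunable assumption, not a universal constant."""
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # a perfectly flat series has no outliers
    return [i for i, c in enumerate(counts)
            if abs(c - mean) / stdev > threshold]

# Invented per-minute failed-login counts; minute 4 is a brute-force burst.
counts = [3, 2, 4, 3, 200, 2, 3, 4, 3, 2]
```

In a real pipeline the counts would come from streaming aggregation (e.g. Kafka/Spark windows), and the flagged indices would feed alerting or a downstream classifier rather than being consumed directly.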

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Chandigarh, India

Remote

Experience : 6.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Netskope) What do you need for this opportunity? Must have skills required: Airflow, LLMs, MLOps, Generative AI, Python Netskope is Looking for: About The Role Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault tolerant solutions to their ever growing data needs. This is a hands-on, impactful role that will help lead development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments. 
What's In It For You You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics Your contributions will have a major impact on our global customer-base and across the industry through our market-leading products You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills. What You Will Be Doing Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security. Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments. Apply MLOps best practices to deploy and monitor machine learning models in production. Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP. Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications. Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats. Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards. Drive innovation by integrating the latest AI/ML techniques into security products and services. Mentor junior engineers and provide technical leadership across projects. Required Skills And Experience AI/ML Expertise Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection. Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn. Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow). Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases. 
Data Engineering Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing. Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink). Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery. Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems. Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake. Cloud and Security Knowledge Strong understanding of cloud platforms (AWS, Azure, GCP) and their services. Experience with network security concepts, extended detection and response, and threat modeling. Software Engineering Proficiency in Python, Java, or Scala for data and ML solution development. Expertise in scalable system design and performance optimization for high-throughput applications. Leadership and Collaboration Proven ability to lead cross-functional teams and mentor engineers. Strong communication skills to present complex technical concepts to stakeholders. Education BSCS Or Equivalent Required, MSCS Or Equivalent Strongly Preferred How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Surat, Gujarat, India

Remote

Experience: 6.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients - Netskope)

What do you need for this opportunity?
Must-have skills: Airflow, LLMs, MLOps, Generative AI, Python

Netskope is looking for:

About The Role
Please note, this team is hiring across all levels; candidates are individually assessed and appropriately leveled based on their skills and experience.

The Data Engineering team builds and optimizes systems spanning data ingestion, processing, and storage. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems, providing our customers with accurate, real-time, fault-tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products.

We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing, and storage solutions. This is a hands-on, impactful role that will help lead the development, validation, publishing, and maintenance of logical and physical data models that support these environments.
(Role responsibilities and requirements are identical to the Netskope listing above.)

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Mysore, Karnataka, India

Remote

(This listing is identical to the Netskope Data Engineering role above; only the location differs.)

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Vijayawada, Andhra Pradesh, India

Remote

(This listing is identical to the Netskope Data Engineering role above; only the location differs.)

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Dehradun, Uttarakhand, India

Remote

(This listing is identical to the Netskope Data Engineering role above; only the location differs.)

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Thiruvananthapuram, Kerala, India

Remote

(This listing is identical to the Netskope Data Engineering role above; only the location differs.)

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Patna, Bihar, India

Remote

Experience : 6.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Netskope) What do you need for this opportunity? Must have skills required: Airflow, LLMs, MLOps, Generative AI, Python Netskope is Looking for: About The Role Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault tolerant solutions to their ever growing data needs. This is a hands-on, impactful role that will help lead development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments. 
What's In It For You You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics Your contributions will have a major impact on our global customer-base and across the industry through our market-leading products You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills. What You Will Be Doing Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security. Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments. Apply MLOps best practices to deploy and monitor machine learning models in production. Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP. Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications. Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats. Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards. Drive innovation by integrating the latest AI/ML techniques into security products and services. Mentor junior engineers and provide technical leadership across projects. Required Skills And Experience AI/ML Expertise Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection. Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn. Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow). Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases. 
Data Engineering
Expertise in designing and optimizing ETL/ELT pipelines for large-scale data processing.
Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink).
Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery.
Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems.
Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake.

Cloud and Security Knowledge
Strong understanding of cloud platforms (AWS, Azure, GCP) and their services.
Experience with network security concepts, extended detection and response, and threat modeling.

Software Engineering
Proficiency in Python, Java, or Scala for developing data and ML solutions.
Expertise in scalable system design and performance optimization for high-throughput applications.

Leadership and Collaboration
Proven ability to lead cross-functional teams and mentor engineers.
Strong communication skills for presenting complex technical concepts to stakeholders.

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 weeks ago
