
6009 TensorFlow Jobs - Page 29

JobPe aggregates listings so they are easy to find in one place, but applications are submitted directly on the original job portal.

5.0 - 10.0 years

4 - 8 Lacs

Noida

Work from Office

We are looking for a skilled Senior Azure Data Engineer with 5 to 10 years of experience to design and implement scalable data pipelines using Azure technologies, driving data transformation, analytics, and machine learning. The ideal candidate will have a strong background in data engineering and proficiency in Python, PySpark, and Spark Pools.
Roles and Responsibilities: Design and implement scalable Databricks data pipelines using PySpark. Transform raw data into actionable insights through data analysis and machine learning. Build, deploy, and maintain machine learning models using MLlib or TensorFlow. Optimize cloud data integration from Azure Blob Storage, Data Lake, and SQL/NoSQL sources. Execute large-scale data processing using Spark Pools and fine-tune configurations for efficiency. Collaborate with cross-functional teams to identify business requirements and develop solutions.
Job Requirements: Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Minimum 5 years of experience in data engineering, with at least 3 years specializing in Azure Databricks, PySpark, and Spark Pools. Proficiency in Python, PySpark, Pandas, NumPy, SciPy, Spark SQL, DataFrames, RDDs, Delta Lake, Databricks Notebooks, and MLflow. Hands-on experience with Azure Data Lake, Blob Storage, Synapse Analytics, and other relevant technologies. Strong understanding of data modeling, data warehousing, and ETL processes. Experience with agile development methodologies and version control systems.
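For illustration only (not part of the posting above): a minimal PySpark sketch of the kind of Databricks/Delta Lake pipeline the role describes. It assumes a Delta-enabled Spark session; the table paths and column names (event_ts, customer_id, amount) are hypothetical placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal sketch: read raw events from a Delta table, aggregate daily totals,
# and write a curated Delta output. Paths and columns are hypothetical.
spark = SparkSession.builder.appName("curate-daily-events").getOrCreate()

raw_path = "/mnt/datalake/raw/events"         # hypothetical source table
curated_path = "/mnt/datalake/curated/daily"  # hypothetical sink table

raw = spark.read.format("delta").load(raw_path)

daily = (
    raw.withColumn("event_date", F.to_date("event_ts"))
       .groupBy("customer_id", "event_date")
       .agg(
           F.sum("amount").alias("total_amount"),
           F.count("*").alias("event_count"),
       )
)

daily.write.format("delta").mode("overwrite").save(curated_path)
```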

Posted 1 week ago

Apply

7.0 - 10.0 years

7 - 11 Lacs

Noida

Work from Office

We are looking for a skilled Data Scientist with 7 to 10 years of experience to join our team in Gurgaon. The ideal candidate will have expertise in designing, developing, and deploying scalable AI/ML solutions for Big Data.
Roles and Responsibilities: Design and develop scalable AI/ML solutions for Big Data using Python, SQL, TensorFlow, PyTorch, Scikit-learn, and Big Data ML libraries. Deploy AI/ML models and manage Big Data pipelines on cloud-based services such as GCP, AWS, or Azure. Extract actionable insights from large datasets and apply statistical methods to drive business growth. Communicate complex findings effectively to both technical and non-technical audiences through data visualization skills. Collaborate with cross-functional teams to identify business problems and develop data-driven solutions. Develop and maintain large-scale data systems and architectures to support business intelligence and analytics.
Job Requirements: Bachelor's or Master's degree in Computer Science, Data Science, Engineering, Statistics, Mathematics, or a related field. Proven experience with cloud-based Big Data services for AI/ML model deployment and Big Data pipelines. Strong proficiency in programming languages such as Python and SQL, and experience with data modeling, warehousing, and ETL in Big Data contexts. Ability to work with large datasets, apply statistical methods, and communicate complex findings effectively. Experience with data visualization tools and techniques to communicate complex data insights. Strong analytical and problem-solving skills, with the ability to think critically and outside the box.

Posted 1 week ago

Apply

3.0 - 8.0 years

3 - 6 Lacs

Noida

Work from Office

We are looking for a skilled MLOps professional with 3 to 11 years of experience to join our team in Hyderabad. The ideal candidate will have a strong background in Machine Learning, Artificial Intelligence, and Computer Vision.
Roles and Responsibilities: Design, build, and maintain efficient, reusable, and tested code in Python and other applicable languages and library tools. Understand stakeholder needs and convey them to developers. Work on automating and improving development and release processes. Deploy Machine Learning (ML) to large production environments. Drive continuous learning in AI and computer vision. Test and examine code written by others and analyze results. Identify technical problems and develop software updates and fixes. Collaborate with software developers and engineers to ensure development follows established processes and works as intended. Plan out projects and participate in project management decisions.
Job Requirements: Minimum 3 years of hands-on experience with AWS services and products (Batch, SageMaker, Step Functions, CloudFormation/CDK). Strong Python experience. Minimum 3 years of experience with Machine Learning/AI or Computer Vision development/engineering. Ability to provide technical leadership to developers for designing and securing solutions. Understanding of Linux utilities and Bash. Familiarity with containerization using Docker. Experience with data pipeline frameworks, such as Metaflow, is preferred. Experience with Lambda, SQS, ALB/NLBs, SNS, and S3 is preferred. Practical experience deploying Computer Vision/Machine Learning solutions at scale into production. Exposure to technologies/tools such as Keras, Pandas, TensorFlow, PyTorch, Caffe, NumPy, DVC/CML.
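A sketch (not from the posting above) of a small data/ML pipeline using Metaflow, which the listing names as a preferred framework; the step names and the toy training logic are hypothetical.

```python
from metaflow import FlowSpec, step

class TrainFlow(FlowSpec):
    """Toy Metaflow pipeline: load data, train a model, report a metric."""

    @step
    def start(self):
        # Hypothetical in-memory dataset standing in for a real feature query.
        self.X = [[0.0], [1.0], [2.0], [3.0]]
        self.y = [0, 0, 1, 1]
        self.next(self.train)

    @step
    def train(self):
        from sklearn.linear_model import LogisticRegression
        self.model = LogisticRegression().fit(self.X, self.y)
        self.accuracy = self.model.score(self.X, self.y)
        self.next(self.end)

    @step
    def end(self):
        print(f"training accuracy: {self.accuracy:.2f}")

if __name__ == "__main__":
    TrainFlow()
```

Run locally with `python train_flow.py run` (assuming the file is saved under that name).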

Posted 1 week ago

Apply

10.0 - 15.0 years

20 - 25 Lacs

Noida

Work from Office

We are looking for a skilled OCI Cloud AI Architect with 10 to 15 years of experience in Oracle Cloud and Artificial Intelligence, based in Bengaluru. The ideal candidate should have strong Python development experience (including Streamlit, XML, and JSON) and hands-on knowledge of LLMs.
Roles and Responsibilities: Design, architect, and deploy full-stack AI/ML and Gen AI solutions over the OCI AI stack. Develop and implement AI governance, security, guardrails, and responsible AI frameworks. Work on data ingestion, feature engineering, model training, evaluation, deployment, and monitoring. Implement Agentic AI frameworks such as CrewAI, AutoGen, and multi-agent orchestration workflows. Apply fine-tuning, parameter-efficient tuning, and prompt engineering techniques to models. Collaborate with cross-functional teams to integrate AI/ML models into existing systems.
Job Requirements: Strong Python development experience, including Streamlit, XML, and JSON. Deep hands-on knowledge of LLMs (e.g., Cohere, GPT) and prompt engineering techniques (e.g., zero-shot, few-shot, CoT, ReAct). Experience with AI/ML/Gen AI frameworks (e.g., TensorFlow, PyTorch, Hugging Face, LangChain) and Vector DBs (e.g., Pinecone, Milvus). Proficient in implementing Agentic AI frameworks (e.g., CrewAI, AutoGen, and multi-agent orchestration workflows). Strong understanding and practical application of prompt engineering, fine-tuning, and parameter-efficient tuning. Experience with front-end languages such as React, Angular, or JavaScript. Experience with Oracle ATP, 23ai Databases, and vector queries. Must have knowledge of data ingestion, feature engineering, model training, evaluation, deployment, and monitoring.
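Purely illustrative (not part of the posting above): a minimal few-shot prompt builder of the kind the prompt-engineering requirement implies. The classification task, example tickets, and labels are hypothetical, and no particular LLM SDK is assumed.

```python
# Minimal few-shot prompt construction, independent of any particular LLM SDK.
# The task, examples, and labels below are hypothetical placeholders.
FEW_SHOT_EXAMPLES = [
    ("Invoice total does not match the purchase order.", "billing"),
    ("Cannot log in after the latest password reset.", "access"),
]

def build_prompt(query: str) -> str:
    lines = ["Classify each support ticket into one of: billing, access, other.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Ticket: {text}")
        lines.append(f"Category: {label}")
        lines.append("")
    lines.append(f"Ticket: {query}")
    lines.append("Category:")
    return "\n".join(lines)

print(build_prompt("Refund has not appeared on my statement."))
```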

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Location: Remote / Client Location | Experience: 3+ Years | Employment Type: Full-Time (Client Deployment via Hubnex Labs)
Role Overview: Join a pioneering AI team to design and deploy cutting-edge deep learning solutions for computer vision and audio analysis. You'll leverage CNNs, Vision Transformers, attention mechanisms, and multi-modal techniques to solve complex real-world challenges in object detection, video processing, and audio classification.
Key Responsibilities: Design, develop, and optimize deep learning models for image/video analysis (object detection, segmentation) and audio classification tasks. Implement and fine-tune CNN architectures, Vision Transformers (ViT, Swin), and attention mechanisms (SE, CBAM, self/cross-attention). Process multi-modal data: for video, apply spatiotemporal modeling (3D CNNs, temporal attention); for audio, extract features (spectrograms, MFCCs) and build classification pipelines. Utilize pretrained models (transfer learning) and multi-task learning frameworks. Optimize models for accuracy, speed, and robustness using PyTorch/TensorFlow. Collaborate with MLOps teams to deploy solutions into production.
Required Skills: Programming: advanced Python (PyTorch/TensorFlow). Computer vision: Vision Transformers (ViT, Swin, DeiT), object detection (YOLO, SSD, Faster R-CNN, DETR), video analysis (temporal modeling). Audio processing: feature extraction (MFCCs, spectrograms) and classification. Modeling expertise: attention mechanisms (self/cross-attention, SE, CBAM), transfer learning and fine-tuning, training strategies (LR scheduling, early stopping, data augmentation). Experience handling large-scale datasets and building data pipelines.
Preferred Qualifications: Exposure to multi-modal learning (combining vision/audio/text). Familiarity with R for statistical analysis. Publications or projects in CVPR/NeurIPS/ICML.
This role is for a client of Hubnex Labs. Selected candidates will represent Hubnex while working directly with the client's AI team.
Skills: data handling, audio processing, video analysis, TensorFlow, AI/ML, attention mechanisms, R, computer vision, vision transformers, object detection, transfer learning, feature extraction, advanced Python, PyTorch
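As an illustration of one skill named above (not part of the posting): a compact squeeze-and-excitation (SE) channel-attention block in PyTorch. The channel count and reduction ratio are arbitrary.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention: global pool, bottleneck MLP, sigmoid gate."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: B x C x 1 x 1
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                              # per-channel gate in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                             # excite: rescale channels

features = torch.randn(2, 64, 32, 32)    # hypothetical feature map
print(SEBlock(64)(features).shape)       # torch.Size([2, 64, 32, 32])
```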

Posted 1 week ago

Apply

2.0 - 3.0 years

0 Lacs

India

Remote

Location: Work from Anywhere Type: Full-Time | Contract-Based | Flexible Experience: 2 - 3 Years Industry: AI, SaaS, Startup Tech About HYI.AI: HYI.AI is a Virtual Assistance and GenAI platform built for startups, entrepreneurs, and tech innovators. We specialize in offering virtual talent solutions, GenAI tools, and custom AI/ML deployments to help founders and businesses scale smarter and faster. Your Role As an AI/ML Engineer, you will be responsible for designing, building, and deploying machine learning models that solve real-world problems for our clients. You’ll work closely with product teams, data engineers, and developers to turn raw data into intelligent systems. Tech Skills We Value Strong knowledge of Python and ML libraries (e.g., Scikit-learn, TensorFlow, PyTorch) Experience with NLP, Computer Vision, or Recommendation Systems Familiarity with data pipelines and tools like Airflow, Spark, or MLFlow Knowledge of cloud platforms (AWS/GCP/Azure) and MLOps practices Experience with SQL and NoSQL databases Bonus: Knowledge of LLMs or fine-tuning transformer models What We’re Looking For Hands-on experience in machine learning or data science Strong understanding of supervised, unsupervised, and deep learning algorithms Ability to deploy models into production and monitor performance Excellent problem-solving and communication skills Proficiency in English Portfolio, GitHub, or project showcase is highly encouraged What You’ll Get Work with Global clients Flexible engagement: Freelance or full-time contract Access to a growing network of ML professionals via HYI.AI Support for personal branding, thought leadership, and project visibility

Posted 1 week ago

Apply

6.0 - 11.0 years

25 - 30 Lacs

Pune, Chennai, Bengaluru

Hybrid

Job Title: Senior Python Developer - AI/ML | Location: [On-site/Remote/Hybrid Location] | Experience Required: 6+ years | Employment Type: [Full-time]
Job Summary: We are seeking a highly skilled and experienced Senior Python Developer with strong AI/ML expertise to join our dynamic team. The ideal candidate will play a critical role in designing, developing, and deploying intelligent applications and services. You will collaborate closely with data scientists, ML engineers, and product teams to deliver scalable, high-performance AI solutions.
Key Responsibilities: Design, develop, and maintain robust and scalable Python applications with integrated AI/ML components. Build and optimize machine learning models for classification, regression, NLP, computer vision, or recommendation systems. Work with large datasets, perform data wrangling, and implement data pipelines using tools such as Pandas, PySpark, or Apache Airflow. Integrate ML models into production-grade applications and APIs (using Flask, FastAPI, etc.). Collaborate with cross-functional teams including Data Engineering, DevOps, and Product Management to define system architecture and implementation strategies. Participate in code reviews, mentor junior developers, and contribute to best coding practices and documentation. Monitor and improve performance of deployed models and applications.
Required Skills and Qualifications: Bachelor's/Master's degree in Computer Science, Data Science, Engineering, or a related field. 6+ years of professional experience in Python development. Strong knowledge of machine learning algorithms, statistical modeling, and data science techniques. Experience with ML frameworks and libraries such as scikit-learn, TensorFlow, Keras, PyTorch, and XGBoost. Proficient in data processing and analysis tools like NumPy, Pandas, SQL, and Spark. Hands-on experience deploying ML models in production environments. Experience with REST APIs and microservice architecture. Familiarity with containerization tools like Docker and orchestration tools like Kubernetes is a plus. Version control (Git), CI/CD pipelines, and cloud platform (AWS, GCP, Azure) experience preferred. Excellent problem-solving skills, analytical thinking, and attention to detail.
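A minimal sketch (not part of the posting above) of serving an ML model behind a FastAPI endpoint, as the responsibilities describe; the toy model, feature schema, and route are hypothetical.

```python
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.linear_model import LogisticRegression

# Hypothetical toy model trained at startup; in practice the model would be
# loaded from a registry or artifact store instead.
model = LogisticRegression().fit([[0.0], [1.0], [2.0], [3.0]], [0, 0, 1, 1])

app = FastAPI()

class Features(BaseModel):
    value: float  # single hypothetical feature

@app.post("/predict")
def predict(features: Features):
    label = int(model.predict([[features.value]])[0])
    return {"prediction": label}

# Run with: uvicorn app:app --reload   (assuming this file is saved as app.py)
```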

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Designation: AI/ML Engineer Location: Gurugram Experience: 3+ years Budget: Upto 35 LPA Industry: AI Product Role and Responsibilities: Model Development: Design, train, test, and deploy machine learning models using frameworks like Pytorch and TensorFlow, specifically for virtual try-on applications with a focus on draping and fabric simulation. Task-Specific Modeling: Build models for tasks such as Natural Language Processing (NLP), Speech-to-Text (STT), and Text-to-Speech (TTS) that integrate seamlessly with computer vision applications in the virtual try-on domain. Image Processing: Implement advanced image processing techniques including enhancement, compression, restoration, filtering, and manipulation to improve the accuracy and realism of draping in virtual try-on systems. Feature Extraction & Segmentation: Apply feature extraction methods, image segmentation techniques, and draping algorithms to create accurate and realistic representations of garments on virtual models. Machine Learning Pipelines: Develop and maintain ML pipelines for data ingestion, processing, and transformation to support large-scale deployments of virtual try-on solutions. Deep Learning & Draping: Build and train convolutional neural networks (CNNs) for image recognition, fabric draping, and texture mapping tasks crucial to the virtual try-on experience. AI Fundamentals: Leverage a deep understanding of AI fundamentals, including machine learning, computer vision, draping algorithms, and generative AI (Gen AI) techniques to drive innovation in virtual try-on technology. Programming: Proficiently code in Python and work with other programming languages like Java, C++, or R as required. Cloud Integration: Utilize cloud-based AI platforms such as AWS, Azure, or Google Cloud to deploy and scale virtual try-on solutions, with a focus on real-time processing and rendering. Data Analysis: Perform data analysis and engineering to optimize the performance and accuracy of AI models, particularly in the context of fabric draping and garment fitting. Continuous Learning: Stay informed about the latest trends and developments in machine learning, deep learning, computer vision, draping technologies, and generative AI (Gen AI), applying them to virtual try-on projects. Skills Required: Experience: Minimum of 5 years in Computer Vision Engineering or a similar role, with a focus on virtual try-on, draping, or related applications. Programming: Strong programming skills in Python, with extensive experience in Pytorch and TensorFlow. Draping & Fabric Simulation: Hands-on experience with draping algorithms, fabric simulation, and texture mapping techniques. Data Handling: Expertise in data pre-processing, feature engineering, and data analysis to support high-quality model development, especially for draping and virtual garment fitting. Deep Neural Networks & Gen AI: Extensive experience in working with Deep Neural Networks, Generative Adversarial Networks (GANs), Conditional GANs, Transformers, and other generative AI techniques relevant to virtual try-on and draping. Advanced Techniques: Proficiency with cutting-edge techniques like Stable Diffusion, Latent Diffusion, InPainting, Text-to-Image, Image-to-Image models, and their application in computer vision and virtual try-on technology. Algorithm Knowledge: Strong understanding of machine learning algorithms and techniques, including deep learning, supervised and unsupervised learning, reinforcement learning, natural language processing, and generative AI.

Posted 1 week ago

Apply

6.0 - 11.0 years

8 - 12 Lacs

Noida

Work from Office

Company: Apptad Technologies Pvt Ltd. | Industry: Employment Firms/Recruitment Services Firms | Experience: 6 to 12 years | Title: Java + Python + AI Developer | Ref: 6566505
Role: Java + Python + AI Developer | Experience: 6+ Years | Location: (Remote/Hybrid opportunities available) | Employment Type: Sub-Con
Key Responsibilities: Design, develop, and maintain enterprise-level applications using Java and Python. Integrate AI/ML models into production-grade systems. Collaborate with cross-functional teams to understand business needs and translate them into technical solutions. Optimize application performance and scalability. Participate in code reviews and design discussions, and contribute to a culture of technical excellence.
6+ years of experience in backend development with Java and Python. Hands-on experience with AI/ML libraries and frameworks (e.g., TensorFlow, PyTorch, Scikit-learn). Solid understanding of OOPs, REST APIs, and microservices architecture. Experience with cloud platforms like AWS, Azure, or GCP is a plus. Strong problem-solving skills and ability to work independently or in a team. Excellent communication and collaboration skills.
Nice to Have: Exposure to NLP, Computer Vision, or data analytics. Experience in deploying ML models in production. Familiarity with containerization tools (Docker, Kubernetes).

Posted 1 week ago

Apply

4.0 - 6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: Senior Software Development Engineer - Artificial Intelligence & MLOps. Location: Any Location. Years of Experience: 4-6 Years.
Job Summary: We are seeking a highly skilled and experienced Senior Software Development Engineer specializing in Artificial Intelligence and MLOps. The ideal candidate will have a strong background in machine learning operations and software development, with a proven track record of deploying and managing machine learning models in production environments. This role requires a deep understanding of MLOps principles and practices, as well as the ability to collaborate effectively with cross-functional teams to deliver high-quality AI solutions.
Responsibilities: Design, develop, and implement scalable machine learning models and algorithms. Manage the end-to-end lifecycle of machine learning models, including deployment, monitoring, and maintenance. Collaborate with data scientists, software engineers, and product managers to define and deliver AI solutions that meet business needs. Establish and maintain MLOps best practices, including version control, CI/CD pipelines, and automated testing. Optimize model performance and ensure reliability in production environments. Conduct code reviews and provide mentorship to junior team members. Stay updated with the latest trends and advancements in AI and MLOps technologies.
Mandatory Skills: Strong expertise in MLOps practices and tools (e.g., MLflow, Kubeflow, TFX). Proficiency in programming languages such as Python, Java, or Scala. Experience with cloud platforms (e.g., AWS, Azure, GCP; GCP is preferred) and containerization technologies (e.g., Docker, Kubernetes). Solid understanding of machine learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn). Experience with data engineering and ETL processes. Strong problem-solving skills and ability to work in a fast-paced environment.
Preferred Skills: Familiarity with data visualization tools (e.g., Tableau, Power BI). Knowledge of big data technologies (e.g., Hadoop, Spark). Experience with Agile methodologies and project management tools. Understanding of DevOps practices and tools.
Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 7-10 years of experience in software development with a focus on AI and MLOps. Proven track record of successfully deploying machine learning models in production. Excellent communication and collaboration skills.
If you are passionate about artificial intelligence and have a strong foundation in MLOps, we encourage you to apply and join our innovative team at TechM.
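For illustration only (not part of the posting above): a minimal MLflow tracking sketch of the experiment-logging practice the mandatory skills call for; the experiment name, hyperparameter, and toy dataset are placeholders.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("demo-iris")  # hypothetical experiment name
with mlflow.start_run():
    C = 0.5
    model = LogisticRegression(C=C, max_iter=500).fit(X_train, y_train)
    mlflow.log_param("C", C)                                      # hyperparameter
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
    mlflow.sklearn.log_model(model, "model")                      # versioned artifact
```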

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Project Role : AI / ML Engineer Project Role Description : Develops applications and systems that utilize AI tools, Cloud AI services, with proper cloud or on-prem application pipeline with production ready quality. Be able to apply GenAI models as part of the solution. Could also include but not limited to deep learning, neural networks, chatbots, image processing. Must have skills : Machine Learning Operations Good to have skills : NA Minimum 5 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As an AI / ML Engineer, you will develop applications and systems that leverage artificial intelligence tools and cloud AI services. Your typical day will involve designing and implementing production-ready application pipelines, ensuring high-quality standards. You will also explore the integration of generative AI models into solutions, while working on various aspects of deep learning, neural networks, chatbots, and image processing to enhance functionality and performance. Roles & Responsibilities: - Expected to be an SME. - Collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute on key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Facilitate workshops and meetings to gather requirements and feedback from stakeholders. - Develop and maintain documentation related to integration processes and solutions. - Automate complex tasks and workflows across infrastructure management, data processing, application deployments, and IT operations. - Identify process improvement opportunities and implement optimizations. - Utilize RPA solutions where applicable. - Design and implement efficient data pipelines for machine learning models. - Leverage AI/ML techniques (NLP, ML algorithms, data analysis) to create intelligent automation solutions - Perform regular maintenance (patching, upgrades, configuration changes) of security solutions. - Build and implement on-premises and cloud-based security solutions using SaaS, IaaS, and orchestration tools. - Implement robust monitoring and logging solutions. - Report on security status, incidents, and improvements to management. - Develop and maintain APIs and microservices for automation workflows. - Stay abreast of cybersecurity trends, vulnerabilities, and attack vectors. - Proactively propose enhancements to security controls. - Provide exceptional support to internal and external users. - Conduct regular security assessments and vulnerability scans. Professional & Technical Skills: - Must Have Skills: Proficiency in Artificial Intelligence & Machine Learning Operations, Python and PowerShell, including experience with automation scripts and frameworks. - Solid understanding of AI/ML concepts and experience with relevant libraries (e.g., TensorFlow, PyTorch, scikit-learn). - Proficiency in data manipulation and analysis using Pandas and NumPy. - Knowledge of version control systems (e.g., Git) and CI/CD pipelines. - Hands-on experience with orchestration tools (e.g., Ansible, Puppet). - Strong understanding of integration frameworks and methodologies. - Experience with cloud-based solutions and deployment strategies. - Familiarity with data management and data governance practices. Additional Information: - The candidate should have minimum 5 years of experience in Machine Learning Operations. - This position is based at our Hyderabad office. - A 15 years full time education is required. 

Posted 1 week ago

Apply

2.0 - 6.0 years

4 - 8 Lacs

Faridabad

Work from Office

Key Responsibilities
Predictive Modeling & Deep Learning: Develop ML/DL models for predicting match scores & outcomes, team & player performances, and statistics. Implement time-series forecasting models (LSTMs, Transformers, ARIMA, etc.) for score predictions. Train and fine-tune reinforcement learning models for strategic cricket decision-making. Develop ensemble learning techniques to improve predictive accuracy.
In-Depth Cricket Analytics: Design models to analyze player form, team strengths, matchups, and opposition weaknesses. Build player impact and performance forecasting models based on pitch conditions, opposition, and recent form. Extract insights from match footage and live tracking data using deep learning-based video analytics.
Data Processing & Engineering: Collect, clean, and preprocess structured and unstructured cricket datasets from APIs, scorecards, and video feeds. Build data pipelines (ETL) for real-time and historical data ingestion. Work with large-scale datasets using big data tools (Spark, Hadoop, Dask, etc.).
Model Deployment & MLOps: Deploy ML/DL models into production environments (AWS, GCP, Azure, etc.). Develop APIs to serve predictive models for real-time applications. Implement CI/CD pipelines, model monitoring, and retraining workflows for continuous improvement.
Performance Metrics & Model Explainability: Define and optimize evaluation metrics (MAE, RMSE, ROC-AUC, etc.) for model performance tracking. Implement explainable AI techniques to improve model transparency. Continuously update models with new match data, player form & injuries, and team form changes.
About the Company: At Lifease Solutions LLP, we believe that design and technology are the perfect blend to solve any problem and bring any idea to life. Lifease Solutions is a leading provider of software solutions and services that help businesses succeed. Based in Noida, India, we are committed to delivering high-quality, innovative solutions that drive value and growth for our customers. Our expertise in the finance, sports, and capital market domains has made us a trusted partner for companies around the globe. We take pride in our ability to turn small projects into big successes, and we are always looking for ways to help our clients maximize their IT investments.
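A toy sketch (not part of the posting above) of the LSTM-based score forecasting it mentions, using Keras; the window length, layer sizes, and synthetic per-over data are arbitrary.

```python
import numpy as np
import tensorflow as tf

# Toy setup: predict the next over's runs from the previous 10 overs (synthetic data).
WINDOW = 10
runs = np.random.poisson(lam=8, size=300).astype("float32")  # synthetic per-over runs
X = np.stack([runs[i:i + WINDOW] for i in range(len(runs) - WINDOW)])[..., None]
y = runs[WINDOW:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(WINDOW, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

print("next-over forecast:", float(model.predict(X[-1:], verbose=0)[0, 0]))
```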

Posted 1 week ago

Apply

2.0 - 6.0 years

4 - 8 Lacs

Gurugram

Work from Office

Key Responsibilities
Predictive Modeling & Deep Learning: Develop ML/DL models for predicting match scores & outcomes, team & player performances, and statistics. Implement time-series forecasting models (LSTMs, Transformers, ARIMA, etc.) for score predictions. Train and fine-tune reinforcement learning models for strategic cricket decision-making. Develop ensemble learning techniques to improve predictive accuracy.
In-Depth Cricket Analytics: Design models to analyze player form, team strengths, matchups, and opposition weaknesses. Build player impact and performance forecasting models based on pitch conditions, opposition, and recent form. Extract insights from match footage and live tracking data using deep learning-based video analytics.
Data Processing & Engineering: Collect, clean, and preprocess structured and unstructured cricket datasets from APIs, scorecards, and video feeds. Build data pipelines (ETL) for real-time and historical data ingestion. Work with large-scale datasets using big data tools (Spark, Hadoop, Dask, etc.).
Model Deployment & MLOps: Deploy ML/DL models into production environments (AWS, GCP, Azure, etc.). Develop APIs to serve predictive models for real-time applications. Implement CI/CD pipelines, model monitoring, and retraining workflows for continuous improvement.
Performance Metrics & Model Explainability: Define and optimize evaluation metrics (MAE, RMSE, ROC-AUC, etc.) for model performance tracking. Implement explainable AI techniques to improve model transparency. Continuously update models with new match data, player form & injuries, and team form changes.
About the Company: At Lifease Solutions LLP, we believe that design and technology are the perfect blend to solve any problem and bring any idea to life. Lifease Solutions is a leading provider of software solutions and services that help businesses succeed. Based in Noida, India, we are committed to delivering high-quality, innovative solutions that drive value and growth for our customers. Our expertise in the finance, sports, and capital market domains has made us a trusted partner for companies around the globe. We take pride in our ability to turn small projects into big successes, and we are always looking for ways to help our clients maximize their IT investments.

Posted 1 week ago

Apply

2.0 - 6.0 years

4 - 8 Lacs

Ghaziabad

Work from Office

Key Responsibilities
Predictive Modeling & Deep Learning: Develop ML/DL models for predicting match scores & outcomes, team & player performances, and statistics. Implement time-series forecasting models (LSTMs, Transformers, ARIMA, etc.) for score predictions. Train and fine-tune reinforcement learning models for strategic cricket decision-making. Develop ensemble learning techniques to improve predictive accuracy.
In-Depth Cricket Analytics: Design models to analyze player form, team strengths, matchups, and opposition weaknesses. Build player impact and performance forecasting models based on pitch conditions, opposition, and recent form. Extract insights from match footage and live tracking data using deep learning-based video analytics.
Data Processing & Engineering: Collect, clean, and preprocess structured and unstructured cricket datasets from APIs, scorecards, and video feeds. Build data pipelines (ETL) for real-time and historical data ingestion. Work with large-scale datasets using big data tools (Spark, Hadoop, Dask, etc.).
Model Deployment & MLOps: Deploy ML/DL models into production environments (AWS, GCP, Azure, etc.). Develop APIs to serve predictive models for real-time applications. Implement CI/CD pipelines, model monitoring, and retraining workflows for continuous improvement.
Performance Metrics & Model Explainability: Define and optimize evaluation metrics (MAE, RMSE, ROC-AUC, etc.) for model performance tracking. Implement explainable AI techniques to improve model transparency. Continuously update models with new match data, player form & injuries, and team form changes.
About the Company: At Lifease Solutions LLP, we believe that design and technology are the perfect blend to solve any problem and bring any idea to life. Lifease Solutions is a leading provider of software solutions and services that help businesses succeed. Based in Noida, India, we are committed to delivering high-quality, innovative solutions that drive value and growth for our customers. Our expertise in the finance, sports, and capital market domains has made us a trusted partner for companies around the globe. We take pride in our ability to turn small projects into big successes, and we are always looking for ways to help our clients maximize their IT investments.

Posted 1 week ago

Apply

2.0 - 6.0 years

4 - 8 Lacs

Noida

Work from Office

Key Responsibilities
Predictive Modeling & Deep Learning: Develop ML/DL models for predicting match scores & outcomes, team & player performances, and statistics. Implement time-series forecasting models (LSTMs, Transformers, ARIMA, etc.) for score predictions. Train and fine-tune reinforcement learning models for strategic cricket decision-making. Develop ensemble learning techniques to improve predictive accuracy.
In-Depth Cricket Analytics: Design models to analyze player form, team strengths, matchups, and opposition weaknesses. Build player impact and performance forecasting models based on pitch conditions, opposition, and recent form. Extract insights from match footage and live tracking data using deep learning-based video analytics.
Data Processing & Engineering: Collect, clean, and preprocess structured and unstructured cricket datasets from APIs, scorecards, and video feeds. Build data pipelines (ETL) for real-time and historical data ingestion. Work with large-scale datasets using big data tools (Spark, Hadoop, Dask, etc.).
Model Deployment & MLOps: Deploy ML/DL models into production environments (AWS, GCP, Azure, etc.). Develop APIs to serve predictive models for real-time applications. Implement CI/CD pipelines, model monitoring, and retraining workflows for continuous improvement.
Performance Metrics & Model Explainability: Define and optimize evaluation metrics (MAE, RMSE, ROC-AUC, etc.) for model performance tracking. Implement explainable AI techniques to improve model transparency. Continuously update models with new match data, player form & injuries, and team form changes.
About the Company: At Lifease Solutions LLP, we believe that design and technology are the perfect blend to solve any problem and bring any idea to life. Lifease Solutions is a leading provider of software solutions and services that help businesses succeed. Based in Noida, India, we are committed to delivering high-quality, innovative solutions that drive value and growth for our customers. Our expertise in the finance, sports, and capital market domains has made us a trusted partner for companies around the globe. We take pride in our ability to turn small projects into big successes, and we are always looking for ways to help our clients maximize their IT investments.

Posted 1 week ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Data Scientist – Agentic AI Company: Pranathi Software Services Pvt. Ltd. Location: Hyderabad, India Job Type: Full-Time | Onsite/Hybrid Experience Level: Ph.D. with 0–2+ Years in AI Research or Applied ML About Pranathi Software Services Pvt. Ltd.: Pranathi Software is a fast-growing IT solutions company focused on innovation, emerging technologies, and building scalable solutions for global clients. We are now expanding into next-generation AI systems and are looking for exceptional talent to lead our research in Agentic Artificial Intelligence. Required Qualifications: Ph.D. in Computer Science, Artificial Intelligence, Cognitive Systems, or related fields. Deep understanding of Agentic AI, Reinforcement Learning, Multi-Agent Systems, or LLM-based Agents. Experience with frameworks such as LangChain, AutoGPT, OpenAI Gym, or Ray. Strong coding skills in Python and experience with PyTorch, TensorFlow, or JAX. Proven track record of academic publications and research impact. Interested can share your resume at giribabu@pranathiss.com

Posted 1 week ago

Apply

4.0 - 5.0 years

25 - 30 Lacs

Indore, Surat, Mumbai (All Areas)

Work from Office

Job Title: Data Scientist | Location: Mumbai/Indore/Surat | Job Type: Full-time
About Us: Everestek is a forward-thinking organization specializing in designing and deploying cutting-edge AI solutions. From powerful recommendation engines and intuitive chatbots to state-of-the-art generative AI and deep learning applications, we empower businesses to harness the full potential of their data. Our mission is to create transformative, data-driven products that enable our clients to innovate faster, personalize their offerings, and stay ahead in an ever-evolving tech landscape. At Everestek, we foster a culture of collaboration, creativity, and continuous learning. Our dynamic team of data scientists, engineers, and innovators is dedicated to pushing the boundaries of what's possible in artificial intelligence. If you're passionate about solving complex problems and want to be part of an organization that values experimentation and impact, Everestek is the place for you.
Key Responsibilities: Design, build, and optimize machine/deep learning models, including predictive models, recommendation systems, and Gen-AI-based solutions. Develop and implement advanced AI agents capable of performing autonomous tasks, decision-making, and executing requirement-specific workflows. Use prompt engineering to develop new and enhance existing Gen-AI applications (chatbots, RAG). Perform advanced data manipulation, cleansing, and analysis to extract actionable insights from structured and unstructured data. Create scalable and efficient recommendation systems that enhance user personalization and engagement. Design and deploy AI-driven chatbots and virtual assistants, focusing on natural language understanding and contextual relevance. Implement and optimize machine and deep learning models for NLP tasks, including text classification, sentiment analysis, and language generation. Explore, develop, and deploy state-of-the-art technologies for AI agents, integrating them with broader enterprise systems. Collaborate with cross-functional teams to gather business requirements and deliver AI-driven solutions tailored to specific use cases. Automate workflows using advanced AI tools and frameworks to increase efficiency and reduce manual interventions. Stay informed about cutting-edge advancements in AI, machine learning, NLP, and Gen AI applications, and assess their relevance to the organization. Effectively communicate technical solutions and findings to both technical and non-technical stakeholders.
Qualifications and Skills:
Required: At least 3 years of experience working in data science. Python proficiency and hands-on experience with libraries such as Pandas, NumPy, Matplotlib, NLTK, scikit-learn, and TensorFlow. Proven experience in designing and implementing AI agents and autonomous systems. Strong expertise in machine learning, including predictive modeling and recommendation systems. Hands-on experience with deep learning frameworks like TensorFlow or PyTorch, focusing on NLP and AI-driven solutions. Advanced understanding of natural language processing (NLP) techniques and tools, including transformers like BERT, GPT, or similar models, including open-source LLMs. Experience in prompt engineering for AI models to enhance functionality and adaptability. Strong knowledge of cloud platforms (AWS) for deploying and scaling AI models. Familiarity with AI agent frameworks like LangChain, OpenAI APIs, or other agent-building tools. Advanced skills in relational databases (Postgres), vector databases, querying, analytics, semantic search, and data manipulation. Strong problem-solving and critical-thinking skills, with the ability to handle complex technical challenges. Hands-on experience working with API frameworks like Flask, FastAPI, etc. Git and GitHub proficiency. Excellent communication and documentation skills to articulate complex ideas to diverse audiences.
Preferred: Hands-on experience building and deploying conversational AI, chatbots, and virtual assistants. Familiarity with MLOps pipelines and CI/CD for AI/ML workflows. Experience with reinforcement learning or multi-agent systems. Experience with agentic AI systems using frameworks like LangChain or similar. BE (or Master's) in Computer Science, Data Science, or Artificial Intelligence.
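Illustrative only (not part of the posting above): a simplified similarity-search sketch using TF-IDF and cosine similarity as a stand-in for the embedding-plus-vector-database semantic search the listing describes; the documents and query are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical document store; a production system would use embedding models
# and a vector database rather than an in-memory TF-IDF matrix.
docs = [
    "Reset your password from the account settings page.",
    "Refunds are processed within five business days.",
    "Enable two-factor authentication for added security.",
]
query = "How long does a refund take?"

vectorizer = TfidfVectorizer().fit(docs + [query])
doc_vecs = vectorizer.transform(docs)
query_vec = vectorizer.transform([query])

scores = cosine_similarity(query_vec, doc_vecs)[0]
best = scores.argmax()
print(f"best match (score {scores[best]:.2f}): {docs[best]}")
```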

Posted 1 week ago

Apply

3.0 - 9.0 years

11 - 16 Lacs

Mumbai

Work from Office

Role: Quant Developer / Quant Risk (Axioma/Barra) - Manager / Associate Manager, Afternoon Shift. Office Location: Mumbai/Pune/Gurgaon/Hyderabad.
Job Requirement: We are mainly looking for quant developers with agile/scrum project management experience and strong Python, SQL, and KDB skills. Databricks is a plus, Fabric is a plus, PySpark/TensorFlow is a plus, and C/C++ is a plus, as are other data management tools (i.e., Arctic/Mongo/dashboarding). Hedge fund experience is a plus. Key factors (but not 100% required) are factor, Barra, and Axioma. Another optional, separate keyword is KDB or KDB+.

Posted 1 week ago

Apply

10.0 years

0 Lacs

India

On-site

Experience: 10 years. Client: Persistent. Kindly share a few profiles for the Python + AI/ML requirement. Below is the JD for reference:
Overall Experience: 10 years onwards. Relevant Experience: 5 years onwards.
Must-have skills (also should-have skills): Should have experience in leading a team.
Data Analysis and Modelling: Perform exploratory data analysis (EDA) to uncover insights and inform model development. Develop, validate, and deploy machine learning models using Python and relevant libraries (e.g., scikit-learn, TensorFlow, PyTorch). Implement statistical analysis and hypothesis testing to drive data-driven decision-making.
Data Engineering: Design, build, and maintain scalable data pipelines to process and transform large datasets. Collaborate with data engineers to ensure data quality and system reliability. Optimize data storage solutions for efficient querying and analysis.
Software Development: Write clean, maintainable, and efficient code in Python. Develop APIs and integrate machine learning models into production systems. Implement best practices for version control, testing, and continuous integration.
Good-to-have skills:
Gen AI: Utilize Gen AI tools and frameworks to enhance data analysis and model development. Integrate Gen AI solutions into existing workflows and systems. Stay updated with the latest advancements in Gen AI and apply them to relevant projects.
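A brief sketch (not from the JD above) of the model development and validation step it describes, using scikit-learn on a built-in dataset; the model choice and hyperparameters are arbitrary.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)

# Cross-validated estimate on the training split, then a final held-out check.
cv_scores = cross_val_score(model, X_train, y_train, cv=5)
model.fit(X_train, y_train)

print(f"cv accuracy: {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}")
print(f"test accuracy: {model.score(X_test, y_test):.3f}")
```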

Posted 1 week ago

Apply

4.0 - 7.0 years

6 - 16 Lacs

Chennai, Coimbatore, Bengaluru

Work from Office

Role & Responsibilities: We are seeking an enthusiastic and highly skilled Senior AI/ML Developer to join our team and work on exciting AI initiatives. The ideal candidate should have worked in the CRM domain (e.g., Zoho CRM or Freshworks) and have expert knowledge of AI/ML models, LLMs, vector databases, and RAG techniques to automate workflows, create conversational analytics assistants, and build similarity models via clustering/embeddings.
1. Develop CRM Agentic AI: Design and implement AI agents using frameworks like LangChain, AutoGPT, or ReAct. Automate CRM processes by enabling agents to handle emails, generate follow-ups, update pipelines, and classify tickets. Integrate models for sentiment analysis, intent recognition, entity extraction, and summarization. Connect agents to external APIs, calendars, and CRM tools (e.g., Salesforce, HubSpot). Manage agent memory and context via vector databases like FAISS or Pinecone.
2. Conversational KPI Dashboard Assistant: Build a chatbot interface that allows users to query CRM KPIs via natural language. Use Python, LangChain, and OpenAI APIs to parse queries and trigger backend data retrieval tools. Render results in both dashboard format (e.g., Recharts/Chart.js) and tabular views. Support queries like "What's the total expected revenue this quarter?" or "Show me the lead-to-win ratio by region."
3. Similarity and Recommendation Engines: Extract features from existing AI/ML libraries. Build and manage an embedding database for semantic similarity search. Implement clustering algorithms (e.g., K-Means) to group songs based on genre. Develop a scoring system to compare new tracks to a database of existing songs.
Required Experience and Qualifications: 4+ years of experience in AI/ML model development. Expert knowledge of the CRM domain, including workflows, forms, dashboards/reports, and data import/export. Proficient in Python, with strong experience in TensorFlow, PyTorch, and Scikit-learn. Working knowledge of LangChain, AutoGPT, and LLM integration. Expertise in developing agents for CRM and dashboards in a leading organisation. Expertise in implementing Agentic AI solutions using LLM models, RAG, and vector databases. Proven experience in vector similarity, embedding models, and clustering techniques. Passionate about AI innovation and solving real-world challenges with automation and intelligence. Proven experience in designing platform architecture and managing API integrations. Strong background in microservices architecture and REST API design. Demonstrated experience with scalability and performance optimization for cloud-based platforms. Experience with Agile methodologies and leading cross-functional teams in a fast-paced environment. Excellent communication and presentation skills for delivering product demos and gathering customer feedback. Proven ability to manage a product roadmap and deliver on tight deadlines. Experience with data security, compliance, and industry best practices. A bachelor's degree in computer science, engineering, or a related field.
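A small illustration (not part of the posting above) of the clustering requirement it mentions: K-Means over toy feature vectors with scikit-learn. The synthetic features and cluster count are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical 2-D feature vectors (e.g., reduced audio or text embeddings).
rng = np.random.default_rng(0)
features = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(50, 2)),
    rng.normal(loc=[5, 5], scale=0.5, size=(50, 2)),
    rng.normal(loc=[0, 5], scale=0.5, size=(50, 2)),
])

scaled = StandardScaler().fit_transform(features)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaled)

print("cluster sizes:", np.bincount(kmeans.labels_))
```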

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description
In This Role, Your Responsibilities Will Be: Collaborate with cross-functional teams to identify opportunities to apply data-driven insights and develop innovative solutions to complex business problems. Develop, implement, and maintain SQL data pipelines and ETL processes to collect, clean, and curate large and diverse datasets from various sources. Design, build, and deploy predictive and prescriptive machine learning models and generative AI prompt engineering to help the organization make better data-driven decisions. Perform exploratory data analysis, feature engineering, and data visualization to gain insights and identify potential areas for improvement. Optimize machine learning models and algorithms to ensure scalability, accuracy, and performance while minimizing computational costs. Continuously monitor and evaluate the performance of deployed models, updating or refining them as needed. Stay abreast of the latest developments in data science, machine learning, and big data technologies to drive innovation and maintain a competitive advantage. Develop and implement best practices in data management, data modeling, code, and data quality assurance. Communicate effectively with team members, stakeholders, and senior management to translate data insights into actionable strategies and recommendations.
Who You Are: You take initiative, don't wait for instructions, and proactively seek opportunities to contribute. You adapt quickly to new situations and apply knowledge effectively. You clearly convey ideas and actively listen to others to complete assigned tasks as planned.
For This Role, You Will Need: Bachelor's degree in Computer Science, Data Science, Statistics, or a related field; a master's degree or higher is preferred. 3-5 years of experience with popular data science libraries and frameworks such as scikit-learn, SQL, SciPy, TensorFlow, PyTorch, NumPy, and Pandas. Minimum 2 years of experience in data science projects leveraging machine learning, deep learning, transformer-based large language models, or other cutting-edge AI technologies. Strong programming skills in Python are a must. Solid understanding of calculus, linear algebra, probability, machine learning algorithms, Transformer-architecture-based models, and data modeling techniques. Proficiency in data visualization tools, such as Matplotlib, Seaborn, Bokeh, or Dash. Strong problem-solving and analytical skills with an ability to synthesize complex data sets into actionable insights. Excellent written and verbal communication skills, with the ability to present technical concepts to non-technical audiences.
Preferred Qualifications that Set You Apart: Possession of relevant certification/s in data science from reputed universities specializing in AI. Prior experience in engineering would be nice to have.
Our Culture & Commitment to You: At Emerson, we prioritize a workplace where every employee is valued, respected, and empowered to grow. We foster an environment that encourages innovation, collaboration, and diverse perspectives, because we know that great ideas come from great teams. Our commitment to ongoing career development and growing an inclusive culture ensures you have the support to thrive. Whether through mentorship, training, or leadership opportunities, we invest in your success so you can make a lasting impact. We believe diverse teams, working together, are key to driving growth and delivering business results. We recognize the importance of employee wellbeing. We prioritize providing competitive benefits plans, a variety of medical insurance plans, an Employee Assistance Program, employee resource groups, recognition, and much more. Our culture offers flexible time-off plans, including paid parental leave (maternal and paternal), vacation and holiday leave.

Posted 1 week ago

Apply

2.0 - 7.0 years

4 - 9 Lacs

Pune

Work from Office

Key Responsibilities: Design, develop, test, and deploy scalable web applications using Python and related technologies. Build responsive and interactive user interfaces using HTML, CSS, and JavaScript frameworks. Develop and maintain automation scripts using Selenium for testing and data extraction. Integrate machine learning models into production environments. Collaborate with Stakeholders, and other developers to deliver high-quality products. Write clean, maintainable, and efficient code following best practices. Troubleshoot, debug, and upgrade existing systems. Participate in code reviews and contribute to team knowledge sharing. Required Skills & Qualifications: 2+ years of professional experience in Python full stack development. Proficiency in Python and frameworks such as Django or Flask. Strong front-end development skills with HTML, CSS, Node.JS and JavaScript (React.js or Vue.js is a plus). Experience with Selenium for automation and testing. Familiarity with Machine Learning concepts and libraries (e.g., scikit-learn, TensorFlow, or PyTorch). Experience with RESTful APIs and third-party integrations. Knowledge of version control systems like Git. Understanding of database systems (SQL and NoSQL). Strong problem-solving skills and attention to detail. Excellent communication and teamwork abilities. Preferred Qualifications: Experience with cloud platforms (AWS, Azure, or GCP). Familiarity with containerization tools like Docker. Exposure to CI/CD pipelines and DevOps practices. Knowledge of Agile/Scrum methodologies. Qualifications Bachelor's degree in Computer Science, Information Technology, or a related field. Minimum of 2+ years of experience as a Python Developer. Relevant certifications in Python development or related technologies are a plus. Additional Information Work from Office only Shift Timings: 4:00 pm - 1:00 am OR 6:00 pm - 3:00 am .
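Illustrative only (not part of the listing above): a minimal Selenium automation sketch of the kind the role mentions. The URL and the element looked up are hypothetical, and a local Chrome installation is assumed.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Assumes a local Chrome installation; Selenium Manager resolves the driver.
driver = webdriver.Chrome()
try:
    driver.get("https://example.com")                  # hypothetical target page
    heading = driver.find_element(By.TAG_NAME, "h1")   # hypothetical element
    print("page heading:", heading.text)
finally:
    driver.quit()
```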

Posted 1 week ago

Apply

12.0 - 17.0 years

45 - 50 Lacs

Noida

Work from Office

Key duties & responsibilities
Primary responsibilities include: Manage the architectural and technical direction, in line with the business goals, on existing as well as new automation development. Provide technical guidance and architecture support for automations, working with IT leadership to understand the enterprise technical strategy and help in providing application roadmaps. Enable teams to implement roadmaps or perform modernization activities. Enable leadership and development teams to manage technical obsolescence/tech debt. Make sure applications meet architecture standards. Estimate, design, and develop scalable RPA solutions in a highly collaborative agile environment. Create an Intelligent Automation framework including reusable components and ML models. Understand the business lines' aims and aspirations, define the scope of automation, conduct pre-analyses of technical feasibility, and estimate development time, enabling smooth testing and rollout. Identify and implement the right advanced data and analytics tools, and visualization and decision tools like Python/R, for real-time analytics. Provide solutions for business needs/requirements, and participate in or drive discussions with the business for the documentation of requirements and solutions that can be taken up by delivery. Coach, mentor, and develop the pool of Staff Engineers and engineers within the organization, ensuring alignment of the pool to company-wide vision and strategies. Serve as the point of escalation for Automation Architecture/Design concerns and technical challenges.
Qualification: Graduate (B.Tech/BE/MCA/BCA/MSc/BSc in Computer Science) or equivalent academic qualification. Master Certification in Automation Anywhere.
Experience, Skills and Knowledge
Must have: Bachelor's degree in computer engineering or a similar field. 12+ years of hands-on experience in the following, of which at least 5 years in leading RPA and Intelligent Automation and managing teams; should be from a development background. Must have experience in the RPA tool Automation Anywhere and Intelligent Automation with Python and R. Expertise in OCR and related technology. Experience in Java, TensorFlow, PyTorch, Keras, ABBYY, Hugging Face, ELK, Splunk, RPA frameworks, and automation reporting tools. Modern engineering practices such as Behavior Driven Development (BDD) and Test-Driven Development (TDD) using the Microsoft .NET stack. Full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations. Software development background and scripting with one or more of the following languages: .NET, Java, PowerShell, Python. Excellent project management skills. Self-motivated, goal-oriented, and persistent; very organized and comfortable producing and managing the details. Experience working with globally distributed teams in a multinational company. Familiarity with software architecture principles including open source, object-oriented programming, SaaS, service-oriented architecture, microservices and design patterns, and how architecture impacts team organization and performance.
Nice to have: Experience in different RPA and BPM tools like WorkFusion, UiPath, Blue Prism, etc. Experience in AI/ML. Knowledge of the healthcare revenue cycle, EMRs, practice management systems, FHIR, HL7 and HIPAA is a major plus. Awareness of DevOps, Agile, Kanban, Scaled Agile Framework (SAFe).
Key competency profile: Ability to work with cross-functional teams. Ability to juggle multiple projects simultaneously. Excellent leadership skills. Excellent written and verbal communication. High attention to detail. Experience in customer service.
Key success criteria: Provide automation architecture and detailed technical design documents. Present alternate design solutions and decision points by highlighting pros and cons. Identify and develop prototypes and lead PoCs for bot architecture that assist in design decisions. Provide detailed build estimates to deliver the architecture in scope. Identify dependencies across various new and existing components/technologies along with the associated technical risks. Produce a solution architecture document that details the architecture blueprint, risks, assumptions, and technical dependencies. Deliver the bot framework and architecture-related components on time and continuously, with high-quality research and an innovative approach following modern RPA practices and processes. Team player, with the ability to direct individual accomplishments toward organizational objectives.

Posted 1 week ago

Apply

2.0 - 5.0 years

11 - 15 Lacs

Mumbai

Work from Office

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Pune, Maharashtra, India; Mumbai, Maharashtra, India; Hyderabad, Telangana, India; Bengaluru, Karnataka, India.
Minimum qualifications: Bachelor's degree in Computer Science, Engineering, Mathematics, a related field, or equivalent practical experience. 10 years of experience in Big Data, Data Warehouse, Data Modelling, Data Mining, and Hadoop. Experience in building multi-tier, high availability applications with modern technologies such as NoSQL, MongoDB, SparkML, and TensorFlow. Experience in GCP.
Preferred qualifications: Experience in Big Data, information retrieval, data mining, or Machine Learning. Experience with IaC and CI/CD tools like Terraform, Ansible, Jenkins, etc. Experience architecting, developing software, or Big Data solutions in virtualized environments. Experience with encryption techniques like symmetric, asymmetric, HSMs, and envelope encryption. Ability to implement secure key storage using a Key Management System.
About the job: The Google Cloud Platform team helps customers transform and build what's next for their business, all with technology built in the cloud. Our products are developed for security, reliability and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers (developers, small and large businesses, educational institutions and government agencies) see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape the future of how businesses of all sizes use technology to connect with customers, employees and partners.
As a Data and Analytics Consultant, you will guide customers on how to ingest, store, process, analyze, and explore/visualize data on the Google Cloud Platform. You will work on data migrations and modernization projects, and with customers to design data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform/product challenges. You will have an understanding of data governance and security controls, and will travel to customer sites to deploy solutions and deliver workshops to educate and empower customers. You will work with Product Management and Product Engineering teams to build and drive excellence in products.
Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.
Responsibilities: Interact with stakeholders to translate customer requirements into recommendations for appropriate solution architectures and advisory services. Engage with technical leads and partners to lead high-velocity migration and modernization to Google Cloud Platform (GCP). Help Google Cloud customers with current infrastructure assessment, design and architect the goal infrastructure, develop a migration plan, and deliver technical workshops to educate them on GCP. Participate in technical and design discussions with technical teams to speed up the adoption process and ensure best practices during implementation. Develop and implement data quality and governance procedures to ensure the accuracy and reliability of data.
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 1 week ago

Apply

3.0 - 6.0 years

15 - 19 Lacs

Hyderabad

Work from Office

Minimum qualifications: Bachelor's degree or equivalent practical experience. 5 years of experience with software development in one or more programming languages, and with data structures/algorithms. 3 years of experience testing, maintaining, or launching software products. 1 year of experience with software design and architecture. 1 year of experience in generative AI and machine learning. 1 year of experience implementing core AI/ML concepts.

Preferred qualifications: Master's degree or PhD in Computer Science or a related technical field. 1 year of experience in a technical leadership role. Experience with Python, Notebooks, and ML frameworks (e.g., TensorFlow). Experience in large-scale data systems.

About the job: Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design, and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google's needs, with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities, and be enthusiastic to take on new problems across the full stack as we continue to push technology forward.

In this role, you will be responsible for designing and developing next-generation software systems at the intersection of data analytics (data warehousing, business intelligence, Spark, Dataflow, Data Catalog, and more) and generative AI. You will work closely with our team of experts to research, explore, and develop innovative solutions that will bring generative AI to the forefront of Google Cloud Platform (GCP) Data Analytics for our customers.

Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities: Write and test product or system development code. Collaborate with peers and stakeholders through design and code reviews to ensure best practices amongst available technologies (e.g., style guidelines, checking code in, accuracy, testability, and efficiency). Contribute to existing documentation or educational content and adapt content based on product/program updates and user feedback. Triage product or system issues and debug/track/resolve them by analyzing the sources of issues and their impact on hardware, network, or service operations and quality. Design and implement solutions in one or more specialized Machine Learning (ML) areas, leverage ML infrastructure, and demonstrate experience in a chosen field.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 1 week ago

Apply


