8.0 - 12.0 years
0 Lacs
pune, maharashtra
On-site
As a Senior Data Scientist specializing in Generative AI, you will be responsible for leading technical development and mentoring teams. You must have a Master's degree in Computer Science, Data Science, or a related field, with a Ph.D. preferred. Additionally, you should have 8-12 years of experience in a Data Scientist or equivalent role, including at least 4 years of specialized experience in Generative AI.

Your primary tasks will include working with financial data, applying NLP techniques, refining prompt engineering strategies for LLMs, collaborating with stakeholders, developing and testing Python code for GenAI solutions, integrating with vector databases, monitoring MLOps pipelines, researching emerging GenAI technologies, and troubleshooting and debugging GenAI models in production. It is crucial to stay up-to-date with the rapidly evolving GenAI landscape and continuously learn and explore new tools and techniques.

To excel in this role, you must possess demonstrable experience in the full lifecycle of real-world, production-level GenAI project implementation, including deploying, monitoring, and maintaining models in a live environment. Expert-level Python skills are mandatory, along with proficiency in key libraries for AI/ML and GenAI applications such as TensorFlow, PyTorch, Scikit-learn, XGBoost, and LightGBM. Preferred qualifications include experience with MLOps, cloud computing platforms, conversational AI solutions, and contributions to research and publications in Generative AI. You should also have expertise in prompt engineering, fine-tuning LLMs, optimizing performance, and designing scalable API architectures for GenAI applications.

The role of Senior Data Scientist at our organization requires a highly motivated individual with a strong passion for Generative AI and a proven ability to drive technical innovation and deliver impactful results within a team setting. This job description provides a high-level overview of the role, and other job-related duties may be assigned as required.
Posted 17 hours ago
6.0 - 10.0 years
0 Lacs
noida, uttar pradesh
On-site
About Us: At CLOUDSUFI, a Google Cloud Premier Partner, we are a Data Science and Product Engineering organization dedicated to building innovative products and solutions for the Technology and Enterprise industries. Our core belief is in the transformative power of data to drive business growth and facilitate better decision-making processes. With a unique blend of expertise in business processes and cutting-edge infrastructure, we collaborate with our clients to extract value from their data and optimize enterprise operations.

Our Values: We are a team driven by passion and empathy, placing high importance on human values. Our mission is to enhance the quality of life for our employees, customers, partners, and the community at large.

Equal Opportunity Statement:

Role: Lead AI Engineer
Location: Noida, Delhi/NCR (Hybrid)
Experience: 5-10 years

Role Overview: As a Senior Data Scientist / AI Engineer at CLOUDSUFI, you will play a pivotal role in our technical leadership team. Your primary responsibility will be to conceptualize, develop, and implement advanced AI and Machine Learning solutions, focusing on areas such as Generative AI and Large Language Models (LLMs). You will be tasked with designing and managing scalable AI microservices, leading research into cutting-edge techniques, and translating intricate business requirements into impactful products. This position demands a combination of profound technical proficiency, strategic thinking, and leadership qualities.

Key Responsibilities:
- Architect & Develop AI Solutions: Build and deploy robust and scalable machine learning models, emphasizing Natural Language Processing (NLP), Generative AI, and LLM-based Agents.
- Build AI Infrastructure: Develop and manage AI-powered microservices utilizing frameworks like Python FastAPI to ensure optimal performance and reliability.
- Lead AI Research & Innovation: Keep abreast of the latest AI/ML advancements and spearhead research efforts to assess and implement state-of-the-art models and techniques for enhanced performance and cost efficiency.
- Solve Business Problems: Collaborate with product and business teams to identify challenges and devise data-driven solutions that drive significant business value, such as constructing business rule engines or predictive classification systems.
- End-to-End Project Ownership: Take charge of the complete lifecycle of AI projects, from conceptualization, data processing, and model development to deployment, monitoring, and continuous iteration on cloud platforms.
- Team Leadership & Mentorship: Drive learning initiatives within the engineering team, provide guidance to junior data scientists and engineers, and establish best practices for AI development.
- Cross-Functional Collaboration: Work closely with software engineers to seamlessly integrate AI models into production systems and contribute to the overall system architecture.

Required Skills and Qualifications:
- Master's (M.Tech.) or Bachelor's (B.Tech.) degree in Computer Science, Artificial Intelligence, Information Technology, or a related field.
- 6+ years of professional experience in roles such as Data Scientist, AI Engineer, or similar.
- Proficiency in Python and its core data science libraries (e.g., PyTorch, Huggingface Transformers, Pandas, Scikit-learn).
- Hands-on experience in building and fine-tuning Large Language Models (LLMs) and implementing Generative AI solutions.
- Expertise in developing and deploying scalable systems on cloud platforms, particularly AWS, with experience in GCS as a bonus.
- Strong background in Natural Language Processing (NLP), including multilingual models and transcription.
- Familiarity with containerization technologies, specifically Docker.
- Solid understanding of software engineering principles and experience in building APIs and microservices.

Preferred Qualifications:
- Strong project portfolio with a track record of publications in reputable AI/ML conferences.
- Experience in full-stack development (Node.js, Next.js) and various database technologies (SQL, MongoDB, Elasticsearch).
- Knowledge of setting up and managing CI/CD pipelines (e.g., Jenkins).
- Proven leadership skills in guiding technical teams and mentoring fellow engineers.
- Experience in developing custom tools or packages for data science workflows.
Posted 18 hours ago
12.0 - 16.0 years
0 Lacs
bhopal, madhya pradesh
On-site
We are looking for a highly motivated and detail-oriented Data Analyst with 12 years of experience to join our growing team. As a Data Analyst, you will be responsible for collecting, analyzing, and interpreting large datasets to help drive data-informed decisions across the organization. Your key responsibilities will include analyzing structured and unstructured data using SQL, Python, and Excel. You will also be creating dashboards and visualizations using Power BI and Google Sheets, developing and maintaining reports to track key metrics and business performance, and working with MongoDB to query and manage NoSQL data. Additionally, you will interpret data trends and patterns to support business strategy, collaborate with cross-functional teams to understand data needs, deliver insights, and apply statistical methods to solve real-world problems and validate hypotheses. Ensuring data accuracy, integrity, and security across platforms will also be part of your role. To be successful in this position, you should have 12 years of experience in a Data Analyst or similar role. Proficiency in Excel, including advanced formulas and pivot tables, as well as a strong knowledge of SQL for querying and data manipulation are essential. Hands-on experience with MongoDB and NoSQL databases, proficiency in Python for data analysis (Pandas, NumPy, etc.), and experience in building reports/dashboards using Power BI are required. You should also be skilled in using Google Sheets for collaboration and automation, possess strong logical thinking and problem-solving abilities, have a good understanding of statistics and data modeling techniques, and exhibit excellent communication skills and attention to detail. This is a full-time position with benefits including health insurance and Provident Fund. The work schedule is during the day shift and the work location is in person.,
Posted 18 hours ago
2.0 - 6.0 years
0 Lacs
pune, maharashtra
On-site
You will be responsible for Python development with expertise in Kubernetes/Docker containerization, API development, and working with both relational and NoSQL databases. Proficiency in version control using GitHub and experience with Airflow are essential, and familiarity with Python libraries such as Pandas, Flask, and FastAPI is required. Additionally, you will have the opportunity to work as a Manual Tester or Automation Tester, particularly within the capital market domain; candidates without capital market experience must have automation skill sets and may need NCFM certification. Collaboration with Python and Prajakta will also be part of your responsibilities.

As a Python Developer/Manual Tester/Automation Tester in the Pharmaceuticals industry, you should have a solid educational background with at least a graduation degree. This position is full-time and permanent, with key skills in capital market, manual testing, Python, Prajakta, Kubernetes, Docker, API, GitHub, Pandas, and Flask.

Job Code: GO/JC/760/2025
Recruiter Name: Sangeetha Tamil
Posted 18 hours ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
You should have a Bachelor's degree in Computer Science, Software Engineering, or a related field (or equivalent work experience), along with 3-5 years of experience as a QA Automation Engineer or in a similar role. Your expertise should include Python programming and pandas, and you must possess a strong understanding of software testing principles, methodologies, and best practices. Proficiency in test management and bug tracking tools such as Jira and TestRail is essential, and familiarity with automated testing frameworks and tools is also required.

Immediate joiners are preferred, and candidates with 3 years of relevant experience are highly encouraged to apply. It is important that your experience with pandas is clearly reflected in your resume. Kindly share your updated resume for consideration.
Posted 18 hours ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Full-Stack Developer at Intain, you will play a vital role in re-engineering structured finance using IntainAI & Ida. You will be responsible for building and tuning various models, including embeddings, transformers, and retrieval pipelines. Additionally, you will architect Python services using FastAPI/Flask to integrate ML/LLM workflows end-to-end. Your role will involve translating AI research into production features for data extraction, document reasoning, and risk analytics. You will have ownership of the full user flow, from back-end to front-end using React/TS, and will manage CI/CD on Azure & Docker. Leveraging AI coding tools such as Copilot, Cursor, and Jules will be essential to meet our high productivity standard of 1 dev = 4 devs.

The core stack you will work with includes Python, FastAPI/Flask, Pandas, SQL/NoSQL, Hugging Face, LangChain/RAG, REST/GraphQL, Azure, Docker, React.js, and vector DBs, with experience in Kubernetes considered a bonus.

To excel in this role, you should have a proven track record of shipping Python features and training/serving ML or LLM models. Comfort with reading papers/blogs, prototyping ideas, and evaluating model performance will be crucial. A 360 product mindset focusing on tests, reviews, secure code practices, and quick iterations is highly valued. We prioritize a bias for ownership and output, as we believe impact surpasses years of experience.

Joining Intain offers you the opportunity to be part of a small, expert team where your code and models make it to production swiftly. You will work on real-world AI challenges that underpin billions in structured-finance transactions. Your compensation and ESOPs are directly tied to the value you deliver, reflecting our commitment to recognizing your contributions.

For more information about Intain, visit our website at www.intainft.com and read about us on https://medium.com/intain.
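For context on what a Python service integrating an ML/LLM workflow behind FastAPI can look like, here is a minimal sketch; the route, payload fields, and `predict_risk` placeholder are illustrative assumptions, not Intain's actual API.

```python
# Minimal illustrative FastAPI service exposing a model prediction endpoint.
# The model logic, route, and payload fields are hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="document-risk-service")

class ScoreRequest(BaseModel):
    document_text: str  # raw text extracted from a financial document

class ScoreResponse(BaseModel):
    risk_score: float

def predict_risk(text: str) -> float:
    # Placeholder for a real embedding/transformer/retrieval pipeline.
    return min(1.0, len(text) / 10_000)

@app.post("/score", response_model=ScoreResponse)
def score(req: ScoreRequest) -> ScoreResponse:
    return ScoreResponse(risk_score=predict_risk(req.document_text))
```

Such a service could be run locally with `uvicorn app:app --reload` (assuming the file is saved as `app.py`) and containerized with Docker for deployment.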
Posted 19 hours ago
9.0 - 13.0 years
0 Lacs
karnataka
On-site
As a Data Scientist at our company, you will be an integral part of our growing team, utilizing your skills to harness data for solving complex problems, driving decision-making processes, and creating value. Your primary responsibilities will involve collaborating with cross-functional teams to analyze data, develop predictive models, and contribute to data-driven strategies that align with our business objectives.

You will be responsible for various key tasks, including:
- Data Analysis & Modeling: Extracting insights from large datasets and building predictive models to address business challenges.
- Machine Learning Development: Designing, implementing, and evaluating machine learning models using frameworks like TensorFlow, PyTorch, or scikit-learn.
- Data Wrangling: Cleaning, transforming, and preparing data from multiple sources for analysis and modeling.
- AutoML Implementation: Utilizing AutoML tools to enhance model selection and hyperparameter tuning for improved performance and efficiency.
- Cloud Computing: Leveraging cloud platforms such as AWS, Google Cloud, or Azure for data storage, processing, and deploying machine learning models.
- Collaboration: Working closely with data engineers, product managers, and stakeholders to define project requirements and translate business needs into data solutions.
- AI Impact Assessment: Evaluating and implementing AI technologies to improve data science processes and outcomes, staying updated on industry trends.
- Documentation & Reporting: Documenting methodologies, model performance, and insights for stakeholders to ensure transparency and reproducibility in analytics processes.

In addition to these responsibilities, the ideal candidate should possess the following mandatory technical and functional skills:
- Proficiency in programming languages like Python or R and SQL.
- Experience with machine learning and deep learning libraries such as TensorFlow, Keras, and Scikit-learn.
- Strong knowledge of data wrangling tools and techniques like Pandas and NumPy.
- Familiarity with cloud computing services, preferably Azure, for data storage and processing.
- Understanding of AutoML tools and frameworks for automating model selection and hyperparameter tuning.
- Experience with data visualization tools like Tableau, Matplotlib, and Seaborn for effective presentation of insights.
- Knowledge of big data technologies like Hadoop and Spark is considered a plus.

Preferred Technical & Functional Skills include:
- Strong oral and written communication skills for effectively conveying technical and non-technical concepts to peers and stakeholders.
- Ability to work independently with minimal supervision and escalate issues when necessary.
- Capability to mentor junior developers and take ownership of project deliverables, not just individual tasks.
- Understanding of business objectives and functions to support data needs effectively.

This role is suitable for individuals with a Bachelor's degree in Computer Science or a related field, or equivalent work experience, along with a minimum of 9+ years of relevant experience.
Posted 20 hours ago
6.0 - 10.0 years
0 Lacs
pune, maharashtra
On-site
You are invited to join our team as a Senior Python Developer specialized in AI/ML with a minimum of 6 years of relevant experience in Python and AI technologies. As a part of our dynamic team, you will be responsible for developing, deploying, and scaling AI/ML models using modern tools and frameworks. Your role will require proficiency in Python and experience with essential Python libraries such as NumPy, Pandas, and scikit-learn, along with TensorFlow or PyTorch. You should possess a solid understanding of machine learning algorithms, model evaluation techniques, and data preprocessing for both structured and unstructured data. Familiarity with version control systems like Git, strong problem-solving skills, and the ability to work both independently and collaboratively are key attributes we are looking for. Desirable skills for this role include experience in Natural Language Processing (NLP), Computer Vision, and Time-series forecasting. Exposure to MLOps tools for model deployment and monitoring, familiarity with Generative AI models like GPT and LLaMA, and hands-on experience with Prompt engineering, fine-tuning, and embedding techniques for LLMs are highly valued. Additionally, understanding LLM frameworks such as LangChain, Haystack, and Transformers (Hugging Face), as well as vector databases like Postgres and Pinecone, especially in RAG (Retrieval-Augmented Generation) pipelines, will be beneficial for this role. We also appreciate contributions to open-source AI/ML projects or published research papers. If you have the required skills and experience, along with a passion for AI/ML technologies, we would like to hear from you.,
Posted 20 hours ago
4.0 - 8.0 years
0 Lacs
haryana
On-site
We are seeking a skilled Python AI/ML Engineer with a strong passion for developing AI/ML solutions and contributing to ML pipelines. The ideal candidate will have in-depth knowledge of traditional and deep learning concepts, hands-on programming skills, experience in enterprise-grade software engineering, and a good understanding of MLOps practices.

As a Python AI/ML Engineer, your responsibilities will include designing, developing, and deploying scalable machine learning models for tasks such as classification, regression, NLP, and generative applications. You will build and optimize data transformation workflows using Python and Pandas, lead AI/ML project pipelines from data ingestion to model deployment and monitoring, and implement model observability and monitoring for drift. Additionally, you will develop REST APIs and integrate ML models with production systems using frameworks like FastAPI. Participation in code reviews, writing unit/integration tests, and ensuring high code quality will be essential, as will collaborating with cross-functional teams including Data Engineers, DevOps, and Product Managers.

To excel in this position, you must have advanced proficiency in Python and its libraries such as Pandas, NumPy, Scikit-learn, and TensorFlow/PyTorch. A strong understanding of asynchronous programming, FastAPI, and concurrency is required, along with a solid grasp of traditional ML concepts, experience with deep learning, and familiarity with MLOps tools. Experience with REST API development, integration testing, CI/CD practices, containerization tools like Docker, and cloud-based ML deployment will be beneficial. Furthermore, the ability to perform data transformation and aggregation tasks using Python/Pandas is essential for success in this role.

Experience in GenAI, working with LLMs, and exposure to tools like MLflow, Kubeflow, Airflow, or similar MLOps platforms are considered advantageous, and prior contributions to open-source ML tools or GitHub repositories will be a plus. If you have a strong background in Python, AI/ML, deep learning, and MLOps practices, and are enthusiastic about developing innovative solutions, we encourage you to apply for this position and be part of our dynamic team.
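As an illustration of the kind of Pandas-based data transformation workflow described above, here is a minimal sketch; the column names and cleaning rules are hypothetical assumptions for demonstration only.

```python
# Illustrative feature-engineering step with pandas; columns and rules are hypothetical.
import pandas as pd

def build_features(raw: pd.DataFrame) -> pd.DataFrame:
    df = raw.copy()
    # Basic cleansing: drop duplicates and fill missing numeric values.
    df = df.drop_duplicates()
    df["amount"] = df["amount"].fillna(df["amount"].median())
    # Simple enrichment: derive a binary feature from the amount distribution.
    df["is_high_value"] = (df["amount"] > df["amount"].quantile(0.9)).astype(int)
    # Aggregation: per-customer summary ready for model training.
    return df.groupby("customer_id", as_index=False).agg(
        total_amount=("amount", "sum"),
        high_value_txns=("is_high_value", "sum"),
    )

# Example usage:
# features = build_features(pd.read_csv("transactions.csv"))
```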
Posted 20 hours ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Full Stack Python Developer in our early-stage fintech team, you will be responsible for building an intelligent financial modeling platform that encompasses backend, frontend, and middleware development. Your role as an individual contributor will involve taking ownership of various aspects of the platform to ensure its scalability and seamless integration of financial datasets and compliance-driven logic. Your primary responsibilities will include developing scalable Python-based backend systems, creating interactive frontend applications using React.js or Next.js, maintaining middleware services and APIs, integrating AI/ML techniques for enhanced analytics and forecasting, translating complex business logic into modular components, and overseeing the full lifecycle of the project from architecture to deployment. Collaboration with product and strategy teams will be essential to ensure alignment with business objectives. To excel in this role, you should have at least 3 years of experience in Python 3.x, proficiency in libraries like NumPy, Pandas, and SciPy, proven frontend development skills with React.js or Next.js, backend experience with FastAPI, Flask, or Django, and a strong understanding of RESTful APIs, middleware integration, and microservices. Exposure to AI/ML frameworks such as Scikit-learn, TensorFlow, or PyTorch will be advantageous. Additionally, familiarity with Git, Docker, and CI/CD tools is desired, along with a startup mindset characterized by self-drive, agility, and outcome orientation. In terms of compensation, your salary will be commensurate with your level of knowledge and expertise, reflecting our emphasis on valuing depth of skills, innovative thinking, and the ability to drive impactful results. We offer you the autonomy to develop a full-stack fintech product independently, with flexible remote collaboration opportunities alongside our Chennai-based team. Our work culture is centered around experimentation, trust, and rapid iteration, providing you with a supportive environment to thrive and contribute effectively.,
Posted 20 hours ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
Join us to lead data modernization and maximize analytics utility. As a Data Owner Lead at JPMorgan Chase within the Data Analytics team, you play a crucial role in enabling the business to drive faster innovation through data. You are responsible for managing customer application and account opening data, ensuring its quality and protection, and collaborating with technology and business partners to execute data requirements. In this role, you will document data requirements for your product and coordinate with technology and business partners to manage change from legacy to modernized data. You will model data for efficient querying and use in LLMs, utilizing the business data dictionary and metadata. Additionally, you will develop ideas for data products by understanding analytics needs and create prototypes for productizing datasets. You will also be responsible for developing proof of concepts for natural language querying and collaborating with stakeholders to rollout capabilities. Supporting the team in building backlog, grooming initiatives, and leading data engineering scrum teams will also be part of your responsibilities. Furthermore, you will manage direct or matrixed staff to execute data-related tasks efficiently. To be successful in this role, you must hold a Bachelor's degree and have at least 5 years of experience in data modeling for relational, NoSQL, and graph databases. Expertise in data technologies such as analytics, business intelligence, machine learning, data warehousing, data management & governance, and AWS cloud solutions is crucial. Experience with natural language processing, machine learning, and deep learning toolkits (like TensorFlow, PyTorch, NumPy, Scikit-Learn, Pandas) is also required. You should have the ability to balance short-term goals and long-term vision in complex environments. Knowledge of open data standards, data taxonomy, vocabularies, and metadata management is essential for this role. A Master's degree is preferred for this position to further enhance your qualifications and capabilities.,
Posted 22 hours ago
3.0 - 7.0 years
0 Lacs
bhopal, madhya pradesh
On-site
As a Data Science and Analytics Trainer at our organization based in Bhopal, you will play a crucial role in imparting knowledge and skills to aspiring data science and analytics professionals. With a minimum of 3 years of experience in the field, you will conduct comprehensive training sessions focusing on various data science tools, techniques, and their practical applications. Your primary objective will be to equip learners with a strong foundation in areas such as data science, machine learning, statistics, and business analytics.

Your responsibilities will include designing and delivering engaging training on Python/R for data science, data wrangling and visualization using tools like Pandas, Matplotlib, and Seaborn, statistical analysis, machine learning concepts (supervised, unsupervised, and NLP), SQL for data querying, and business analytics for decision-making. You will guide learners through hands-on projects, capstone assignments, and case studies, while evaluating their progress through quizzes, assignments, and mock interviews. Additionally, you will contribute to curriculum design, update training materials, conduct doubt-clearing sessions, and provide one-on-one support to learners.

To excel in this role, you should hold a Bachelor's/Master's degree in Data Science, Computer Science, Statistics, Mathematics, or a related field. Proficiency in Python, R, SQL, Excel, and data visualization tools is essential, along with hands-on experience in machine learning libraries such as Scikit-learn, TensorFlow, or Keras. A solid understanding of statistics, probability, and analytical problem-solving is also crucial. Prior experience in teaching, training, or coaching is preferred, coupled with excellent communication and presentation skills to simplify complex technical concepts effectively.

Ideally, you would possess industry certifications like Google Data Analytics, Microsoft Data Scientist Associate, or similar, and have exposure to Big Data tools like Spark and Hadoop, or cloud platforms such as AWS, GCP, or Azure. Knowledge of BI tools like Tableau and Power BI, along with familiarity with the project lifecycle, model deployment, and version control using Git, would be advantageous.

Joining our team will provide you with an opportunity to shape future data scientists and analysts in a collaborative and growth-oriented work environment. You can expect continuous learning and development opportunities, flexible work arrangements, and competitive remuneration as you stay abreast of the latest tools, trends, and technologies in data science and analytics and contribute to the professional development of our learners.
Posted 22 hours ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
You will be joining the Model Risk Governance and Review Group (MRGR) in Bengaluru, where you will engage in new model validation activities for all Data Science models related to artificial intelligence and machine learning across the Corporate and Investment Banking (CIB) sector. Your primary responsibilities will include evaluating the conceptual soundness of model specifications, assessing the reliability of inputs, conducting independent testing, and providing oversight on the usage and limitations of models.

Your role will involve collaborating with Model Developers, Model Users, Risk, and Finance professionals to ensure that models are used appropriately within the business context. You will also be responsible for performing additional model review activities, maintaining the model risk control apparatus, and staying updated on the latest developments in the field of Data Science.

To be successful in this position, you should hold a Ph.D. or Master's degree in a Data Science-oriented field such as Data Science, Computer Science, or Statistics. Additionally, you should have 2-4+ years of prior experience in areas such as Data Science, Quantitative Model Development, Model Validation, or Technology focused on Data Science, with hands-on experience in building and testing machine learning models. A strong understanding of Machine Learning and Data Science theory, techniques, and tools is essential, including knowledge of Transformers, Large Language Models, NLP, GANs, Deep Learning, OCR, XGBoost, and Reinforcement Learning. Proficiency in Python programming and experience with machine learning libraries such as NumPy, SciPy, Scikit-learn, Theano, TensorFlow, Keras, PyTorch, and Pandas are required.

Excellent writing and communication skills are crucial for this role, as you will be expected to write scientific texts, present logical reasoning clearly, and effectively interface with various functional areas within the organization on model-related issues. A risk and control mindset, with the ability to ask incisive questions, assess materiality, and escalate issues when necessary, is also highly valued.
Posted 22 hours ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
Do you have a curious mind, want to be involved in the latest technology trends, and like to solve problems that have a meaningful benefit to hundreds of users across the bank? Join our Tech Services - Group Chief Technology Office team and become a core contributor to the execution of the bank's global AI Strategy, in particular by helping the bank deploy AI models quickly and efficiently!

We are looking for an experienced Data Engineer or ML Engineer to drive the delivery of an innovative ecosystem of tools and services. In this AI-focused role, you will contribute to the development of an SDK for Data Producers across the firm to build high-quality autonomous Data Products for cross-divisional consumption, and for Data Consumers (e.g. Data Scientists, Quantitative Analysts, Model Developers, Model Validators, and AI agents) to easily discover data, access data, and build AI use-cases.

Responsibilities include:
- Direct interaction with product owners and internal users to identify requirements, development of technical solutions, and execution.
- Developing an SDK (Software Development Kit) to automatically capture Data Product, Dataset, and AI/ML model metadata, and leveraging LLMs to generate descriptive information about assets.
- Integration and publication of metadata into UBS's AI Use-case inventory, model artifact registry, and Enterprise Data Mesh data product and dataset catalogue for discovery and regulatory compliance purposes.
- Design and implementation of services that seamlessly collect runtime evidence and operational information about a data product or model and publish it to appropriate visualization tools.
- Creation of a collection of starters/templates that accelerate the creation of new data products by leveraging the latest tools and services and providing diverse and rich experiences to the Devpod ecosystem.
- Design and implementation of data contracts and fine-grained access mechanisms to enable data consumption on a "need to know" basis.

You will be part of the Data Product Framework team, a newly established function within the Group Chief Technology Office. We provide solutions to help the firm embrace Artificial Intelligence and Machine Learning, working with the divisions and functions of the firm to deliver innovative solutions that integrate with their existing platforms and provide new and enhanced capabilities. One of our current aims is to help a data scientist get a model into production in an accelerated timeframe with the appropriate controls and security. We offer a number of key capabilities: data discovery that uses AI/ML to help users find data and obtain access in a secure and controlled manner, an AI Inventory that describes the models that have been built to help users build their own use cases and validate them with Model Risk Management, a containerized model development environment for users to experiment and produce their models, and a streamlined MLOps process that helps them track their experiments and promote their models.

Requirements include:
- A Ph.D. or Master's degree in Computer Science or a related advanced quantitative discipline.
- 5+ years of industry experience with Python/Pandas, SQL/Spark, Azure fundamentals/Kubernetes, and GitLab.
- Additional experience with data engineering frameworks (Databricks/Kedro/Flyte), ML frameworks (MLflow/DVC), and agentic frameworks (LangChain, LangGraph, CrewAI) is a plus.
- Ability to produce secure and clean code that is stable, scalable, operational, and well-performing.
- Being up to date with the latest IT standards (security, best practices); an understanding of security principles in banking systems is a plus.
- Ability to work independently and manage individual project priorities, deadlines, and deliverables.
- Willingness to quickly learn and adopt various technologies.
- Excellent English written and verbal communication skills.

UBS is the world's largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management, and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.

UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills, and experiences within our workforce.
Posted 23 hours ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
NTT DATA is looking for a Senior Python Engineer to join the C3 Data Warehouse team in Bangalore, Karnataka, India. In this role, you will be responsible for contributing to the development of a unified data pipeline framework using Python, Airflow, DBT, Spark, and Snowflake. You will work closely with various teams to implement the data platform and pipeline framework.

Key Responsibilities:
- Develop components in Python for the unified data pipeline framework.
- Establish best practices for efficient usage of Snowflake.
- Test and deploy the data pipeline framework using standard testing frameworks and CI/CD tooling.
- Monitor query performance and data loads, tuning as necessary.
- Provide assistance during QA & UAT phases to resolve potential issues effectively.

Minimum Skills Required:
- 5+ years of experience in data development in complex environments with large data volumes.
- 5+ years of experience developing data pipelines and warehousing solutions with Python, Pandas, NumPy, PySpark, etc.
- 3+ years of experience in hybrid data environments (on-prem and cloud).
- Exposure to Power BI and Snowflake.

Join NTT DATA, a global innovator in business and technology services with a commitment to helping clients innovate, optimize, and transform for long-term success. As a Global Top Employer, we have experts in over 50 countries and a robust partner ecosystem. Our services include consulting, data and AI, industry solutions, and application development. NTT DATA is a part of the NTT Group, investing over $3.6 billion annually in R&D for a confident digital future. Visit us at us.nttdata.com.
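To make "developing components in Python for the pipeline framework" concrete, here is a minimal, hedged Airflow sketch; the DAG id, task name, output path, and transform logic are illustrative assumptions, not the team's actual framework.

```python
# Minimal illustrative Airflow DAG with a Python task; names and paths are hypothetical.
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_transform() -> None:
    # Placeholder transform step; a real pipeline would read from source systems
    # and load into a warehouse such as Snowflake via a connector.
    df = pd.DataFrame({"id": [1, 2], "value": [10, 20]})
    df["value_doubled"] = df["value"] * 2
    df.to_csv("/tmp/staged_output.csv", index=False)

with DAG(
    dag_id="example_unified_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_transform", python_callable=extract_and_transform)
```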
Posted 1 day ago
8.0 - 12.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Senior Python Developer, you will be responsible for designing, developing, and maintaining scalable Python applications and AI-powered solutions. Your primary focus will be collaborating with cross-functional teams to understand requirements and create, train, deploy, and optimize machine learning and AI applications. You will integrate AI/ML models into production environments with robust error handling, logging, and monitoring. In this role, you will implement and enhance data preprocessing pipelines to ensure data quality for training and inference. Your responsibilities will include conducting research to explore new AI/ML technologies, frameworks, and practices to enable teams. You will be expected to write clean, testable, and efficient code following best practices for software development and debug and improve the performance, scalability, and reliability of Python-based applications. To succeed in this position, you must have proficiency in Python and its libraries/frameworks such as TensorFlow, PyTorch, scikit-learn, NumPy, and Flask/Django. Hands-on experience with ML algorithms including supervised and unsupervised learning, reinforcement learning is required. Additionally, knowledge of natural language processing (NLP), computer vision, or deep learning techniques is preferred. Familiarity with AI-driven tools and architectures is a plus. A solid understanding of data structures, preprocessing techniques, and feature engineering is essential for this role. Experience deploying AI/ML models using frameworks like Docker, Kubernetes, or AWS/GCP/Azure cloud services will be beneficial. If you are someone who is passionate about Python development and AI technologies, and possess the required skills and experience, we encourage you to apply for this position and be a part of our dynamic team.,
Posted 1 day ago
3.0 - 7.0 years
0 Lacs
haryana
On-site
As a Senior Frontend Engineer at GobbleCube, you will play a crucial role in both frontend development and backend-for-frontend services. Your primary responsibility will be to create intuitive user interfaces while managing API interactions, data processing, and backend integrations to ensure smooth performance. You will be tasked with developing and optimizing high-performance, responsive front-end applications. Additionally, you will implement and manage a Backend-for-Frontend (BFF-Python App) layer to efficiently handle API interactions. Proficiency in React.js, state management libraries such as Redux, Recoil, and Zustand, as well as modern frontend tooling, will be essential for this role. Furthermore, you will integrate and optimize backend services for efficient data retrieval, write Python scripts and Pandas workflows for data transformations, and process SQL queries for fast and efficient data visualization on the front end. Ensuring code quality, security, and maintainability through reviews and best practices will also be part of your responsibilities. The ideal candidate for this position should have a strong product mindset with a focus on user experience and analytics-driven design. You should possess at least 3 years of front-end development experience and proficiency in React.js, JavaScript/TypeScript, HTML, and CSS. Additionally, experience with state management libraries, Python, Pandas, SQL, API integration, RESTful services, and GraphQL will be beneficial. Joining GobbleCube will offer you the opportunity to work on high-impact, data-driven applications that are used by businesses, allowing you to take ownership of features that drive innovation in the frontend space. You will be part of a fast-moving team that emphasizes agile development, speed, and innovation, where your contributions will directly impact the product. As a key player in the team, you will have the chance to own major components of the platform and shape the direction of the frontend strategy. Moreover, you will enjoy a competitive salary, ESOPs, and opportunities to grow your stake in the company.,
Posted 1 day ago
0.0 - 4.0 years
0 Lacs
maharashtra
On-site
As a candidate for this full-time position based in Mahape, Navi Mumbai, you are expected to have a certain level of technical proficiency and familiarity with various programming languages and web development concepts. While freshers are not required to be experts, it is essential to showcase hands-on experience in the following areas. Firstly, you should have a basic exposure to programming languages such as Node.js, Python, Java, Git, and SQL/NoSQL databases like Postgres and MongoDB. Additionally, a good understanding of web development fundamentals including HTML5, CSS3, JavaScript, and familiarity with React.js or Angular is preferred. Moreover, you are expected to be acquainted with RESTful APIs, exposure to AI/ML concepts, and experience with Python ML libraries like NumPy, Pandas, Scikit-learn, FastAPI, and Langchain. Knowledge of Machine Learning principles, supervised vs unsupervised learning, and basic concepts of LLMs or Prompt Engineering are also desirable. Familiarity with RAG (Retrieval-Augmented Generation) concepts, as well as having worked on AI/ML projects, either academically, in hackathons, or on personal GitHub repositories, will be advantageous. Awareness of cloud platforms such as AWS, Azure, or GCP is a plus. In terms of educational background, candidates with a degree in B.E., B.Tech, B.Sc, or MCA in Computer Science or related fields are preferred. Recent graduates from the 2023-2025 batches are encouraged to apply, with a strong emphasis on academic projects, open-source contributions, or participation in coding competitions. If you meet these requirements and are interested in this opportunity, please share your resume with tanaya.ganguli@arrkgroup.com. This is a full-time position, and the job posting was made on 05/08/2025.,
Posted 1 day ago
4.0 - 8.0 years
0 Lacs
hyderabad, telangana
On-site
This position requires a minimum of 4 years of experience with Robot Framework and Python. Your responsibilities will include working with libraries such as pandas, openpyxl, numpy, boto3, json, pypyodbc, and sqlite3, and you will need hands-on experience in building keywords and test cases using Robot Framework. Additionally, you should be familiar with DevOps activities, pipeline design, and architecture.

Your role will involve implementing data transformation techniques such as filtering, aggregation, enrichment, and normalization. You will also be responsible for configuring and deploying pipelines, managing failures, and tracking pipeline execution through logging practices.

Experience with GitHub activities and version control concepts is essential: you must have a good understanding of Git commands, branching and merging strategies, remote repositories (e.g., GitHub, GitLab, Bitbucket), and troubleshooting techniques. Moreover, solid knowledge of AWS, specifically AWS S3 file upload and download functionality, is required for this position.
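As a rough illustration of how a custom Robot Framework keyword library in Python might wrap the pandas and boto3 functionality mentioned above, here is a minimal sketch; the keyword names, column arguments, and bucket/key values are hypothetical.

```python
# Illustrative Robot Framework keyword library; names and paths are hypothetical.
import boto3
import pandas as pd
from robot.api.deco import keyword


class DataPipelineKeywords:
    """Custom keywords callable from Robot Framework test cases."""

    @keyword("Aggregate CSV By Column")
    def aggregate_csv_by_column(self, csv_path: str, group_col: str, value_col: str) -> str:
        # Filter out missing rows, then aggregate and write the result next to the input.
        df = pd.read_csv(csv_path).dropna(subset=[group_col, value_col])
        out_path = csv_path.replace(".csv", "_aggregated.csv")
        df.groupby(group_col, as_index=False)[value_col].sum().to_csv(out_path, index=False)
        return out_path

    @keyword("Upload File To S3")
    def upload_file_to_s3(self, local_path: str, bucket: str, key: str) -> None:
        # boto3 resolves credentials from the environment or instance profile.
        boto3.client("s3").upload_file(local_path, bucket, key)
```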
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
haryana
On-site
As an AI/ML Engineer, you will be responsible for designing and implementing end-to-end machine learning solutions in areas such as predictive analytics, anomaly detection, computer vision, and natural language processing. Your role will involve employing advanced methodologies like Deep Learning, Reinforcement Learning, and Generative Models to optimize model performance through techniques such as hyperparameter tuning. You will need to balance model accuracy with real-time inference requirements for production readiness. In addition, you will be involved in developing scalable ETL pipelines using tools like Pandas, NumPy, and PySpark to process large-scale and streaming data from various sources. Data preprocessing tasks such as cleansing, transformation, and feature engineering will also be part of your responsibilities to ensure model-ready datasets. Staying updated with the latest ML techniques and experimenting with algorithms like Transfer Learning and AutoML will be crucial for your role. You will also be expected to rapidly prototype and test proof-of-concept models to validate innovative approaches. Deployment of ML models using Docker and Kubernetes in cloud environments, setting up CI/CD pipelines for streamlined model updates, and implementing monitoring and alerting systems for performance tracking will be essential tasks. Collaborating closely with cross-functional teams, mentoring junior members, and enforcing coding standards will also be part of your responsibilities. Your tech stack will include Python, PyTorch, Hugging Face Transformers, scikit-learn, Apache Kafka, Spark, PostgreSQL, Docker, Kubernetes, GitLab CI/CD, Prometheus, Grafana, OpenCV, FFmpeg, and Tesseract OCR, among others. You should have at least 5 years of experience in ML/AI, strong Python skills, proficiency with ML frameworks, experience with NLP, data engineering expertise, deployment experience, cloud platform knowledge, and a research-oriented mindset. Soft skills such as analytical thinking, problem-solving, communication, and the ability to work in agile, cross-functional teams will be essential. A Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, or a related field is required for this position.,
Posted 2 days ago
10.0 - 14.0 years
0 Lacs
thiruvananthapuram, kerala
On-site
As a Python Solution Architect with over 10 years of experience, you will play a crucial role in designing and implementing scalable, high-performance software solutions that align with business requirements. Your expertise in Python frameworks (e.g., Django, Flask, FastAPI) will be instrumental in architecting efficient applications and microservices architectures. Your responsibilities will include collaborating with cross-functional teams to define architecture, best practices, and oversee the development process. You will be tasked with ensuring that Python solutions meet business goals, align with enterprise architecture, and adhere to security best practices (e.g., OWASP, cryptography). Additionally, your role will involve designing and managing RESTful APIs, optimizing database interactions, and integrating Python solutions seamlessly with third-party services and external systems. Your proficiency in cloud environments (AWS, GCP, Azure) will be essential for architecting solutions and implementing CI/CD pipelines for Python projects. You will provide guidance to Python developers on architectural decisions, design patterns, and code quality, while also mentoring teams on best practices for writing clean, maintainable, and efficient code. Preferred skills for this role include deep knowledge of Python frameworks, proficiency in asynchronous programming, experience with microservices-based architectures, and familiarity with containerization technologies like Docker and orchestration tools like Kubernetes. Your understanding of relational and NoSQL databases, RESTful APIs, cloud services, CI/CD pipelines, and Infrastructure-as-Code tools will be crucial for success in this position. In addition, your experience with security tools and practices, encryption, authentication, data protection standards, and working in Agile environments will be valuable assets. Your ability to communicate complex technical concepts to non-technical stakeholders and ensure solutions address both functional and non-functional requirements will be key to delivering successful projects.,
Posted 5 days ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
As a Data Analyst specializing in Power BI and Python, you will be an integral part of our dynamic data analytics team in Bangalore. With 2-4 years of experience, your role will involve analyzing complex data sets, creating interactive visualizations, and generating actionable insights to support data-driven decision-making. Your responsibilities will include analyzing data to uncover trends and patterns, utilizing Python for data cleaning and advanced analysis, and developing and maintaining Power BI dashboards to visualize key performance indicators (KPIs) and metrics. You will collaborate with business units to understand their data requirements and deliver tailored data solutions, ensuring data accuracy and integrity through regular quality checks. In addition to your technical skills in Power BI, Python, SQL, and database management, you will need to have strong analytical and problem-solving abilities. Effective communication and teamwork skills are essential as you work closely with cross-functional teams to provide data-driven solutions. Continuous improvement and staying updated on the latest trends in data analytics and visualization will be key to your success in this role. To qualify for this position, you should have a Bachelor's degree in Data Science, Computer Science, Statistics, or a related field, along with 2-4 years of relevant experience. Certifications in data analytics are a plus. Your proven track record of working with large data sets and your ability to manage multiple tasks in a fast-paced environment will be highly valued. If you are detail-oriented, proactive, and passionate about leveraging data to drive business outcomes, we invite you to join our team and contribute to the development of data-driven strategies that will shape our future success.,
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
The Retail Specialized Data Scientist will play a pivotal role in utilizing advanced analytics, machine learning, and statistical modeling techniques to help the retail business make data-driven decisions. You will work closely with teams across marketing, product management, supply chain, and customer insights to drive business strategies and innovations. The ideal candidate should have experience in retail analytics and the ability to translate data into actionable insights.

Key Responsibilities:
- Leverage Retail Knowledge: Utilize your deep understanding of the retail industry (merchandising, customer behavior, product lifecycle) to design AI solutions that address critical retail business needs.
- Gather and clean data from various retail sources, such as sales transactions, customer interactions, inventory management, website traffic, and marketing campaigns.
- Apply machine learning algorithms, such as classification, clustering, regression, and deep learning, to enhance predictive models.
- Use AI-driven techniques for personalization, demand forecasting, and fraud detection.
- Utilize advanced statistical methods to optimize existing use cases and build new products to serve new challenges and use cases.
- Stay updated on the latest trends in data science and retail technology.
- Collaborate with executives, product managers, and marketing teams to translate insights into business actions.

Professional & Technical Skills:
- Strong analytical and statistical skills.
- Expertise in machine learning and AI.
- Experience with retail-specific datasets and KPIs.
- Proficiency in data visualization and reporting tools.
- Ability to work with large datasets and complex data structures.
- Strong communication skills to interact with both technical and non-technical stakeholders.
- A solid understanding of the retail business and consumer behavior.
- Programming Languages: Python, R, SQL, Scala
- Data Analysis Tools: Pandas, NumPy, Scikit-learn, TensorFlow, Keras
- Visualization Tools: Tableau, Power BI, Matplotlib, Seaborn
- Big Data Technologies: Hadoop, Spark, AWS, Google Cloud
- Databases: SQL, NoSQL (MongoDB, Cassandra)

Additional Information:
- Job Title: Retail Specialized Data Scientist
- Management Level: 09 - Consultant
- Location: Bangalore / Gurgaon / Mumbai / Chennai / Pune / Hyderabad / Kolkata
- Company: Accenture

This position requires a solid understanding of retail industry dynamics, strong communication skills, proficiency in Python for data manipulation, statistical analysis, and machine learning, as well as familiarity with big data processing platforms and ETL processes. The Retail Specialized Data Scientist will be responsible for gathering, cleaning, and analyzing data to provide valuable insights for business decision-making and optimization of pricing strategies based on market demand and customer behavior.
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a Test Engineer, you will be responsible for supporting the AI & Data Science teams in testing their AI/ML flows. You will analyze system specifications, develop detailed test plans and test cases, execute test cases, and identify defects. Your role includes documenting and reporting defects to the development team, collaborating with them to resolve issues, and ensuring that the software meets quality standards and best practices. Participation in review meetings to provide feedback is also part of your responsibilities. Excellent knowledge of SDLC and STLC, along with expertise in Agile methodology, is essential.

Your technical skills should include a strong understanding of testing frameworks and automation concepts, as well as proficiency in Pandas, Python, Pytest, SQL, SparkSQL, PySpark, and testing LLMs such as GPT, LLaMA, and Gemma. Additionally, good database skills in any relational DB, hands-on experience with the Databricks platform, and the ability to comprehend models and write Python scripts to test data inflow and outflow are required. You should also be proficient in programming and query languages, with knowledge of cloud platforms, preferably Azure fundamentals and Azure analytics services. Writing test scripts from designs and expertise in Jira, Excel, and Confluence are also important technical skills.

In terms of soft skills, excellent verbal and written communication skills in English are necessary. You should be able to work both independently and as part of a team, demonstrating strong project leadership and communication skills, including customer-facing interactions. It would be beneficial to have skills in API testing, test automation, Azure AI services, and any vector DB/graph DB. Familiarity with ML/NLP algorithms, entity mining and clustering, and sentiment analysis is also considered advantageous for this role.
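For a sense of what Pytest-based checks on data flowing out of an AI/ML pipeline could look like, here is a minimal sketch; the `load_scored_output` helper and the expected columns are hypothetical assumptions.

```python
# Illustrative pytest checks on a pandas DataFrame produced by an ML flow.
# The loader function and column names are hypothetical.
import pandas as pd
import pytest


def load_scored_output() -> pd.DataFrame:
    # Stand-in for reading the pipeline's real output (e.g. from Databricks or S3).
    return pd.DataFrame({"record_id": [1, 2, 3], "score": [0.12, 0.87, 0.55]})


@pytest.fixture
def scored() -> pd.DataFrame:
    return load_scored_output()


def test_required_columns_present(scored: pd.DataFrame) -> None:
    assert {"record_id", "score"}.issubset(scored.columns)


def test_scores_within_valid_range(scored: pd.DataFrame) -> None:
    assert scored["score"].between(0.0, 1.0).all()


def test_no_duplicate_records(scored: pd.DataFrame) -> None:
    assert not scored["record_id"].duplicated().any()
```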
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
maharashtra
On-site
As an AI Data Scientist at Johnson Controls located in Pune, India, you will be an integral part of the Data Strategy & Intelligence team. You will be responsible for developing and deploying machine learning/Generative AI and time series analysis models in production to contribute to optimized energy utilization, auto-generation of building insights, and predictive maintenance for installed devices. To excel in this role, you must possess a deep understanding of machine learning concepts, Large Language Models (LLM), time series models, and have hands-on experience in developing and deploying ML/Generative AI/time series models in a production environment. Your primary responsibilities will include developing and maintaining AI algorithms and capabilities within digital products at Johnson Controls. By leveraging data from commercial buildings and applying advanced algorithms, you will optimize building energy consumption, reduce CO2 emissions, enhance user comfort, and generate actionable insights to improve building operations. Additionally, you will translate data into recommendations for various stakeholders, ensuring that AI solutions deliver robust and repeatable outcomes through well-designed algorithms and software. Collaboration is key in this role, as you will work closely with product managers to design new AI capabilities, explore and analyze datasets, write Python code to develop ML/Generative AI/time series prediction solutions, implement state-of-the-art techniques in Generative AI, and pre-train and finetune ML models over CPU/GPU clusters. You will also uphold code-quality standards, develop test cases to validate algorithm correctness, and communicate key results to stakeholders effectively. The ideal candidate for this position should hold a Bachelor's/Master's degree in Computer Science, Statistics, Mathematics, or a related field, along with at least 5 years of experience in developing and deploying ML models. Proficiency in Python and standard ML libraries such as PyTorch, Tensorflow, and scikit-learn is essential. Moreover, a strong understanding of ML algorithms and techniques, experience in working with cloud-based ML/GenAI model development/deployment, and excellent communication skills are prerequisites for success in this role. Preferred qualifications include prior domain experience in smart buildings and building operations optimization, as well as experience working with Microsoft Azure Cloud.,
Posted 5 days ago
The job market for pandas professionals in India is on the rise as more companies are recognizing the importance of data analysis and manipulation in making informed business decisions. Pandas, a popular Python library for data manipulation and analysis, is a valuable skill sought after by many organizations across various industries in India.
Here are 5 major cities in India actively hiring for pandas roles:
1. Bangalore
2. Mumbai
3. Delhi
4. Hyderabad
5. Pune
The average salary range for pandas professionals in India varies based on experience levels. Entry-level positions can expect a salary ranging from ₹4-6 lakhs per annum, while experienced professionals can earn upwards of ₹12-18 lakhs per annum.
Career progression in the pandas domain typically involves moving from roles such as Junior Data Analyst or Data Scientist to Senior Data Analyst, Data Scientist, and eventually to roles like Tech Lead or Data Science Manager.
In addition to pandas, professionals in this field are often expected to have knowledge or experience in the following areas:
- Python programming
- Data visualization tools like Matplotlib or Seaborn
- Statistical analysis
- Machine learning algorithms
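As a small illustration of how pandas typically pairs with one of these complementary skills (visualization with Matplotlib), consider the following sketch; the dataset is invented purely for demonstration.

```python
# Tiny example of pandas feeding a Matplotlib chart; the data is made up.
import matplotlib.pyplot as plt
import pandas as pd

sales = pd.DataFrame(
    {"city": ["Bangalore", "Mumbai", "Delhi", "Bangalore", "Mumbai"],
     "revenue": [120, 90, 75, 140, 110]}
)

# Aggregate with pandas, then plot the summary.
by_city = sales.groupby("city")["revenue"].sum().sort_values()
by_city.plot(kind="barh", title="Revenue by city")
plt.xlabel("Revenue")
plt.tight_layout()
plt.show()
```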
Here are 25 interview questions for pandas roles:
- What is pandas in Python? (basic)
- Explain the difference between Series and DataFrame in pandas. (basic)
- How do you handle missing data in pandas? (basic)
- What are the different ways to create a DataFrame in pandas? (medium)
- Explain groupby() in pandas with an example. (medium)
- What is the purpose of pivot_table() in pandas? (medium)
- How do you merge two DataFrames in pandas? (medium)
- What is the significance of the inplace parameter in pandas functions? (medium)
- What are the advantages of using pandas over Excel for data analysis? (advanced)
- Explain the apply() function in pandas with an example. (advanced)
- How do you optimize performance in pandas operations for large datasets? (advanced)
- What is method chaining in pandas? (advanced)
- Explain the working of the cut() function in pandas. (medium)
- How do you handle duplicate values in a DataFrame using pandas? (medium)
- What is the purpose of the nunique() function in pandas? (medium)
- How can you handle time series data in pandas? (advanced)
- Explain the concept of multi-indexing in pandas. (advanced)
- How do you filter rows in a DataFrame based on a condition in pandas? (medium)
- What is the role of the read_csv() function in pandas? (basic)
- How can you export a DataFrame to a CSV file using pandas? (basic)
- What is the purpose of the describe() function in pandas? (basic)
- How do you handle categorical data in pandas? (medium)
- Explain the role of the loc and iloc functions in pandas. (medium)
- How do you perform text data analysis using pandas? (advanced)
- What is the significance of the to_datetime() function in pandas? (medium)
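To help with practice, here is a short sketch illustrating a few of the operations the questions above ask about (handling missing data, groupby(), and merging two DataFrames); the example data is invented.

```python
# Quick practice snippet covering a few of the interview topics above.
import pandas as pd

orders = pd.DataFrame(
    {"order_id": [1, 2, 3, 4],
     "customer": ["a", "b", "a", None],
     "amount": [100.0, None, 250.0, 80.0]}
)
customers = pd.DataFrame({"customer": ["a", "b"], "segment": ["retail", "enterprise"]})

# Handling missing data: fill missing amounts, drop rows with no customer.
clean = orders.assign(amount=orders["amount"].fillna(0)).dropna(subset=["customer"])

# groupby(): total amount per customer.
totals = clean.groupby("customer", as_index=False)["amount"].sum()

# Merging two DataFrames: attach the customer segment.
print(totals.merge(customers, on="customer", how="left"))
```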
As you explore pandas jobs in India, remember to enhance your skills, stay updated with industry trends, and practice answering interview questions to increase your chances of securing a rewarding career in data analysis. Best of luck on your job search journey!