652 Pandas Jobs

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

3.0 - 8.0 years

6 - 12 Lacs

Mumbai

Work from Office

Shift: (GMT+05:30) Asia/Kolkata (IST). What do you need for this opportunity? Must-have skills: Machine Learning, NumPy, Data Cleaning, Python, Model Evaluation, pandas, Statistics.

We are seeking a talented Data Scientist II to join our team. The ideal candidate will have 2-5 years of experience in data science and possess expertise in machine learning, deep learning, Python programming, SQL, Amazon Redshift, NLP, and AWS Cloud.

Duties and Responsibilities: Develop and implement machine learning models to extract insights from large datasets. Utilize deep learning techniques to enhance data analysis and predictive modeling. Write efficient Python code to manipulate and analyze data. Work with SQL databases to extract and transform data for analysis. Utilize Amazon Redshift for data warehousing and analytics. Apply NLP techniques to extract valuable information from unstructured data. Utilize AWS Cloud services for data storage, processing, and analysis.

Qualifications and Requirements: Bachelor's degree in Computer Science, Statistics, Mathematics, or a related field. 3-5 years of experience in data science or a related field. Proficiency in machine learning, deep learning, Python programming, SQL, Amazon Redshift, NLP, and AWS Cloud. Strong analytical and problem-solving skills. Excellent communication and teamwork abilities.

Key Competencies: Strong analytical skills. Problem-solving abilities. Proficiency in machine learning and deep learning techniques. Excellent programming skills in Python. Knowledge of SQL and database management. Familiarity with Amazon Redshift, NLP, and AWS Cloud services.

Performance Expectations: Develop and deploy advanced machine learning models. Extract valuable insights from complex datasets. Collaborate with cross-functional teams to drive data-driven decision-making. Stay updated on the latest trends and technologies in data science.

We are looking for a motivated and skilled Data Scientist to join our team and contribute to our data-driven initiatives. If you meet the qualifications and are passionate about data science, we encourage you to apply.

Posted 2 hours ago

Apply

2.0 - 7.0 years

15 - 25 Lacs

Pune

Work from Office

Experience: 2+ years. Expected Notice Period: 30 days. Shift: (GMT+05:30) Asia/Kolkata (IST). Opportunity Type: Office (Pune). Placement Type: Full-Time Permanent position. Must-have skills: Airflow, LLMs, NLP, Statistical Modeling, Predictive Analysis, Forecasting, Python, SQL, MLflow, pandas, Scikit-learn, XGBoost.

As an ML / Data Science Engineer at Anervea, you'll work on designing, training, deploying, and maintaining machine learning models across multiple products. You'll build models that predict clinical trial outcomes, extract insights from structured and unstructured healthcare data, and support real-time scoring for sales or market access use cases. You'll collaborate closely with AI engineers, backend developers, and product owners to translate data into product features that are explainable, reliable, and impactful.

Key Responsibilities: Develop and optimize predictive models using algorithms such as XGBoost, Random Forest, Logistic Regression, and ensemble methods. Engineer features from real-world healthcare data (clinical trials, treatment adoption, medical events, digital behavior). Analyze datasets from sources like ClinicalTrials.gov, PubMed, Komodo, Apollo.io, and internal survey pipelines. Build end-to-end ML pipelines for inference and batch scoring. Collaborate with AI engineers to integrate LLM-generated features with traditional models. Ensure explainability and robustness of models using SHAP, LIME, or custom logic. Validate models against real-world outcomes and client feedback. Prepare clean, structured datasets using SQL and Pandas. Communicate insights clearly to product, business, and domain teams. Document all processes, assumptions, and model outputs thoroughly.

Technical Skills Required: Strong programming skills in Python (NumPy, Pandas, scikit-learn, XGBoost, LightGBM). Experience with statistical modeling and classification algorithms. Solid understanding of feature engineering, model evaluation, and validation techniques. Exposure to real-world healthcare, trial, or patient data (strong bonus). Comfortable working with unstructured data and data cleaning techniques. Knowledge of SQL and NoSQL databases. Familiarity with ML lifecycle tools (MLflow, Airflow, or similar). Bonus: experience working alongside LLMs or incorporating generative features into ML. Bonus: knowledge of NLP preprocessing, embeddings, or vector similarity methods.

Personal Attributes: Strong analytical and problem-solving mindset. Ability to convert abstract questions into measurable models. Attention to detail and high standards for model quality. Willingness to learn life sciences concepts relevant to each use case. Clear communicator who can simplify complexity for product and business teams. Independent learner who actively follows new trends in ML and data science. Reliable, accountable, and driven by outcomes, not just code.

Bonus Qualities: Experience building models for healthcare, pharma, or biotech. Published work or open-source contributions in data science. Strong business intuition on how to turn models into product decisions.

Posted 2 hours ago

Apply

2.0 - 4.0 years

2 - 8 Lacs

Jaipur

Work from Office

Responsibilities: Work on end-to-end API integrations (REST, WebSocket) Implement and optimize data pipelines using Pandas, NumPy Use DSA to solve real-world performance-critical problems Handle database interaction (SQLite, PostgreSQL, or MongoDB)

Posted 2 hours ago

Apply

3.0 - 6.0 years

9 - 19 Lacs

Hyderabad

Hybrid

We're looking for a Python-based AI/ML Developer who brings solid hands-on experience in building machine learning models and deploying them into scalable, production-ready APIs using FastAPI or Django. The ideal candidate is both analytical and implementation-savvy, capable of transforming models into live services and integrating them with real-world systems.

Key Responsibilities: Design, train, and evaluate machine learning models (classification, regression, clustering, etc.). Build and deploy scalable REST APIs for model serving using FastAPI or Django. Collaborate with data scientists, backend developers, and DevOps to integrate models into production systems. Develop clean, modular, and optimized Python code using best practices. Perform data preprocessing, feature engineering, and data visualization using Pandas, NumPy, Matplotlib, and Seaborn. Implement model serialization techniques (Pickle, Joblib, ONNX) and deploy models using containers (Docker). Manage API security with JWT and OAuth mechanisms. Participate in Agile development with code reviews, Git workflows, and CI/CD pipelines.

Must-Have Skills. Python & Development: Proficient in Python 3.x, OOP, and clean code principles; experience with Git, Docker, debugging, and unit testing. AI/ML: Good grasp of supervised/unsupervised learning, model evaluation, and data wrangling; hands-on with Scikit-learn, XGBoost, LightGBM. Web Frameworks: FastAPI (API routes, async programming, Pydantic, JWT); Django (REST Framework, ORM, Admin panel, Middleware). DevOps & Cloud: Experience with containerized deployment using Docker; exposure to cloud platforms (AWS, Azure, or GCP); CI/CD with GitHub Actions, Jenkins, or GitLab CI. Databases: SQL (PostgreSQL, MySQL); NoSQL (MongoDB, Redis); ORM (Django ORM, SQLAlchemy).

Bonus/Nice-to-Have Skills: Model tracking/versioning tools (MLflow, DVC). Knowledge of LLMs, transformers, and vector DBs (Pinecone, Faiss). Airflow, Prefect, or other workflow automation tools. Basic frontend skills (HTML, JavaScript, React).

Requirements. Education: B.E./B.Tech or M.E./M.Tech in Computer Science, Data Science, or related fields. Experience: 3-6 years of industry experience in ML development and backend API integration. Strong communication skills and ability to work with cross-functional teams.

Posted 2 hours ago

Apply

8.0 - 13.0 years

20 - 35 Lacs

Hyderabad

Hybrid

Responsibilities and Duties: Add support for new platforms to our existing products and develop new products. Develop and review designs, code, unit tests, system tests, and documentation. Collaborate in root cause analysis; diagnose, isolate, and fix software problems. Create backend applications primarily using Python. Demonstrate your work product to your team. Identify and correct issues that impact performance, reliability, and scalability. Investigate and develop skills in new technologies.

Characteristics: Extensive knowledge of Python for asynchronous, backend application development. Working knowledge of the software development lifecycle, including agile methodologies, code quality, and continuous integration/continuous delivery. Driven to build modern systems that emphasize user performance and scalability. A team player who sees software quality as your responsibility. Excellent written and verbal communication skills. An eagerness to learn, explore, and introduce new technologies. On-call shifts may be required.

Education & Experience: 8+ years of work experience in software engineering, with considerable experience programming in Python (or a similar object-oriented language) and a focus on asynchronous programming. Experience with API development, and ideally data ingestion. Prior work on distributed systems and knowledge of event-driven architecture is a big plus and will be very helpful in your day-to-day work. Experience with Docker and Jenkins (or a similar CI toolset). Dedication to contributing unit tests and other testware with product code. Experience consuming RESTful interfaces and implementing security best practices. Familiarity with NoSQL databases and ElasticSearch/OpenSearch, and knowledge of cloud computing platforms, is a plus.

Posted 3 hours ago

Apply

2.0 - 6.0 years

2 - 7 Lacs

Coimbatore

Work from Office

Responsibilities: Design, develop, test, and maintain Python applications using Django/Flask frameworks on the AWS cloud platform. Collaborate with cross-functional teams to deliver high-quality software solutions. Must know REST APIs and GitHub/Bitbucket. Benefits: health insurance, provident fund, food allowance.

Posted 4 hours ago

Apply

7.0 - 12.0 years

25 - 40 Lacs

Gurugram

Remote

Job Title: Senior Data Engineer. Location: Remote. Job Type: Full-time. YoE: 7 to 10 years of relevant experience. Shift: 6.30pm to 2.30am IST.

Job Purpose: The Senior Data Engineer designs, builds, and maintains scalable data pipelines and architectures to support the Denials AI workflow under the guidance of the Team Lead, Data Management. This role ensures data is reliable, compliant with HIPAA, and optimized.

Duties & Responsibilities: Collaborate with the Team Lead and cross-functional teams to gather and refine data requirements for Denials AI solutions. Design, implement, and optimize ETL/ELT pipelines using Python, Dagster, DBT, and AWS data services (Athena, Glue, SQS). Develop and maintain data models in PostgreSQL; write efficient SQL for querying and performance tuning. Monitor pipeline health and performance; troubleshoot data incidents and implement preventive measures. Enforce data quality and governance standards, including HIPAA compliance for PHI handling. Conduct code reviews, share best practices, and mentor junior data engineers. Automate deployment and monitoring tasks using infrastructure-as-code and AWS CloudWatch metrics and alarms. Document data workflows, schemas, and operational runbooks to support team knowledge transfer.

Qualifications: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. 5+ years of hands-on experience building and operating production-grade data pipelines. Solid experience with workflow orchestration tools (Dagster) and transformation frameworks (DBT), or other similar tools (Microsoft SSIS, AWS Glue, Airflow). Strong SQL skills on PostgreSQL for data modeling and query optimization, or other similar technologies (Microsoft SQL Server, Oracle, AWS RDS). Working knowledge of AWS data services: Athena, Glue, SQS, SNS, IAM, and CloudWatch. Basic proficiency in Python and Python data frameworks (Pandas, PySpark). Experience with version control (GitHub) and CI/CD for data projects. Familiarity with healthcare data standards and HIPAA compliance. Excellent problem-solving skills, attention to detail, and ability to work independently. Strong communication skills, with experience mentoring or leading small technical efforts.

Posted 5 hours ago

Apply

5.0 - 8.0 years

18 - 22 Lacs

Bengaluru

Work from Office

Hiring a Python Developer with 5+ years of experience and proven expertise in the BFSI sector. Must have strong skills in Python, Django/Flask, SQL, and APIs. Experience with data pipelines, ETL tools, and cloud is a plus. BFSI background is a must. Required Candidate profile Python developer with strong BFSI experience, proficient in Pandas, NumPy, SQLAlchemy, and Spark. Skilled in Django/Flask, SQL/NoSQL databases, ETL tools, APIs, and containerized environments.

Posted 5 hours ago

Apply

7.0 - 12.0 years

15 - 25 Lacs

Thiruvananthapuram

Work from Office

Key Responsibilities
1. Designing technology systems: Plan and design the structure of technology solutions, and work with design and development teams to assist with the process.
2. Communicating: Communicate system requirements to software development teams, and explain plans to developers and designers. Also communicate the value of a solution to stakeholders and clients.
3. Managing Stakeholders: Work with clients and stakeholders to understand their vision for the systems. Also manage stakeholder expectations.
4. Architectural Oversight: Develop and implement robust architectures for AI/ML and data science solutions, ensuring scalability, security, and performance. Oversee architecture for data-driven web applications and data science projects, providing guidance on best practices in data processing, model deployment, and end-to-end workflows.
5. Problem Solving: Identify and troubleshoot technical problems in existing or new systems. Assist with solving technical problems when they arise.
6. Ensuring Quality: Ensure systems meet security and quality standards. Monitor systems to ensure they meet both user needs and business goals.
7. Project Management: Break down project requirements into manageable pieces of work, and organise the workloads of technical teams.
8. Tool & Framework Expertise: Utilise relevant tools and technologies, including but not limited to LLMs, TensorFlow, PyTorch, Apache Spark, cloud platforms (AWS, Azure, GCP), web app development frameworks, and DevOps practices.
9. Continuous Improvement: Stay current on emerging technologies and methods in AI, ML, data science, and web applications, bringing insights back to the team to foster continuous improvement.

Technical Skills
1. Proficiency in AI/ML frameworks such as TensorFlow, PyTorch, Keras, and scikit-learn for developing machine learning and deep learning models.
2. Knowledge or experience working with self-hosted or managed LLMs.
3. Knowledge or experience with NLP tools and libraries (e.g., SpaCy, NLTK, Hugging Face Transformers) and familiarity with Computer Vision frameworks like OpenCV and related libraries for image processing and object recognition.
4. Experience or knowledge in back-end frameworks (e.g., Django, Spring Boot, Node.js, Express, etc.) and building RESTful and GraphQL APIs.
5. Familiarity with microservices, serverless, and event-driven architectures. Strong understanding of design patterns (e.g., Factory, Singleton, Observer) to ensure code scalability and reusability.
6. Proficiency in modern front-end frameworks such as React, Angular, or Vue.js, with an understanding of responsive design, UX/UI principles, and state management (e.g., Redux).
7. In-depth knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra), as well as caching solutions (e.g., Redis, Memcached).
8. Expertise in tools such as Apache Spark, Hadoop, Pandas, and Dask for large-scale data processing.
9. Understanding of data warehouses and ETL tools (e.g., Snowflake, BigQuery, Redshift, Airflow) to manage large datasets.
10. Familiarity with visualisation tools (e.g., Tableau, Power BI, Plotly) for building dashboards and conveying insights.
11. Knowledge of deploying models with TensorFlow Serving, Flask, FastAPI, or cloud-native services (e.g., AWS SageMaker, Google AI Platform).
12. Familiarity with MLOps tools and practices for versioning, monitoring, and scaling models (e.g., MLflow, Kubeflow, TFX).
13. Knowledge or experience in CI/CD, IaC, and Cloud Native toolchains.
14. Understanding of security principles, including firewalls, VPC, IAM, and TLS/SSL for secure communication.
15. Knowledge of API Gateway, service mesh (e.g., Istio), and NGINX for API security, rate limiting, and traffic management.

Posted 6 hours ago

Apply

5.0 - 8.0 years

7 - 17 Lacs

Chennai

Work from Office

Key Responsibilities: Architect and implement end-to-end data solutions using Azure services (Data Factory, Databricks, Data Lake, Synapse, Cosmos DB, etc.). Design robust and scalable data models, including relational, dimensional, and NoSQL schemas. Develop and optimize ETL/ELT pipelines and data lakes using Azure Data Factory, Databricks, and open formats such as Delta and Iceberg. Integrate data governance, quality, and security best practices into all architecture designs. Support analytics and machine learning initiatives through structured data pipelines and platforms. Perform data manipulation and analysis using Pandas, NumPy, and related Python libraries. Develop and maintain high-performance REST APIs using FastAPI or Flask. Ensure data integrity, quality, and availability across various sources. Integrate data workflows with application components to support real-time or scheduled processes. Collaborate with data engineers, analysts, data scientists, and business stakeholders to align solutions with business needs. Drive CI/CD integration with Databricks using Azure DevOps and tools like DBT. Monitor system performance, troubleshoot issues, and optimize data infrastructure for efficiency and reliability. Communicate technical concepts effectively to non-technical stakeholders.

Required Skills & Experience: Extensive hands-on experience with Azure services: Data Factory, Databricks, Data Lake, Azure SQL, Cosmos DB, Synapse. Expertise in data modeling and design (relational, dimensional, NoSQL). Proven experience with ETL/ELT processes, data lakes, and modern lakehouse architectures. Solid experience with SQL Server or any major RDBMS; ability to write complex queries and stored procedures. 3+ years of experience with Azure Data Factory, Azure Databricks, and PySpark. Strong programming skills in Python, with a solid understanding of Pandas and NumPy. Proven experience in building REST APIs. Good knowledge of data formats (JSON, Parquet, Avro) and API communication patterns. Strong knowledge of data governance, security, and compliance frameworks. Experience with CI/CD, Azure DevOps, and infrastructure as code (Terraform or ARM templates). Familiarity with BI and analytics tools such as Power BI or Tableau. Strong problem-solving skills and attention to performance, scalability, and security. Excellent communication skills, both written and verbal.

Preferred Qualifications: Experience in regulated industries (finance, healthcare, etc.). Familiarity with data cataloging, metadata management, and machine learning integration. Leadership experience guiding teams and presenting architectural strategies to leadership.

Posted 6 hours ago

Apply

2.0 - 3.0 years

2 - 6 Lacs

Chennai

Work from Office

Required Skills: Strong verbal and written communication skills in English Strong analytical mindset and problem-solving abilities Ability to monitor and audit data quality and critique content. Experience creating data dashboards, graphs and visualizations Ability to manipulate, analyse and interpret complex datasets relating to focus markets. Ability to handle a variety of ongoing projects and triage incoming tasks 3+ years experience of working with SQL - PLSQL with advanced knowledge 2+ years of experience in Python , data manipulation using Pandas. 3+ Years experience working in a BI or Data Analytics field 3+ years experience of visualising data with at least one of the following BI tools - AWS Quicksight, Power BI, Tableau. In addition, the following experience would be desirable: Knowledge of database management systems, including some data engineering. Knowledge of statistics and statistical analysis. Knowledge of ETL and workflow orchestration using Apache Airflow (DAGs) Knowledge of data modelling tools like DBT. Responsibilities : The job roles and responsibilities are, Present data and dashboards that bring value to the business, always in a format that is appropriate for the intended audience. Utilize strong database skills working with large, complex datasets. Identify patterns and trends in data, working alongside teams within operations, the wider business and the senior management team to provide insight. Filter and cleanse unstructured (or ambiguous) data into usable datasets that can be analysed to extract insights and improve business processes

Posted 6 hours ago

Apply

0.0 - 3.0 years

9 - 12 Lacs

Hyderabad

Work from Office

As an AI Engineering Intern , you will work closely with our AI research and development team to build, test, and deploy machine learning models and AI solutions. You will gain hands-on experience with various AI techniques and technologies, helping to develop and improve AI-powered systems. Responsibilities: Assist in the development and optimization of machine learning models and algorithms. Support data preprocessing, cleaning, and analysis for AI-related projects. Collaborate with the AI team to implement and integrate AI solutions into production systems. Contribute to the design and development of AI systems, including NLP, computer vision, or other domains based on project needs. Help in writing clean, scalable, and well-documented code for AI applications. Participate in the testing and validation of AI models, and identify areas for improvement. Stay up-to-date with the latest advancements in AI and machine learning technologies. Qualifications: Currently pursuing a degree in Computer Science, Engineering, Mathematics, or a related field (preferably at the undergraduate or graduate level). Solid understanding of machine learning concepts and algorithms (e.g., supervised learning, unsupervised learning, deep learning, etc.). Familiarity with programming languages such as Python, R, or similar. Experience with machine learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn) is a plus. Strong problem-solving skills and analytical thinking. Ability to work independently as well as part of a collaborative team. Good communication skills, with the ability to present ideas and technical concepts clearly. Preferred Qualifications: Experience with cloud platforms such as AWS, GCP, or Azure. Familiarity with data wrangling and data visualization tools (e.g., Pandas, Matplotlib, Seaborn). Knowledge of advanced AI topics such as reinforcement learning, generative models, or NLP. Exposure to version control systems (e.g., Git). Benefits: Mentorship from experienced AI engineers. Hands-on experience with state-of-the-art AI technologies. Opportunity to contribute to real-world AI projects.

Posted 7 hours ago

Apply

0.0 - 1.0 years

1 - 3 Lacs

Hyderabad

Work from Office

Python Full Stack Developer Intern (Onsite, Gachibowli, Hyderabad). Location: Onsite, Vasavi Skycity, Gachibowli, Hyderabad. Timings: US Shift (Night Shift). Stipend: 10,000 to 15,000 per month. Cab Facility: Not provided. Internship Type: Full-time, onsite only. Eligibility: Passed out in 2024 or before (2025 pass-outs will not be considered). Duration: 6 months.

About the Role: We are looking for a highly skilled and self-driven Python Full Stack Developer Intern with strong hands-on coding abilities and in-depth technical understanding. This is not a training role; we need contributors who can work independently on real-time development projects.

Must-Have Requirements: Graduation year 2024 or before only. Must have developed and deployed at least one complete dynamic web application or website independently (not an academic project). Deep technical knowledge in both frontend and backend development. Excellent coding and debugging skills; must be comfortable writing production-grade code. Must be able to work independently without constant guidance. Willingness to work onsite and during US hours.

Technical Skills Required: Backend: Python (Django / Flask / FastAPI). Frontend: HTML, CSS, JavaScript, React or Angular. Database: PostgreSQL / MySQL / MongoDB. Version Control: Git & GitHub. Understanding of RESTful APIs, authentication, and security practices. Experience with deployment (Heroku, AWS, etc.) is a plus.

Nice-to-Have Skills: Knowledge of Docker and CI/CD pipelines. Familiarity with cloud services (AWS/GCP). Exposure to Agile/Scrum methodology.

What You’ll Do: Work on real-time development projects from scratch. Write clean, maintainable, and scalable code. Collaborate with remote teams during US timings. Independently handle assigned modules/features. Continuously learn and adapt to new technologies.

Note: Academic/college projects will NOT be considered. Candidates must be able to show at least one independently built dynamic web app or website (with codebase and/or live demo).

Posted 18 hours ago

Apply

9.0 - 14.0 years

11 - 21 Lacs

Pune, Chennai, Bengaluru

Work from Office

Hi, urgent opening for AWS Python Data Engineer - Manager with EY GDS at a Pan India location. Experience: 9-14 years. Location: All GDS. Mode: Hybrid. Immediate joiners with 0-30 days notice period preferred. Mandatory Skills: Machine Learning models, Flask, REST API, Pandas, NumPy, AWS/Azure, Python. Please apply if available for a virtual interview on 28th June 2025. Manager (9-14 years): https://careers.ey.com/job-invite/1600818/

Posted 1 day ago

Apply

9.0 - 14.0 years

30 - 40 Lacs

Hyderabad

Hybrid

Position Overview: We are seeking an experienced Backend Developer proficient in Python, Flask, FastAPI, and related technologies with a deep understanding of algorithm design for complex tasks. As part of our backend engineering team, you will play a key role in designing, developing, and maintaining scalable and reliable backend services for our AI coaching platform. Your expertise in microservices architecture, cloud computing, and database management will be instrumental in shaping the future of our technology stack. Responsibilities: Design, develop, and maintain RESTful APIs and backend services using Python, Flask, FastAPI, and SQLAlchemy, adhering to best practices for code quality, performance, and scalability. Implement microservices architecture, for smaller, independent services, and orchestrate communication between services using message brokers or API gateways. Implement complex algorithms and data structures to handle diverse tasks such as data processing, operation research (OR), recommendation systems, and optimization problems. Optimize backend services for performance and efficiency, identifying bottlenecks and implementing solutions to improve response times and resource utilization. Collaborate with frontend developers, data scientists, and DevOps engineers to integrate backend services with web and mobile applications, AI models, and cloud infrastructure. Implement authentication and authorization mechanisms, ensuring secure access control to backend resources and protecting sensitive data using industry-standard encryption and authentication protocols. Utilize cloud computing platforms such as Google Cloud Platform (GCP) to deploy and scale backend services, leveraging managed services like Cloud Functions, Cloud Run, and Kubernetes Engine for optimal performance and cost efficiency. Containerize backend services using Docker and orchestration tools like Kubernetes for deployment and management in containerized environments, ensuring consistency and reproducibility across development, staging, and production environments. Design and optimize database schemas using PostgreSQL or MySQL, leveraging advanced features for scalability, performance, and data integrity, and integrating data processing libraries like Pandas and NumPy for advanced analytics and machine learning tasks. Document API specifications using OpenAPI (formerly Swagger), defining endpoints, request/response schemas, and authentication requirements for internal and external consumption. Qualifications: Bachelor's or Master's degree in Computer Science, Software Engineering, or related field. Extensive experience in backend development with Python, including frameworks like Flask and FastAPI, and proficiency in database management with SQLAlchemy. Strong understanding of microservices architecture principles and experience designing, implementing, and deploying microservices-based applications. Strong understanding of algorithmic complexity, optimization techniques, and best practices for designing efficient algorithms to solve complex problems. Hands-on experience with cloud computing platforms, preferably Google Cloud Platform (GCP), and familiarity with cloud-native technologies such as serverless computing, containers, and orchestration. Proficiency in containerization and orchestration tools like Docker and Kubernetes for building and managing scalable, distributed systems. 
Solid understanding of relational database management systems (RDBMS) such as PostgreSQL or MySQL, with experience optimizing database schemas for performance and scalability. Familiarity with data processing libraries like Pandas and NumPy for advanced analytics and machine learning tasks. Experience with API documentation tools like OpenAPI/Swagger for defining and documenting RESTful APIs. Excellent problem-solving skills, attention to detail, and ability to work effectively in a collaborative, cross-functional team environment. Strong communication skills, with the ability to articulate technical concepts and collaborate with stakeholders across disciplines. Passion for sports and a desire to make a positive impact on athlete performance and well-being.

Posted 2 days ago

Apply

1.0 - 3.0 years

7 - 10 Lacs

Mumbai Suburban, Mumbai (All Areas)

Work from Office

Role & responsibilities: Conduct relevant statistical tests to validate hypotheses and findings, and perform data cleaning. Perform Exploratory Data Analysis (EDA) to identify patterns, anomalies, and opportunities. Perform data fusion activities by leveraging techniques to merge, reconcile, and analyze information from disparate systems and formats. Identify and employ modern weighting and projection methods to answer key business questions and predict future trends. Support the data visualizer with the necessary data for real-time data visualization. Collaborate with product and research teams by providing feedback based on analytical findings, and maintain daily MIS reports.

Preferred candidate profile: MSc in Statistics or a related quantitative field. Experience working with app-based data is preferred. Proficiency in Python is a must, with exposure to libraries used for numerical and text analysis such as Pandas, NumPy, PySpark, NLTK, SpaCy, Scikit-Learn, Gensim, etc. Expertise in MS Excel and dashboard creation to complement BI tools and automate reporting tasks. Strong analytical mindset to interpret complex data, identify trends, and provide actionable insights. Understanding of the business context to translate data insights into relevant recommendations and feedback for product and research teams.

Benefits: Competitive salary and benefits package. Opportunity to make significant contributions at a dynamic company. Evening snacks are provided by the company to keep you refreshed towards the end of the day. Walking distance from Chakala metro station, making commuting easy and convenient. At Axis My India, we value discipline and focus. Our team members wear uniforms, adhere to a no-mobile policy during work hours, and work from our office with alternate Saturdays off. If you thrive in a structured environment and are committed to excellence, we encourage you to apply.

Posted 2 days ago

Apply

3.0 - 6.0 years

3 - 7 Lacs

Hyderabad

Work from Office

1. Strong Python knowledge
2. Hands-on knowledge of SQL queries (any database: Postgres, MySQL, Oracle, etc.)
3. Good understanding of the automated unit testing process in the Python ecosystem
4. Ability to prepare technical design documents as part of the process

Posted 2 days ago

Apply

6.0 - 8.0 years

18 - 22 Lacs

Bangalore Rural, Chennai, Bengaluru

Work from Office

Senior Python ETL Developer/Lead: 5+ years of experience in ETL development using Python, Apache Airflow, PySpark, and Pandas; Oracle SQL and PL/SQL; UNIX and Windows environments; OOAD and SOA; data warehousing concepts, data modeling, and data integration.

Posted 2 days ago

Apply

1.0 - 3.0 years

2 - 2 Lacs

Viluppuram

Work from Office

1–3 years of experience in full-stack development using Python. Proficiency in backend, frontend, databases, prompt engineering, DevOps, and testing.

Posted 2 days ago

Apply

1.0 - 3.0 years

4 - 5 Lacs

Ahmedabad

Work from Office

About Us: Founded in 2008, Red & White is Gujarat's leading NSDC & ISO-certified institute, focused on industry-relevant education and global employability.

Role Overview: We're hiring a faculty member to teach AI, Machine Learning, and Data Science. The role includes delivering lectures, guiding projects, mentoring students, and staying updated with tech trends.

Key Responsibilities: Deliver high-quality lectures on AI, Machine Learning, and Data Science. Design and update course materials, assignments, and projects. Guide students on hands-on projects, real-world applications, and research work. Provide mentorship and support for student learning and career development. Stay updated with the latest trends and advancements in AI/ML and Data Science. Conduct assessments, evaluate student progress, and provide feedback. Participate in curriculum development and improvements.

Skills & Tools: Core Skills: ML, Deep Learning, NLP, Computer Vision, Business Intelligence, AI Model Development, Business Analysis. Programming: Python, SQL (must), Pandas, NumPy, Excel. ML & AI Tools: Scikit-learn (must), XGBoost, LightGBM, TensorFlow, PyTorch (must), Keras, Hugging Face. Data Visualization: Tableau, Power BI (must), Matplotlib, Seaborn, Plotly. NLP & CV: Transformers, BERT, GPT, OpenCV, YOLO, Detectron2. Advanced AI: Transfer Learning, Generative AI, Business Case Studies.

Education & Experience Requirements: Bachelor's/Master's/Ph.D. in Computer Science, AI, Data Science, or a related field. Minimum 1+ years of teaching or industry experience in AI/ML and Data Science. Hands-on experience with Python, SQL, TensorFlow, PyTorch, and other AI/ML tools. Practical exposure to real-world AI applications, model deployment, and business analytics.

For further information, please feel free to contact us at 7862813693 or via email at career@rnwmultimedia.edu.in.

Posted 2 days ago

Apply

2.0 - 4.0 years

5 - 12 Lacs

Bengaluru

Hybrid

Primary Function: Hands-on experience with the Python scripting language. A deep understanding of data structures is a must. Basic Java knowledge is required. Strong analytical and troubleshooting skills. Good communication skills and team-player qualities. Passionate and self-motivated. Willingness to learn and improve rapidly.

Good to have: Familiarity with server-side templating languages, including Jinja 2 and Mako. Good with testing tools and familiarity with TDD (Test-Driven Development). Knowledge of any framework like Selenium / Requestium / Django / Flask is a plus. Knowledge of AWS, Docker, GitLab, NumPy, Pandas, MySQL, MongoDB, or ElasticSearch is a plus. Domain experience in Fintech is preferred, but not mandatory. Familiarity with Go and MongoDB.

Qualifications & Competency: Bachelor's degree in computer science, computer engineering, or a related field. 2.5 to 4 years of experience as a Python developer. Reporting to Technical Lead.

Posted 2 days ago

Apply

5.0 - 10.0 years

30 - 40 Lacs

Gurugram

Work from Office

Python Developer. Location: Gurgaon. Time Zone: Work from Office. Duration: Full-time. Requirements: 5-7 years of backend development experience. Refactoring legacy applications. Strong ANSI SQL and DB interaction. Experience with Git, CI/CD, and Agile methodologies. Python 3.x, Pandas, SQLAlchemy, PyODBC. Data validation, integration, and test automation scripting.

Posted 2 days ago

Apply

2.0 - 3.0 years

10 - 18 Lacs

Chennai

Work from Office

Roles and Responsibilities: Collect and curate data based on specific project requirements. Perform data cleaning, preprocessing, and transformation for model readiness. Select and implement appropriate data models for various applications. Continuously improve model accuracy through iterative learning and feedback loops. Fine-tune large language models (LLMs) for applications such as code generation and data handling. Apply geometric deep learning techniques using PyTorch or TensorFlow. Essential Requirements: Strong proficiency in Python, with experience in writing efficient and clean code. Ability to process and transform natural language data for NLP applications. Solid understanding of modern NLP techniques such as Transformers, Word2Vec, BERT, etc. Strong foundation in mathematics and statistics relevant to machine learning and deep learning. Hands-on experience with Python libraries including NumPy, Pandas, SciPy, Scikit-learn, NLTK, etc. Experience in various data visualization techniques using Python or other tools. Working knowledge of DBMS and fundamental data structures. Familiarity with a variety of ML and optimization algorithms.

Posted 2 days ago

Apply

4.0 - 8.0 years

12 - 22 Lacs

Jaipur

Remote

Hi folks, I hope you are all doing well! We are hiring for one of the leading IT companies worldwide for a Sr. Data/Python Engineer role, where we are looking for a person with expertise in Python, pandas/Streamlit, SQL, and PySpark.

Job Description:

Job Title: Sr. Data/Python Engineer. Location: Pan India - Remote. Job Type: Full Time.

Job Summary: We are seeking a skilled and collaborative Sr. Data/Python Engineer with experience in the development of production Python-based applications (such as Pandas, NumPy, Django, Flask, FastAPI on AWS) to support our data platform initiatives and application development. This role will initially focus on building and optimizing Streamlit application development frameworks and CI/CD pipelines, ensuring code reliability through automated testing with Pytest, and enabling team members to deliver updates via CI/CD pipelines. Once the deployment framework is implemented, the Sr. Engineer will own and drive data transformation pipelines in dbt and implement a data quality framework.

Key Responsibilities: Lead application testing and productionalization of applications built on top of Snowflake. This includes implementation and execution of unit testing and integration testing; automated test suites include use of Pytest and Streamlit App Tests to ensure code quality, data accuracy, and system reliability. Development and integration of CI/CD pipelines (e.g., GitHub Actions, Azure DevOps, or GitLab CI) for consistent deployments across dev, staging, and production environments. Development and testing of AWS-based pipelines: AWS Glue, Airflow (MWAA), S3. Design, develop, and optimize data models and transformation pipelines in Snowflake using SQL and Python. Build Streamlit-based applications to enable internal stakeholders to explore and interact with data and models. Collaborate with team members and application developers to align requirements and ensure secure, scalable solutions. Monitor data pipelines and application performance, optimizing for speed, cost, and user experience. Create end-user technical documentation and contribute to knowledge sharing across engineering and analytics teams. Work in CST hours and collaborate with onshore and offshore teams.

Required Skills and Experience: 4+ years of experience in Data Engineering or Python-based application development on AWS (Pandas, Flask, Django, FastAPI, Streamlit); experience building data-intensive applications in Python as well as data pipelines on AWS is a must. Strong in Python and Pandas. Proficient in SQL and Python for data manipulation and automation tasks. Experience developing and productionalizing applications built on Python-based frameworks such as FastAPI, Django, or Flask (strong Python, Pandas, Flask, Django, FastAPI, or Streamlit experience). Experience with application frameworks such as Streamlit, Angular, React, etc. for rapid data app deployment. Solid understanding of software testing principles and experience using Pytest or similar Python frameworks. Experience configuring and maintaining CI/CD pipelines for automated testing and deployment. Familiarity with version control systems such as GitLab. Knowledge of data governance, security best practices, and role-based access control (RBAC) in Snowflake.

Preferred Qualifications: Experience with dbt (data build tool) for transformation modeling. Knowledge of Snowflake's advanced features (e.g., masking policies, external functions, Snowpark). Exposure to cloud platforms (e.g., AWS, Azure, GCP). Strong communication and documentation skills.

Interested candidates can share their resume at sweta@talentvidas.com.

Posted 3 days ago

Apply

4.0 - 8.0 years

5 - 11 Lacs

Hyderabad

Hybrid

We are seeking a skilled and detail-oriented Python + SQL Consultant to join our team. The ideal candidate will have strong expertise in data manipulation, analysis, and automation using Python and SQL. You will work closely with data engineers, analysts, and business stakeholders to deliver high-quality data solutions and insights. Key Responsibilities: Design, develop, and maintain data pipelines using Python and SQL. Write efficient, optimized SQL queries for data extraction, transformation, and reporting. Automate data workflows and integrate APIs or third-party services. Collaborate with cross-functional teams to understand data requirements and deliver actionable insights. Perform data validation, cleansing, and quality checks. Develop dashboards or reports using BI tools (optional, if applicable). Document processes, code, and data models for future reference. Required Skills & Qualifications: Strong proficiency in Python (Pandas, NumPy, etc.). Advanced knowledge of SQL (joins, subqueries, CTEs, window functions). Experience with relational databases (e.g., PostgreSQL, MySQL, SQL Server). Familiarity with version control systems like Git. Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. Preferred Qualifications: Experience with cloud platforms (AWS, Azure, GCP). Familiarity with data visualization tools (e.g., Power BI, Tableau). Knowledge of ETL tools or frameworks (e.g., Airflow, dbt). Background in data warehousing or big data technologies. Education: Bachelors or Master’s degree in Computer Science, Information Systems, Engineering, or a related field.

Posted 3 days ago

Apply

Exploring Pandas Jobs in India

The job market for pandas professionals in India is on the rise as more companies are recognizing the importance of data analysis and manipulation in making informed business decisions. Pandas, a popular Python library for data manipulation and analysis, is a valuable skill sought after by many organizations across various industries in India.

Top Hiring Locations in India

Here are five major cities in India actively hiring for pandas roles:
1. Bangalore
2. Mumbai
3. Delhi
4. Hyderabad
5. Pune

Average Salary Range

The average salary range for pandas professionals in India varies based on experience levels. Entry-level positions can expect a salary ranging from ₹4-6 lakhs per annum, while experienced professionals can earn upwards of ₹12-18 lakhs per annum.

Career Path

Career progression in the pandas domain typically involves moving from roles such as Junior Data Analyst or Data Scientist to Senior Data Analyst, Data Scientist, and eventually to roles like Tech Lead or Data Science Manager.

Related Skills

In addition to pandas, professionals in this field are often expected to have knowledge or experience in the following areas:
- Python programming
- Data visualization tools like Matplotlib or Seaborn
- Statistical analysis
- Machine learning algorithms
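As a quick illustration of how these skills fit together, here is a minimal sketch using synthetic data (all values are invented for the example): pandas for manipulation, a basic statistical summary, and a Seaborn/Matplotlib chart.

```python
# Illustrative sketch with synthetic, hypothetical data only.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Hypothetical dataset: experience (years) vs. salary (lakhs per annum)
df = pd.DataFrame({
    "experience_years": np.random.randint(0, 15, 200),
    "salary_lakhs": np.random.uniform(4, 40, 200),
})

# Basic statistical analysis with pandas
print(df["salary_lakhs"].describe())   # summary statistics
print(df.corr())                       # correlation between the two columns

# Simple visualization with Seaborn/Matplotlib
sns.scatterplot(data=df, x="experience_years", y="salary_lakhs")
plt.title("Salary vs. experience (synthetic data)")
plt.show()
```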

Interview Questions

Here are 25 interview questions for pandas roles:
- What is pandas in Python? (basic)
- Explain the difference between Series and DataFrame in pandas. (basic)
- How do you handle missing data in pandas? (basic)
- What are the different ways to create a DataFrame in pandas? (medium)
- Explain groupby() in pandas with an example. (medium)
- What is the purpose of pivot_table() in pandas? (medium)
- How do you merge two DataFrames in pandas? (medium)
- What is the significance of the inplace parameter in pandas functions? (medium)
- What are the advantages of using pandas over Excel for data analysis? (advanced)
- Explain the apply() function in pandas with an example. (advanced)
- How do you optimize performance in pandas operations for large datasets? (advanced)
- What is method chaining in pandas? (advanced)
- Explain the working of the cut() function in pandas. (medium)
- How do you handle duplicate values in a DataFrame using pandas? (medium)
- What is the purpose of the nunique() function in pandas? (medium)
- How can you handle time series data in pandas? (advanced)
- Explain the concept of multi-indexing in pandas. (advanced)
- How do you filter rows in a DataFrame based on a condition in pandas? (medium)
- What is the role of the read_csv() function in pandas? (basic)
- How can you export a DataFrame to a CSV file using pandas? (basic)
- What is the purpose of the describe() function in pandas? (basic)
- How do you handle categorical data in pandas? (medium)
- Explain the role of the loc and iloc functions in pandas. (medium)
- How do you perform text data analysis using pandas? (advanced)
- What is the significance of the to_datetime() function in pandas? (medium)
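To help with preparation, here is a small, self-contained sketch (the toy DataFrames are invented purely for illustration) touching on several of the questions above: handling missing data, groupby(), merging two DataFrames, loc vs. iloc, and filtering rows on a condition.

```python
# Toy examples for a few common pandas interview topics (illustrative data only).
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "city": ["Mumbai", "Pune", "Mumbai", "Delhi"],
    "salary": [6.0, np.nan, 12.0, 9.0],
})

# Handling missing data: fill NaNs with the column mean, or drop rows with NaNs
filled = df.fillna({"salary": df["salary"].mean()})
dropped = df.dropna(subset=["salary"])

# groupby(): average salary per city
per_city = df.groupby("city")["salary"].mean()

# Merging two DataFrames on a shared key
openings = pd.DataFrame({"city": ["Mumbai", "Pune"], "openings": [120, 80]})
merged = df.merge(openings, on="city", how="left")

# loc vs. iloc: label-based vs. position-based selection
first_city_by_label = df.loc[0, "city"]    # row with index label 0
first_city_by_position = df.iloc[0, 0]     # first row, first column

# Filtering rows on a condition
high_paid = df[df["salary"] > 8]

print(per_city, merged, high_paid, sep="\n\n")
```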

Prepare and Apply Confidently

As you explore pandas jobs in India, remember to enhance your skills, stay updated with industry trends, and practice answering interview questions to increase your chances of securing a rewarding career in data analysis. Best of luck on your job search journey!
