
1603 Pandas Jobs - Page 3

JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

3.0 - 7.0 years

0 Lacs

Maharashtra

On-site

The role of warehousing and logistics systems is becoming increasingly crucial in enhancing the competitiveness of various companies and contributing to the overall efficiency of the global economy. Modern intra-logistics solutions integrate cutting-edge mechatronics, sophisticated software, advanced robotics, computational perception, and AI algorithms to ensure high throughput and streamlined processing for critical commercial logistics functions. Our Warehouse Execution Software is designed to optimize intralogistics and warehouse automation by utilizing advanced optimization techniques. By synchronizing discrete logistics processes, we have created a real-time decision engine that maximizes labor and equipment efficiency. Our software empowers customers with the operational agility essential for meeting the demands of an omni-channel environment.

We are seeking a dynamic individual who can develop state-of-the-art MLOps and DevOps frameworks for AI model deployment. The ideal candidate should possess expertise in cloud technologies, deployment architectures, and software production standards. Moreover, effective collaboration within interdisciplinary teams is key to successfully guiding products through the development cycle.

**Core Job Responsibilities:**

- Develop comprehensive pipelines covering the ML lifecycle from data ingestion to model evaluation.
- Collaborate with AI scientists to expedite the operationalization of ML algorithms.
- Establish CI/CD/CT pipelines for ML algorithms.
- Implement model deployment both in cloud and on-premises edge environments.
- Lead a team of DevOps/MLOps engineers.
- Stay updated on new tools, technologies, and industry best practices.

**Key Qualifications:**

- Master's degree in Computer Science, Software Engineering, or a related field.
- Proficiency in cloud platforms, particularly GCP, and relevant skills such as Docker, Kubernetes, and edge computing.
- Familiarity with task orchestration tools such as MLflow, Kubeflow, Airflow, Vertex AI, and Azure ML.
- Strong programming skills, preferably in Python.
- Robust DevOps expertise including Linux/Unix, testing, automation, Git, and build tools.
- Knowledge of data engineering tools like Beam, Spark, Pandas, SQL, and GCP Dataflow is advantageous.
- Minimum 5 years of experience in relevant fields, including academic exposure.
- At least 3 years of experience in managing a DevOps/MLOps team.
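The first responsibility above, a pipeline covering the ML lifecycle from ingestion to evaluation, can be sketched in miniature with scikit-learn. The dataset and model below are invented for illustration only; a production version would add the orchestration (Airflow, Kubeflow) and CI/CD/CT steps the posting lists.

```python
# Minimal sketch of an ingestion-to-evaluation training pipeline.
# Synthetic data stands in for a real feed; illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# "Ingestion": fabricate a small labelled dataset.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Preprocessing and model are chained into one artifact, so the same
# object can be versioned and promoted through a CI/CD/CT pipeline.
pipe = Pipeline([("scale", StandardScaler()),
                 ("clf", LogisticRegression())])
pipe.fit(X_train, y_train)
accuracy = pipe.score(X_test, y_test)   # "evaluation" stage
print(f"held-out accuracy: {accuracy:.2f}")
```

In practice each stage (ingest, fit, evaluate) would be a separate orchestrated task writing its artifact to a registry rather than one script.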

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

As a Data Scientist at Setu, you will play a crucial role in shaping the future of fintech by leveraging data science to understand the diverse fintech landscape in India. You will have the opportunity to explore, experiment, and develop critical and high-value use cases that contribute to the innovation and growth of multiple tech-enabled businesses. By joining our team, you will not only be keeping up with innovation but also setting the pace by introducing groundbreaking methods that enhance customer experiences and streamline operational processes.

Your responsibilities will include demonstrating dedication and high ownership towards Setu's mission and vision, consistently creating value for customers and stakeholders, ensuring excellence in all aspects of work, efficiently managing time and tasks, and adapting quickly to new roles and responsibilities.

To excel in this role, you should have at least 2 years of experience in Data Science/Machine Learning, strong knowledge of statistics and various machine learning techniques, proficiency in Python programming, experience in building data pipelines, and proficiency in tools like pandas, scikit-learn, SQL, and TensorFlow/PyTorch.

At Setu, we are committed to empowering you to do impactful work by providing opportunities to collaborate closely with the founding team, access to learning and development resources, comprehensive health insurance, mental health support, and a diverse and inclusive work environment. Our core culture code, "How We Move", emphasizes behaviors such as quick decision-making, mastery of skills, leadership, taking ownership, empowering others, and innovation for the customer and beyond.

Join us at Setu if you are passionate about making a tangible difference in the fintech landscape and contributing to financial inclusion by embracing an audacious mission and a commitment to craftsmanship in code.

Posted 1 week ago

Apply

1.0 - 7.0 years

0 Lacs

Maharashtra

On-site

We are seeking an experienced AI Data Analyst with over 7 years of professional experience, showcasing leadership in tech projects. The ideal candidate will possess strong proficiency in Python, Machine Learning, AI APIs, and Large Language Models (LLMs). You will have the opportunity to work on cutting-edge AI solutions, including vector-based search and data-driven business insights.

Your experience should include:

- At least 2 years of hands-on experience as a Data Analyst.
- Practical experience of at least 1 year with AI systems such as LLMs, AI APIs, or vector-based search.
- 2+ years of experience working with Machine Learning models and solutions.
- A strong background of 5+ years in Python programming.
- Exposure to vector databases like pgvector and ChromaDB is considered a plus.

Key Responsibilities:

- Conduct data exploration, profiling, and cleaning on large datasets.
- Design, implement, and evaluate machine learning and AI models to address business problems.
- Utilize LLM APIs, foundation models, and vector databases to support AI-driven analysis.
- Construct end-to-end ML workflows, from data preprocessing to deployment.
- Develop visualizations and dashboards for internal reports and presentations.
- Analyze and interpret model outputs, providing actionable insights to stakeholders.
- Collaborate with engineering and product teams to implement AI solutions across business processes.

Required Skills:

Data Analysis:
- At least 1 year of hands-on work with real-world datasets.
- Proficiency in Exploratory Data Analysis (EDA), data wrangling, and visualization using tools like Pandas, Seaborn, or Plotly.

Machine Learning & AI:
- At least 2 years applying machine learning techniques (classification, regression, clustering, etc.).
- Hands-on experience with AI technologies such as Generative AI, LLMs, AI APIs (e.g., OpenAI, Hugging Face), and vector-based search systems.
- Knowledge of model evaluation, hyperparameter tuning, and model selection.
- Exposure to AI-driven analysis, including RAG (Retrieval-Augmented Generation) and other AI solution architectures.

Programming:
- At least 3 years of Python programming, with expertise in libraries like scikit-learn, NumPy, Pandas, etc.
- Strong understanding of data structures and algorithms relevant to AI and ML.

Tools & Technologies:
- Proficiency in SQL/PostgreSQL.
- Familiarity with vector databases like pgvector and ChromaDB.
- Exposure to LLMs, foundation models, RAG systems, and embedding techniques.
- Familiarity with cloud platforms such as AWS, SageMaker, or similar.
- Knowledge of version control systems (e.g., Git), REST APIs, and Linux.

Good to Have:
- Experience with tools like Scrapy, SpaCy, or OpenCV.
- Knowledge of MLOps, model deployment, and CI/CD pipelines.
- Familiarity with deep learning frameworks like PyTorch or TensorFlow.

Soft Skills:
- A strong problem-solving mindset and analytical thinking.
- Excellent communication skills, with the ability to convey technical information clearly to non-technical stakeholders.
- Collaborative, proactive, and self-driven in a fast-paced, dynamic environment.

If you meet the above requirements and are eager to contribute to a dynamic team, share your resume with kajal.uklekar@arrkgroup.com. We look forward to welcoming you to our team in Mahape, Navi Mumbai for a hybrid work arrangement. Immediate joiners are preferred.
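The vector-based search this role keeps mentioning boils down to nearest-neighbour lookup over embeddings. A minimal NumPy sketch, with made-up two-dimensional "embeddings" standing in for what a store like pgvector or ChromaDB would hold:

```python
# Cosine-similarity top-k retrieval over toy embedding vectors.
# The vectors are invented for illustration; real embeddings would
# come from an embedding model and live in a vector database.
import numpy as np

def top_k(query: np.ndarray, corpus: np.ndarray, k: int = 2) -> np.ndarray:
    """Return indices of the k corpus vectors most similar to query."""
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    sims = c @ q                       # cosine similarity per row
    return np.argsort(-sims)[:k]       # highest similarity first

corpus = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
query = np.array([1.0, 0.05])
hits = top_k(query, corpus)
print(hits)
```

A RAG system wraps exactly this step: embed the query, retrieve the top-k neighbours, and feed the retrieved text to the LLM.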

Posted 1 week ago

Apply

0.0 - 4.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Data Scientist Intern at Evoastra Ventures Pvt. Ltd., you will have the opportunity to work with real-world datasets and gain valuable industry exposure to accelerate your entry into the data science domain. Evoastra Ventures is a research-first data and AI solutions company that focuses on delivering value through predictive analytics, market intelligence, and technology consulting. Our goal is to empower businesses by transforming raw data into strategic decisions.

In this role, you will be responsible for performing data cleaning, preprocessing, and transformation, as well as conducting exploratory data analysis (EDA) to identify trends. You will also assist in the development and evaluation of machine learning models and contribute to reports and visual dashboards summarizing key insights. Additionally, you will document workflows, collaborate with team members on project deliverables, and participate in regular project check-ins and mentorship discussions.

To excel in this role, you should have basic knowledge of Python, statistics, and machine learning concepts, along with good analytical and problem-solving skills. You should also be willing to learn and adapt in a remote, team-based environment, possess strong communication and time-management skills, and have access to a laptop with a stable internet connection.

Throughout the internship, you will gain a Verified Internship Certificate, a Letter of Recommendation based on your performance, real-time mentorship from professionals in data science and analytics, project-based learning opportunities with portfolio-ready outputs, and priority consideration for future paid internships or full-time roles at Evoastra. You will also be recognized in our internship alumni community.

If you meet the eligibility criteria and are eager to build your career foundation with hands-on data science projects that make an impact, we encourage you to submit your resume via our internship application form at www.evoastra.in. Selected candidates will receive an onboarding email with further steps. Please note that this internship is fully remote and unpaid.

Posted 1 week ago

Apply

2.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Senior Data Scientist with our fast-growing team, you should possess a total of 7-8 years of experience, with a specific focus of 3-5 years on Machine Learning and Deep Learning. Your expertise should include working with Convolutional Neural Networks (CNNs), image analytics, TensorFlow, and OpenCV, among others.

Your primary responsibilities will revolve around designing and developing highly scalable machine learning solutions that have a significant impact on various aspects of our business. You will play a crucial role in creating neural network solutions, particularly Convolutional Neural Networks, and ML solutions based on our architecture supported by big data, cloud technology, micro-service architecture, and high-performing compute infrastructure. Your daily tasks will involve contributing to all stages of algorithm development, from ideation to design, prototyping, and production implementation.

To excel in this role, you should have a solid foundation in software engineering and data science, along with a deep understanding of machine learning algorithms, statistical analysis tools, and distributed systems. Experience in developing machine learning applications, familiarity with various machine learning APIs, tools, and open source libraries, as well as proficiency in coding, data structures, predictive modeling, and big data concepts are essential. Additionally, expertise in designing full-stack ML solutions in a distributed compute environment is crucial. Proficiency in Python, TensorFlow, Keras, scikit-learn, pandas, NumPy, Azure, and AWS GPU is required. Strong communication skills to effectively collaborate with various levels of the organization are also necessary.

If you are a Junior Data Scientist looking to join our team, you should have 2-4 years of experience and hands-on experience in Deep Learning, Computer Vision, Image Processing, and related skills. We are seeking self-motivated individuals who are eager to tackle challenges in the realm of AI predictive image analytics and machine learning.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Data Visualization Engineer at Zoetis, Inc., you will be an integral part of the pharmaceutical R&D team, contributing to the development and implementation of cutting-edge visualizations that drive decision-making in drug discovery, development, and clinical research. You will collaborate closely with scientists, analysts, and other stakeholders to transform complex datasets into impactful visual narratives that provide key insights and support strategic initiatives.

Your responsibilities will include designing and developing a variety of visualizations, ranging from interactive dashboards to static reports, to summarize key insights from high-throughput screening, clinical trial data, and other R&D datasets. You will implement visual representations for pathway analysis, pharmacokinetics, omics data, and time-series trends, utilizing advanced visualization techniques and tools to create compelling visuals tailored to technical and non-technical audiences.

Collaboration with cross-functional teams will be a key aspect of your role, as you partner with data scientists, bioinformaticians, pharmacologists, and clinical researchers to identify visualization needs and translate scientific data into actionable insights. Additionally, you will be responsible for maintaining and optimizing visualization tools, building reusable components, and evaluating emerging technologies to support large-scale data analysis.

Staying updated on the latest trends in visualization technology and methods relevant to pharmaceutical research will be essential, as you apply advanced techniques such as 3D molecular visualization, network graphs, and predictive modeling visuals. You will also collaborate across the full spectrum of R&D functions, aligning technology solutions with the diverse needs of scientific disciplines and development pipelines.

In terms of qualifications, you should possess a Bachelor's or Master's degree in Computer Science, Data Science, Bioinformatics, or a related field. Experience in the pharmaceutical or biotech sectors is considered a strong advantage. Proficiency in visualization tools such as Tableau and Power BI, and in programming languages like Python, R, or JavaScript, is required. Familiarity with data handling tools, omics and network tools, as well as dashboarding and 3D visualization tools, will also be beneficial. Soft skills such as strong storytelling ability, effective communication, collaboration with interdisciplinary teams, and analytical thinking are crucial for success in this role.

Travel requirements for this position are minimal, ranging from 0-10%. Join us at the Zoetis India Capability Center (ZICC) in Hyderabad, and be part of our journey to pioneer innovation and drive the future of animal healthcare.

Posted 1 week ago

Apply

3.0 - 10.0 years

0 Lacs

Telangana

On-site

The U.S. Pharmacopeial Convention (USP) is an independent scientific organization collaborating with top health and science authorities to develop quality standards for medicines, dietary supplements, and food ingredients. With over 1,300 professionals across twenty global locations, USP's core value of Passion for Quality drives its mission to strengthen the supply of safe, quality medicines and supplements worldwide. USP values inclusivity and fosters opportunities for mentorship and professional growth. Emphasizing Diversity, Equity, Inclusion, and Belonging, USP aims to build a world where quality in health and healthcare is assured.

The Digital & Innovation group at USP is seeking a Data Scientist proficient in advanced analytics, data visualization, and machine learning to work on innovative projects and deliver digital solutions. The ideal candidate will leverage data insights to create a unified experience across USP's ecosystem. In this role, you will contribute to USP's public health mission by increasing access to high-quality, safe medicine and improving global health through public standards and programs. You will collaborate with data scientists, engineers, and IT teams to ensure project success, apply ML techniques for business impact, and communicate results effectively to diverse audiences.

**Requirements:**

**Education:** Bachelor's degree in a relevant field (e.g., Engineering, Analytics, Data Science, Computer Science, Statistics) or equivalent experience.

**Experience:**

- Data Scientist: 3-6 years of hands-on experience in data science, machine learning, statistics, and natural language processing.
- Senior Data Scientist: 6-10 years of hands-on experience in data science and advanced analytics.
- Proficiency in Python packages and visualization tools, SQL, and CNN/RNN models.
- Experience with data extraction, XML documents, and the DOM model.

**Additional Preferences:**

- Master's degree in a relevant field.
- Experience in scientific chemistry or life sciences.
- Familiarity with pharmaceutical datasets and nomenclature.
- Ability to translate stakeholder needs into technical outputs.
- Strong communication skills and the ability to explain technical issues to non-technical audiences.

**Supervisory Responsibilities:** Non-supervisory position.

**Benefits:** USP offers comprehensive benefits for personal and financial well-being, including healthcare options and retirement savings. USP does not accept unsolicited resumes from third-party recruitment agencies.

Job Type: Full-Time.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Senior Data Scientist I at Dotdash Meredith, you will collaborate with the business team to understand problems, objectives, and desired outcomes. Your primary responsibility will be to work with cross-functional teams to assess data science use cases and solutions, lead and execute end-to-end data science projects, and collaborate with stakeholders to ensure alignment of data solutions with business goals.

You will be expected to build custom data models with an initial focus on content classification, utilize advanced machine learning techniques to improve model accuracy and performance, and build the visualizations business teams need to interpret data models. Additionally, you will work closely with the engineering team to integrate models into production systems, monitor model performance in production, and make improvements as necessary.

To excel in this role, you must possess a Master's degree (or equivalent experience) in Data Science, Mathematics, Statistics, or a related field with 3+ years of experience in ML/Data Science/Predictive Analytics. Strong programming skills in Python and experience with standard data science tools and libraries are essential. Experience with, or an understanding of, deploying machine learning models in production on at least one cloud platform is required, and hands-on experience with LLM APIs and the ability to craft effective prompts are preferred. It would be beneficial to have experience in the media domain, familiarity with vector databases like Milvus, and e-commerce or taxonomy classification experience.

In this role, you will have the opportunity to learn about building ML models using industry-standard frameworks, solving data science problems for the media industry, and the use of Gen AI in media. This position is based in Eco World, Bengaluru, with shift timings from 1 p.m. to 10 p.m. IST. If you are a bright, engaged, creative, and fun individual with a passion for data science, we invite you to join our inspiring team at Dotdash Meredith India Services Pvt. Ltd.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Pune, Maharashtra

On-site

You should have 2-4 years of experience and be able to join immediately or within 30 days. As a Python Developer, you will play a key role in developing and maintaining efficient server-side applications. You will need to optimize code for performance and scalability, collaborate with front-end developers for seamless integration, and work with Pandas and NumPy for data processing. Additionally, deploying applications and ensuring performance monitoring will be part of your responsibilities.

Your proficiency in Python and related frameworks like Django, Flask, or FastAPI is crucial for this role. You should also have knowledge of ORM libraries and database systems, along with familiarity with JavaScript, HTML, and CSS. Strong debugging and problem-solving skills are essential for success in this position. Experience with automation and CI/CD pipelines would be a plus.

If you are passionate about Python development and enjoy working in a dynamic team, we would like to hear from you.
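As a rough illustration of the Pandas/NumPy data-processing work this role describes (the order data below is invented for the example):

```python
# Vectorised clean-up and aggregation with pandas/NumPy.
# The "orders" frame is made up for illustration.
import numpy as np
import pandas as pd

orders = pd.DataFrame({
    "city": ["Pune", "Pune", "Mumbai", "Mumbai"],
    "amount": [120.0, 80.0, 200.0, np.nan],
})

# Fill the missing amount with the column mean instead of looping.
orders["amount"] = orders["amount"].fillna(orders["amount"].mean())

# Aggregate per city -- the kind of summary a server-side API might return.
summary = orders.groupby("city")["amount"].sum()
print(summary.to_dict())
```

The same pattern (vectorised fill, then `groupby`) scales to much larger frames without changing the code.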

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Salem, Tamil Nadu

On-site

This is a key position that will play a pivotal role in creating data-driven technology solutions to establish our client as a leader in healthcare, financial, and clinical administration. As the Lead Data Scientist, you will be instrumental in building and implementing machine learning models and predictive analytics solutions that will spearhead the new era of AI-driven innovation in the healthcare industry. Your responsibilities will involve developing and implementing a variety of ML/AI products, from conceptualization to production, to help the organization gain a competitive edge in the market.

Working closely with the Director of Data Science, you will operate at the crossroads of healthcare, finance, and cutting-edge data science to tackle some of the most intricate challenges faced by the industry. This role presents a unique opportunity within VHT's Product Transformation division to create pioneering machine learning capabilities from scratch. You will have the chance to shape the future of VHT's data science & analytics foundation, utilizing state-of-the-art tools and methodologies within a collaborative and innovation-focused environment.

Key Responsibilities:

- Lead the development of predictive machine learning models for Revenue Cycle Management analytics, focusing on areas such as:
  - Claim Denials Prediction: identifying high-risk claims before submission
  - Cash Flow Forecasting: predicting revenue timing and patterns
  - Patient-Related Models: enhancing patient financial experience and outcomes
  - Claim Processing Time Prediction: optimizing workflow and resource allocation
- Explore emerging areas and integration opportunities, e.g., denial prediction + appeal success probability, or prior authorization prediction + approval likelihood models.

VHT Technical Environment:

- Cloud Platform: AWS (SageMaker, S3, Redshift, EC2)
- Development Tools: Jupyter Notebooks, Git, Docker
- Programming: Python, SQL, R (optional)
- ML/AI Stack: Scikit-learn, TensorFlow/PyTorch, MLflow, Airflow
- Data Processing: Spark, Pandas, NumPy
- Visualization: Matplotlib, Seaborn, Plotly, Tableau

Required Qualifications:

- Advanced degree in Data Science, Statistics, Computer Science, Mathematics, or a related quantitative field
- 5+ years of hands-on data science experience with a proven track record of deploying ML models to production
- Expert-level proficiency in SQL and Python, with extensive experience using standard Python machine learning libraries (scikit-learn, pandas, numpy, matplotlib, seaborn, etc.)
- Cloud platform experience, preferably AWS, with hands-on knowledge of SageMaker, S3, Redshift, and Jupyter Notebook workbenches (other cloud environments acceptable)
- Strong statistical modeling and machine learning expertise across supervised and unsupervised learning techniques
- Experience with model deployment, monitoring, and MLOps practices
- Excellent communication skills with the ability to translate complex technical concepts to non-technical stakeholders

Preferred Qualifications:

- US Healthcare industry experience, particularly in Health Insurance and/or Medical Revenue Cycle Management
- Experience with healthcare data standards (HL7, FHIR, X12 EDI)
- Knowledge of healthcare regulations (HIPAA, compliance requirements)
- Experience with deep learning frameworks (TensorFlow, PyTorch)
- Familiarity with real-time streaming data processing
- Previous leadership or mentoring experience
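In spirit, the claim-denial prediction listed above amounts to a classifier whose predicted probability flags high-risk claims before submission. A hedged sketch follows; the features, labelling rule, and 0.5 threshold are all invented for illustration and are not VHT's actual model.

```python
# Toy claim-denial risk model: train on synthetic claims, then score
# a new claim and flag it if the predicted denial probability is high.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 400
# Invented claim features: [claim_amount, days_to_file, prior_denials]
X = np.column_stack([
    rng.uniform(100, 5000, n),
    rng.integers(1, 90, n),
    rng.integers(0, 5, n),
])
# Synthetic rule: late filings with repeat denials tend to be denied.
y = ((X[:, 1] > 60) & (X[:, 2] >= 2)).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

new_claim = np.array([[2500.0, 75, 3]])   # late, repeat-denial history
risk = model.predict_proba(new_claim)[0, 1]
print(f"denial risk: {risk:.2f}")
needs_review = risk > 0.5                 # illustrative threshold
```

A production version would train on historical adjudication outcomes and calibrate the threshold against the cost of reworking a claim versus absorbing a denial.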

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Surat, Gujarat

On-site

As an experienced professional with over 2 years of experience, you will be responsible for designing, developing, and deploying Machine Learning / Artificial Intelligence models to address real-world challenges within our software products. Your role will involve collaborating with product managers, developers, and data engineers to establish AI project objectives and specifications. Additionally, you will be tasked with cleaning, processing, and analyzing extensive datasets to uncover valuable patterns and insights.

Your expertise will be utilized in implementing and refining models using frameworks like TensorFlow, PyTorch, or scikit-learn. You will also play a crucial role in creating APIs and services to seamlessly integrate AI models into production environments. Monitoring model performance and conducting retraining as necessary to uphold accuracy and efficiency will be part of your regular duties. It is essential to keep abreast of the latest developments in AI/ML and assess their relevance to our projects.

To excel in this role, you should hold a Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related discipline. A strong grasp of machine learning algorithms, including supervised, unsupervised, and reinforcement learning, is imperative. Proficiency in Python and ML libraries such as NumPy, pandas, TensorFlow, Keras, and PyTorch is required. Familiarity with NLP, computer vision, or time-series analysis will be advantageous. Experience with model deployment tools and cloud platforms like AWS, GCP, or Azure is preferred. Knowledge of software engineering practices, encompassing version control (Git), testing, and CI/CD, is also essential.

Candidates with prior experience in a product-based or tech-driven startup environment, exposure to deep learning, recommendation systems, or predictive analytics, and an understanding of ethical AI practices and model interpretability will be highly regarded.
This is a full-time position requiring you to work during day shifts at our in-person work location.
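The "integrate AI models into production" step this posting describes usually starts with serialising a trained model so a serving process can load it. A minimal sketch under stated assumptions (tiny invented dataset; `pickle` standing in for a real model registry or cloud endpoint):

```python
# Train, serialise, reload, predict: the core of handing a model from
# a training job to a serving API. Data and model are illustrative.
import pickle

import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
model = LogisticRegression().fit(X, y)

blob = pickle.dumps(model)      # what an artifact store would hold
served = pickle.loads(blob)     # what the API process would load

pred = served.predict([[2.5]])[0]
print(pred)
```

In a real service the reload happens once at startup, and the monitoring/retraining loop the posting mentions replaces `blob` with a versioned artifact.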

Posted 1 week ago

Apply

9.0 - 13.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

We are seeking a Lead - Python Developer / Tech Lead to take charge of backend development and oversee a team handling enterprise-grade, data-driven applications. In this role, you will have the opportunity to work with cutting-edge technologies such as FastAPI, Apache Spark, and Lakehouse architectures. Your responsibilities will include leading the team, making technical decisions, and ensuring timely project delivery in a dynamic work environment.

Your primary duties will involve mentoring and guiding a group of Python developers, managing task assignments, maintaining code quality, and overseeing technical delivery. You will be responsible for designing and implementing scalable RESTful APIs using Python and FastAPI, as well as managing extensive data processing tasks using Pandas, NumPy, and Apache Spark. Additionally, you will drive the implementation of Lakehouse architectures and data pipelines, conduct code reviews, enforce coding best practices, and promote clean, testable code. Collaboration with cross-functional teams, including DevOps and Data Engineering, will be essential. Furthermore, you will be expected to contribute to CI/CD processes, operate in Linux-based environments, and potentially work with Kubernetes or MLOps tools.

To excel in this role, you should possess 9-12 years of total experience in software development, with a strong command of Python, FastAPI, and contemporary backend frameworks. A profound understanding of data engineering workflows, Spark, and distributed systems is crucial. Experience leading agile teams or filling a tech lead position is beneficial. Proficiency in unit testing, Linux, and working in cloud/data environments is required, while exposure to Kubernetes, ML Pipelines, or MLOps would be advantageous.

Posted 1 week ago

Apply

0.0 - 3.0 years

0 Lacs

Guwahati, Assam

On-site

You will be a Machine Learning Engineer responsible for assisting in the development and deployment of machine learning models and data systems. This entry-level position offers an opportunity to apply your technical skills to real-world challenges and collaborate within a team environment.

Your responsibilities will include:

- Assisting in the design, training, and optimization of machine learning models.
- Supporting the development of scalable data pipelines for machine learning workflows.
- Conducting exploratory data analysis and data preprocessing tasks.
- Collaborating with senior engineers and data scientists to implement solutions.
- Testing and validating machine learning models for accuracy and efficiency.
- Documenting workflows, processes, and key learnings.

You should possess:

- A Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Basic proficiency in Python and familiarity with libraries such as NumPy, Pandas, scikit-learn, and TensorFlow.
- Knowledge of SQL and fundamental machine learning concepts and algorithms.
- Exposure to cloud platforms like AWS, GCP, or Azure (advantageous).

Additionally, you should have:

- 0-1 years of experience in data engineering or related fields.
- Strong analytical skills and the ability to troubleshoot complex issues.
- Leadership skills to guide junior team members and contribute to team success.

Preferred qualifications include:

- A Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proficiency in scikit-learn, PyTorch, and TensorFlow.
- A basic understanding of containerization tools like Docker.
- Exposure to data visualization tools or frameworks.

Your key performance indicators will involve demonstrating progress in applying machine learning concepts and successfully completing tasks within specified timelines.
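The preprocessing tasks listed above typically look something like the following pandas sketch (the dataset and column names are invented for illustration):

```python
# Typical preprocessing pass: impute missing values, then one-hot
# encode a categorical column for downstream models.
import pandas as pd

raw = pd.DataFrame({
    "age": [25, None, 31, 40],
    "signed_up": ["yes", "no", "yes", None],
})

clean = raw.copy()
# Impute: median for the numeric column, a sentinel for the categorical.
clean["age"] = clean["age"].fillna(clean["age"].median())
clean["signed_up"] = clean["signed_up"].fillna("unknown")
# One-hot encode so the frame is purely numeric/boolean for a model.
clean = pd.get_dummies(clean, columns=["signed_up"])
print(clean.shape)
```

Median imputation is shown as one common choice; EDA on the real data would decide whether a mean, a constant, or dropping rows is more appropriate.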

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

Are you passionate about web scraping and ready to take on exciting data-driven projects? Actowiz Solutions is urgently hiring a skilled Senior Python Developer to join our dynamic team in Ahmedabad!

You should have a minimum of 2+ years of experience in Python development, with strong hands-on experience with the Scrapy framework. A deep understanding of XPath/CSS selectors, middleware, and pipelines is essential for this role. Experience in handling CAPTCHAs, IP blocks, and JS-rendered content is also required.

To excel in this role, you should be familiar with proxy rotation, user-agent switching, and headless browsers. Proficiency in working with data formats such as JSON, CSV, and databases is a must. Hands-on experience with Scrapy Splash / Selenium is highly desirable. Additionally, good knowledge of Pandas, Docker, AWS, and Celery will be beneficial for this position.

If you are enthusiastic about working on global data projects and are looking to join a fast-paced team, we would love to hear from you! To apply for this position, please send your resume to hr@actowizsolutions.com / aanchalg.actowiz@gmail.com or contact HR at 8200674053 / 8401366964. You can also DM us directly if you are interested in this opportunity. Feel free to like, share, or tag someone who might be a good fit for this role!

Join us at Actowiz Solutions and be a part of our exciting journey in the field of Python development and web scraping. #PythonJobs #WebScraping #Scrapy #ImmediateJoiner #AhmedabadJobs #PythonDeveloper #DataJobs #ActowizSolutions

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

haryana

On-site

As a Data Engineer at our company, you will be responsible for building and maintaining scalable data pipelines and ETL processes using Python and related technologies. Your primary focus will be on developing efficient data pipelines to handle large volumes of data and optimize processing times. Additionally, you will collaborate closely with our team of data scientists and engineers at Matrix Space.

To qualify for this role, you should have 2-5 years of experience in data engineering or a related field, with strong proficiency in Python programming. You must be well-versed in libraries such as Pandas, NumPy, and SQLAlchemy, and have hands-on experience with data engineering tools like Apache Airflow, Luigi, or similar frameworks. A working knowledge of SQL and experience with relational databases such as PostgreSQL or MySQL is also required.

In addition to technical skills, we are looking for candidates with strong problem-solving abilities who can work both independently and as part of a team. Effective communication skills are essential, as you will be required to explain technical concepts to non-technical stakeholders. The ability to complete tasks efficiently and effectively is a key trait we value in potential candidates.

If you are an immediate joiner and can start within a week, we encourage you to apply for this opportunity. Join our team and be a part of our exciting projects in data engineering.
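A minimal sketch of the kind of Pandas transform step such ETL pipelines involve; the data and column names are hypothetical, and a real pipeline would wrap steps like this in Airflow or Luigi tasks.

```python
# Toy ETL step: drop incomplete rows, then aggregate per region.
import pandas as pd

raw = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "region": ["north", "south", "north", "south"],
    "amount": [100.0, None, 250.0, 80.0],
})

# Cleanse: discard rows missing the measure we aggregate on.
clean = raw.dropna(subset=["amount"])

# Transform: per-region totals, keeping the key as a column.
summary = clean.groupby("region", as_index=False)["amount"].sum()

print(summary)
```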

Posted 1 week ago

Apply

0.0 - 4.0 years

0 Lacs

karnataka

On-site

As a Deep Learning Engineer Intern, you will have the opportunity to work on our core ML stack, focusing on areas such as Computer Vision, Recommendation Systems, and LLM-based Features (optional). Working closely with our senior engineers, you will gain hands-on experience with real data, address edge-case challenges, and rapidly iterate on prototypes that drive our food-robotics machine. You will be an integral part of a focused 2-person full-time ML team, serving as our first intern and reporting directly to the Lead. This small team size ensures significant mentorship opportunities and allows you to make a direct impact on product development.

Your responsibilities will include:
- Working on tasks such as Ingredient Segmentation, Recipe Similarity, and Vision-Language Models
- Packaging and deploying models for on-device inference and AWS services
- Supporting data collection and annotation, refining datasets based on model performance
- Iterating on experiments, tuning hyperparameters, testing augmentations, and documenting results

We are looking for candidates with:
- Strong fundamentals in Deep Learning (Transformers, CNNs) and classical ML (logistic/linear regression)
- Hands-on experience with PyTorch in academic projects, internships, or personal work
- Proficiency in Python and the broader ML ecosystem (NumPy, pandas)
- A solid understanding of training pipelines, evaluation metrics, and experimental rigor
- Eagerness to learn new domains such as NLP and recommendations

Nice-to-have skills include prior exposure to LLMs or multimodal models, experience with computer vision in challenging real-world environments, and projects involving noisy, real-world datasets or edge-case-heavy scenarios.

Working with us, you will enjoy:
- Early ownership of ML features, contributing end-to-end to projects
- Hands-on experience with real robotic systems in homes, witnessing your models in action
- Direct access to senior ML engineers for mentorship in a small, focused team environment
- Rapid feedback loops that allow you to prototype, deploy, and learn quickly
- A clear growth path with strong potential for full-time conversion into our rapidly expanding ML team

We are looking for candidates who are currently enrolled in a B.Tech or M.Tech program (3-4 years into their degree), possess strong math and ML fundamentals beyond using off-the-shelf libraries, thrive in an ambiguous, fast-paced startup environment, and are always curious, enjoying building, experimenting, and sharing insights.

Location: Bangalore (4 days WFO & 1 day WFH)
Duration: 3-6 month internship
Compensation: 1,00,000/month

To apply, ensure your resume includes a project portfolio with the following:
- GitHub profile showcasing ML/deep learning projects
- Public codebase from coursework, internships, or personal work
- Kaggle notebooks or competition solutions with leaderboard standings
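The classical-ML fundamentals the posting mentions (linear/logistic regression) can be illustrated with a tiny NumPy sketch: linear regression fitted by gradient descent on synthetic data, with made-up coefficients.

```python
# Fit y = w*x + b by gradient descent on synthetic data (true w=3.0, b=1.0).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
y = 3.0 * X[:, 0] + 1.0 + rng.normal(scale=0.1, size=200)

w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    pred = w * X[:, 0] + b
    err = pred - y
    # Gradients of the mean-squared-error loss with respect to w and b.
    w -= lr * (err * X[:, 0]).mean()
    b -= lr * err.mean()

print(w, b)  # should land close to 3.0 and 1.0
```

The same loop scaled up (mini-batches, many parameters, autograd) is what PyTorch training pipelines automate.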

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

hyderabad, telangana

On-site

As a Python Backend Engineer specializing in AWS with a focus on GenAI and ML, you will be responsible for designing, developing, and maintaining intelligent backend systems and AI-driven applications. Your primary objective will be to build and scale backend systems while integrating AI/ML models using Django or FastAPI. You will deploy machine learning and GenAI models with frameworks like TensorFlow, PyTorch, or Scikit-learn, and utilize LangChain for GenAI pipelines. Experience with LangGraph will be advantageous in this role.

Collaboration with data scientists, DevOps, and architects is essential to integrate models into production. You will work with AWS services such as EC2, Lambda, S3, SageMaker, and CloudFormation for infrastructure and deployment purposes. Additionally, managing CI/CD pipelines for backend and model deployments will be a key part of your responsibilities. Ensuring the performance, scalability, and security of applications in cloud environments will also fall under your purview.

To be successful in this role, you should have at least 5 years of hands-on experience in Python backend development and a strong background in building RESTful APIs using Django or FastAPI. Proficiency in AWS cloud services is crucial, along with a solid understanding of ML/AI concepts and model deployment practices. Familiarity with ML libraries like TensorFlow, PyTorch, or Scikit-learn is required, as well as experience with LangChain for GenAI applications. Experience with DevOps tools such as Docker, Kubernetes, Git, Jenkins, and Terraform will be beneficial. An understanding of microservices architecture, CI/CD workflows, and agile development practices is also desirable.

Nice-to-have skills include knowledge of LangGraph, LLMs, embeddings, and vector databases, as well as exposure to OpenAI APIs, AWS Bedrock, or similar GenAI platforms. Familiarity with MLOps tools and practices for model monitoring, versioning, and retraining will also be advantageous.

This is a full-time, permanent position with benefits such as health insurance and provident fund. The work location is in-person, and the schedule involves day shifts, Monday to Friday. If you are interested in this opportunity, please contact the employer at +91 9966550640.

Posted 1 week ago

Apply

9.0 - 13.0 years

0 Lacs

chennai, tamil nadu

On-site

You should have 9+ years of experience and be located in Chennai. You must possess in-depth knowledge of Python and have good experience in creating APIs using FastAPI. It is essential to have exposure to data libraries such as Pandas (DataFrames), NumPy, etc., as well as knowledge of Apache open-source components. Experience with Apache Spark, Lakehouse architecture, and open table formats is required. You should also have knowledge of automated unit testing, preferably using PyTest, and exposure to distributed computing. Experience working in a Linux environment is necessary, and working knowledge of Kubernetes would be an added advantage. Basic exposure to ML and MLOps would also be advantageous.
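The PyTest-style unit testing this role calls for boils down to `test_*` functions containing bare asserts that pytest discovers and runs; the function under test here is a hypothetical example.

```python
# A hypothetical function under test plus pytest-style tests for it.
def normalize_scores(scores):
    """Scale a list of numbers into the range [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def test_normalize_scores_range():
    assert normalize_scores([10, 20, 30]) == [0.0, 0.5, 1.0]

def test_normalize_scores_constant_input():
    # Edge case: all-equal inputs must not divide by zero.
    assert normalize_scores([5, 5]) == [0.0, 0.0]

# pytest would collect these automatically; they also run as plain calls:
test_normalize_scores_range()
test_normalize_scores_constant_input()
```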

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

maharashtra

On-site

You will be responsible for working with MS SQL and Python, particularly the Pandas library. Your main tasks will include utilizing SQLAlchemy for data manipulation and providing production support. The ideal candidate should have strong skills in MS SQL and Python, as well as experience with Pandas and SQLAlchemy. A notice period of 0-30 days is required for this position. Candidates with any graduate degree can apply for this role. The job location is flexible and can be in Bangalore, Pune, Mumbai, Hyderabad, Chennai, Gurgaon, or Noida. To apply, please send your resume to career@krazymantra.com.
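The SQL-to-Pandas workflow described here can be approximated with an in-memory SQLite database so the sketch stays self-contained; the table and column names are invented, and a real deployment would point a SQLAlchemy engine at MS SQL instead.

```python
# Pull an aggregate out of a database straight into a DataFrame.
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")  # stand-in for an MS SQL connection
conn.execute("CREATE TABLE trades (symbol TEXT, qty INTEGER)")
conn.executemany("INSERT INTO trades VALUES (?, ?)",
                 [("AAA", 10), ("BBB", 5), ("AAA", 7)])

df = pd.read_sql(
    "SELECT symbol, SUM(qty) AS total FROM trades GROUP BY symbol", conn
)
print(df)
conn.close()
```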

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

As a Senior Data Engineer at our Bangalore office, you will play a crucial role in developing data pipeline solutions to meet business data needs. Your responsibilities will involve designing, implementing, and maintaining structured and semi-structured data models, utilizing Python and SQL for data collection, enrichment, and cleansing. Additionally, you will create data APIs in Python Flask containers, leverage AI for analytics, and build data visualizations and dashboards using Tableau. Your expertise in infrastructure as code (Terraform) and executing automated deployment processes will be vital for optimizing solutions for cost and performance. You will collaborate with business analysts to gather stakeholder requirements and translate them into detailed technical specifications.

Furthermore, you will be expected to stay updated on the latest technical advancements, particularly in the field of GenAI, and recommend changes based on the evolving landscape of data engineering and AI. Your ability to embrace change, share knowledge with team members, and continuously learn will be essential for success in this role.

To qualify for this position, you should have at least 5 years of experience in data engineering, with a focus on Python programming, data pipeline development, and API design. Proficiency in SQL, hands-on experience with Docker, and familiarity with various relational and NoSQL databases are required. Strong knowledge of data warehousing concepts, ETL processes, and data modeling techniques is crucial, along with excellent problem-solving skills and attention to detail. Experience with cloud-based data storage and processing platforms like AWS, GCP, or Azure is preferred.

Bonus skills such as GenAI prompt engineering, proficiency in machine learning technologies like TensorFlow or PyTorch, knowledge of big data technologies, and experience with data visualization tools like Tableau, Power BI, or Looker will be advantageous. Familiarity with Pandas, spaCy, NLP libraries, agile development methodologies, and optimizing data pipelines for cost and performance is also desirable.

Effective communication and collaboration skills in English are essential for interacting with technical and non-technical stakeholders. You should be able to translate complex ideas into simple examples to ensure clear understanding among team members. A bachelor's degree in computer science, IT, engineering, or a related field is required, along with relevant certifications in BI, AI, data engineering, or data visualization tools.

The role will be based at The Leela Office on Airport Road, Kodihalli, Bangalore, with a hybrid schedule: in the office on Tuesdays, Wednesdays, and Thursdays, and working from home on Mondays and Fridays. If you are passionate about turning complex data into valuable insights and have experience mentoring junior members and collaborating with peers, we encourage you to apply for this exciting opportunity.

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

indore, madhya pradesh

On-site

As a Senior AI Developer / AI Architect in the AI team, you will have the opportunity to collaborate with and mentor a team of developers. Your primary focus will be on the Fusion AI team and its AI engine, AI Talos, where you will work with large language models, simulations, and agentic AI to deliver cutting-edge AI capabilities in the service management space.

Your responsibilities will include developing intricate Python-based AI code to ensure the successful delivery of advanced AI functionalities. Additionally, you will play a crucial role in team mentoring, guiding junior and mid-level developers in managing their workload efficiently and ensuring tasks are completed according to the product roadmap. Innovation will be a key aspect of your role, where you will lead the team in staying updated on the latest AI trends, especially large language models and simulations. Furthermore, you will be responsible for software delivery to customers while adhering to standard security practices.

To qualify for this role, you should possess a degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Being Agile trained and practiced is also essential for this position. The ideal candidate will have at least 7 years of experience in developing AI/Data Science solutions, with a senior level of involvement. Proficiency in Python and its libraries, such as Pydantic, PyTorch, PyArrow, scikit-learn, Hugging Face, and Pandas, is required. Extensive knowledge of AI models and their usage, including Llama 2, Mistral AI, training models for classification, and RAG architecture, is necessary. Experience as a full-stack developer and familiarity with tools like GitHub, Jira, and Docker, as well as GPU-based services architecture and setup, are advantageous.

In terms of competencies, strong interpersonal and communication skills are essential. You will collaborate with teams across the business to create end-to-end high-value use cases and communicate effectively with senior management regarding requirement deadlines. Your excellent collaboration and leadership skills will ensure that the team remains motivated and works efficiently towards set targets.

If you are ready to take on this challenging role and contribute to the advancement of AI technologies, we encourage you to apply now at Future@fusiongbs.com.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

haryana

On-site

As a Data Scientist, you will be responsible for analyzing complex data using statistical and machine learning models to derive actionable insights. You will use Python for data analysis and visualization, working with various technologies such as APIs, Linux, databases, big data technologies, and cloud services. Additionally, you will develop innovative solutions for natural language processing and generative modeling tasks, collaborating with cross-functional teams to understand business requirements and translate them into data science solutions. You will work in an Agile framework, participating in sprint planning, daily stand-ups, and retrospectives.

Furthermore, you will research, develop, and analyze computer vision algorithms in areas related to object detection, tracking, product identification and verification, and scene understanding, ensuring model robustness, generalization, accuracy, testability, and efficiency. You will also write product and system development code, design and maintain data pipelines and workflows within Azure Databricks for optimal performance and scalability, and communicate findings and insights effectively to stakeholders through reports and visualizations.

To qualify for this role, you should have a Master's degree in Data Science, Statistics, Computer Science, or a related field, and over 5 years of proven experience in developing machine learning models, particularly for time series data within a financial context. Advanced programming skills in Python or R, with extensive experience in libraries such as Pandas, NumPy, and Scikit-learn, are required. You should also have comprehensive knowledge of AI and LLM technologies, with a track record of developing applications and models. Proficiency in data visualization tools like Tableau, Power BI, or similar platforms is essential.

Exceptional analytical and problem-solving abilities, coupled with meticulous attention to detail, are necessary for this role, as are superior communication skills for the clear and concise presentation of complex findings. Extensive experience in Azure Databricks for data processing, model training, and deployment is preferred, along with proficiency in Azure Data Lake and Azure SQL Database for data storage and management. Experience with Azure Machine Learning for model deployment and monitoring, as well as an in-depth understanding of Azure services and tools for data integration and orchestration, will be beneficial.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

You should have 3-5 years of experience writing and debugging intermediate to advanced Python code, with a good understanding of concepts related to OOP, APIs, and SQL databases. Additionally, you should possess a strong grasp of the fundamentals of Generative AI and large language model (LLM) pipelines such as RAG, along with OpenAI GPT models and experience in NLP and LangChain. It is essential to be familiar with the AWS environment and services like S3, Lambda, Step Functions, and CloudWatch. You should also have excellent analytical and problem-solving skills and be capable of working independently as well as collaboratively in a team-oriented environment. An analytical mind and business acumen are also important qualities for this role. You should demonstrate the ability to engage with client stakeholders at multiple levels and provide consultative solutions across different domains. Familiarity with Python libraries and frameworks such as Pandas, Scikit-learn, PyTorch, and TensorFlow, and with models like BERT and GPT, along with experience in deep learning and machine learning, would be beneficial.
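The retrieval step of a RAG pipeline like the one mentioned above can be sketched with bag-of-words vectors and cosine similarity; the documents and query below are invented, and production systems use learned embeddings plus an LLM for the generation step.

```python
# Toy retrieval: pick the document most similar to the query.
import math
from collections import Counter

docs = [
    "pandas dataframes make tabular data analysis easy",
    "scrapy is a framework for web scraping in python",
    "retrieval augmented generation grounds llm answers in documents",
]

def vectorize(text):
    """Bag-of-words term counts as a sparse vector."""
    return Counter(text.split())

def cosine(a, b):
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

query = vectorize("how does retrieval augmented generation work")
scores = [cosine(query, vectorize(d)) for d in docs]
best = docs[scores.index(max(scores))]
print(best)
```

In a full RAG system the retrieved text would then be injected into the LLM prompt as grounding context.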

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

karnataka

On-site

Chubb is a world-renowned insurance leader with operations spanning 54 countries and territories, offering a wide range of commercial and personal insurance solutions. Known for its extensive product portfolio, robust distribution network, exceptional financial stability, and global presence, Chubb is committed to providing top-notch services to its diverse clientele. The parent company, Chubb Limited, is publicly listed on the New York Stock Exchange (NYSE: CB) and is a constituent of the S&P 500 index, with a workforce of around 43,000 individuals worldwide. For more information, visit www.chubb.com.

Chubb India is embarking on an exciting digital transformation journey fueled by a focus on engineering excellence and analytics. The company takes pride in being officially certified as a Great Place to Work for the third consecutive year, underscoring a culture that nurtures innovation, growth, and collaboration. With a talented team of over 2,500 professionals, Chubb India promotes a startup mindset that encourages diverse perspectives, teamwork, and a solution-oriented approach. The organization is dedicated to honing expertise in engineering, analytics, and automation, empowering its teams to thrive in the ever-evolving digital landscape.

As a Full Stack Data Scientist within the Advanced Analytics team at Chubb, you will play a pivotal role in developing cutting-edge data-driven solutions using state-of-the-art machine learning and AI technologies. This technical position involves leveraging AI and machine learning techniques to automate underwriting processes, enhance claims outcomes, and provide innovative risk solutions. Ideal candidates possess a solid educational background in computer science, data science, statistics, applied mathematics, or related fields, coupled with a penchant for solving complex problems through innovative thinking and a keen focus on delivering actionable business insights. You should be proficient in utilizing a diverse set of tools, strategies, machine learning algorithms, and programming languages to address a variety of challenges.

Key Responsibilities:
- Collaborate with global business partners to identify analysis requirements, manage deliverables, present results, and implement models.
- Leverage a wide range of machine learning, text, and image AI models to extract meaningful features from structured and unstructured data.
- Develop and deploy scalable and efficient machine learning models to automate processes, gain insights, and facilitate data-driven decision-making.
- Package and publish code and solutions in reusable Python formats for seamless integration into CI/CD pipelines and workflows.
- Ensure high-quality code that aligns with business objectives, quality standards, and secure web development practices.
- Build tools for streamlining the modeling pipeline, sharing knowledge, and implementing real-time monitoring and alerting systems for machine learning solutions.
- Establish and maintain automated testing and validation infrastructure, troubleshoot pipelines, and adhere to best practices for versioning, monitoring, and reusability.

Qualifications:
- Proficiency in ML concepts, supervised/unsupervised learning, and ensemble techniques, including models such as Random Forest, XGBoost, and SVM.
- Strong experience with Azure cloud computing, containerization technologies (Docker, Kubernetes), and data science frameworks like Pandas, NumPy, TensorFlow, Keras, PyTorch, and scikit-learn.
- Hands-on experience with DevOps tools such as Git, Jenkins, Sonar, and Nexus, along with data pipeline building, debugging, and unit testing practices.
- Familiarity with AI/ML applications, the Databricks ecosystem, and statistical/mathematical domains.

Why Chubb?
- Join a leading global insurance company with a strong focus on employee experience and a culture that fosters innovation and excellence.
- Benefit from a supportive work environment, industry leadership, and opportunities for personal and professional growth.
- Embrace a startup-like culture that values speed, agility, ownership, and continuous improvement.
- Enjoy comprehensive employee benefits that prioritize health, well-being, learning, and career advancement.

Employee Benefits:
- Access to savings and investment plans, upskilling opportunities, health and welfare benefits, and a supportive work environment that encourages inclusivity and growth.

Join Us: Your contributions are integral to shaping the future at Chubb. If you are passionate about integrity, innovation, and inclusion and ready to make a difference, we invite you to be part of Chubb India's journey.

Apply Now: Chubb India Career Page

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

haryana

On-site

You will be responsible for developing clean and modular Python code for scalable data pipelines. Your role will involve using Pandas to drive data transformation and analysis workflows. Additionally, you will be required to integrate with LLM APIs such as OpenAI's to build smart document solutions. Building robust REST APIs using FastAPI or Flask for data and document services will be a key aspect of this role. Experience working with Azure cloud services such as Functions, Blob Storage, and App Services is necessary. An added bonus would be the ability to integrate with MongoDB and support document workflows. This is a contract-to-hire position based in Gurgaon, with a duration of 6 months, following IST shift timings.
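The "clean and modular" pipeline style this posting asks for can be sketched as small composable functions; the data and column names below are invented for illustration.

```python
# Each stage is a small function; the pipeline is their composition.
import pandas as pd

def load(rows):
    """Ingest raw records into a DataFrame."""
    return pd.DataFrame(rows)

def clean(df):
    """Drop records with no document text."""
    return df.dropna(subset=["text"]).copy()

def enrich(df):
    """Add a derived feature: word count per document."""
    df["n_words"] = df["text"].str.split().str.len()
    return df

def run_pipeline(rows):
    return enrich(clean(load(rows)))

result = run_pipeline([
    {"doc_id": 1, "text": "invoice for march"},
    {"doc_id": 2, "text": None},
    {"doc_id": 3, "text": "contract renewal notice 2024"},
])
print(result)
```

Keeping each stage a pure function makes the steps individually unit-testable and easy to rearrange or expose behind a REST endpoint.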

Posted 1 week ago

Apply