
82 Scrapy Jobs - Page 3

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

0 years

0 - 0 Lacs

Bengaluru

Remote

Source: Glassdoor

Job Description: We are seeking a creative and independent Web Crawler Developer to join our Seattle-based Construction Team. The ideal candidate will have a keen eye for detail, a passion for problem-solving, and the ability to think outside the box to develop sophisticated web scraping solutions.

Responsibilities:
- Design, implement, and maintain web crawlers that can effectively extract data from various websites (see the sketch below).
- Analyze web page structures and adapt crawlers to extract relevant information efficiently.
- Monitor crawler performance and make necessary adjustments to ensure optimal data collection.
- Work independently to identify new opportunities for data extraction and offer insightful recommendations.
- Ensure compliance with legal and ethical standards for data scraping.
- Collaborate with data analysts and other team members to understand data needs and improve data accuracy.
- Keep up to date with the latest web scraping technologies and best practices.

Qualifications:
- Strong experience with web scraping tools and frameworks (e.g., Scrapy, BeautifulSoup, Selenium).
- Proficiency in programming languages such as Python, Java, or others relevant to web crawling.
- Experience handling and parsing different data formats such as HTML, JSON, and XML.
- Excellent problem-solving skills and the ability to think outside the box.
- Ability to work independently and manage multiple tasks efficiently.
- Solid understanding of web protocols (HTTP, HTTPS) and web technologies.
- Familiarity with version control systems, preferably Git.
- Knowledge of data privacy laws and ethical web scraping practices.

Preferred:
- Experience with cloud services such as AWS or Azure for deploying and managing web crawlers.
- Understanding of databases and data storage solutions.
- Previous experience in a similar role or related projects.

Job Type: Contractual / Temporary
Contract length: 2 months
Pay: ₹76,000.00 - ₹80,000.00 per month
Benefits: Work from home
Supplemental Pay: Performance bonus
Expected Start Date: 03/06/2025
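
For context on the core skill this role asks for, here is a minimal Scrapy spider sketch. The start URL, CSS selectors, and item fields are placeholders for illustration, not taken from any listing on this page.

```python
import scrapy


class ProductSpider(scrapy.Spider):
    """Minimal spider sketch: crawl listing pages and yield structured items."""
    name = "products"
    start_urls = ["https://example.com/catalog"]  # placeholder URL

    def parse(self, response):
        # Extract one item per product card (selectors are hypothetical).
        for card in response.css("div.product"):
            yield {
                "title": card.css("h2::text").get(),
                "price": card.css("span.price::text").get(),
                "url": response.urljoin(card.css("a::attr(href)").get()),
            }
        # Follow pagination until no "next" link remains.
        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```

A standalone spider like this can be run with `scrapy runspider spider.py -o items.json` to write the scraped items to JSON.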

Posted 3 weeks ago

Apply

0.0 - 1.0 years

0 Lacs

Pitampura, Delhi, Delhi

On-site

Source: Indeed

Job Title: Data Analyst (Python & Web Scraping Expert)
Location: Netaji Subhash Place, Pitampura, New Delhi
Department: Data Analytics / Share Recovery

Job Overview: We are seeking a detail-oriented and results-driven Data Analyst to join our team. The ideal candidate will have expertise in Python programming, web scraping, and data analysis, with a focus on IEPF share recovery. The role involves collecting, processing, and analyzing data from multiple online sources, providing actionable insights to support business decision-making.

Key Responsibilities:
- Data Scraping: Use Python and web scraping techniques to gather data from financial, regulatory, and shareholding-related websites for IEPF (Investor Education and Protection Fund) share recovery.
- Data Cleaning & Preprocessing: Clean, process, and structure raw data for analysis. Ensure data quality and integrity by identifying and correcting errors in datasets (see the sketch below).
- Data Analysis & Visualization: Analyze large datasets to extract actionable insights regarding share recovery and trends in investor shareholding. Present findings through visualizations (e.g., graphs, dashboards).
- Reporting: Prepare and present detailed reports on share recovery patterns, trends, and forecasts. Present findings to the management team to help drive business decisions.
- Automation & Optimization: Build and maintain automated web scraping systems to regularly fetch updated shareholding data, optimizing the data pipeline for efficiency.
- Collaboration: Work closely with business stakeholders to understand data requirements and deliver reports or visualizations tailored to IEPF share recovery needs.

Required Skills & Qualifications:
- Technical Skills: Strong proficiency in Python for data analysis and automation. Expertise in web scraping using libraries such as BeautifulSoup, Selenium, and Scrapy. Experience with data manipulation and analysis using Pandas, NumPy, and other relevant libraries. Familiarity with SQL for data extraction and querying relational databases. Knowledge of data visualization tools like Matplotlib, Seaborn, or Tableau for presenting insights in an easy-to-understand format.
- Experience: Minimum of 2-3 years as a Data Analyst or in a similar role, with a focus on Python programming and web scraping. Experience working with financial or investment data, particularly in areas such as IEPF, share recovery, or investor relations. Strong problem-solving skills with the ability to analyze complex datasets and generate actionable insights.
- Additional Skills: Strong attention to detail and ability to work with large datasets. Ability to work in a collaborative team environment. Familiarity with cloud platforms (e.g., AWS, Google Cloud) and data storage (e.g., databases, cloud data lakes) is a plus.
- Education: Bachelor's or Master's degree in Data Science, Computer Science, Statistics, Finance, or a related field.
- Soft Skills: Strong communication skills, with the ability to explain technical concepts to non-technical stakeholders. Ability to prioritize tasks and manage multiple projects simultaneously. Strong organizational and time-management skills.

Preferred Skills:
- Experience in the financial industry or understanding of regulatory frameworks (e.g., IEPF regulations and procedures).
- Familiarity with machine learning models and predictive analytics for forecasting share recovery trends.
- Ability to automate workflows and optimize existing data collection pipelines.

Job Requirements:
- Comfortable working in a fast-paced environment.
- Ability to think critically and provide insights that drive strategic decisions.
- Self-motivated and capable of working independently with minimal supervision.
- Willingness to stay updated with the latest data analysis techniques and web scraping technologies.

Job Type: Full-time
Pay: ₹20,000.00 - ₹32,000.00 per month
Schedule: Day shift
Education: Bachelor's (Preferred)
Experience: total work: 1 year (Required)
Work Location: In person
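
The cleaning-and-preprocessing step this listing describes is routinely done with Pandas. Below is a minimal sketch under assumed data: the column names and values are hypothetical scraped shareholding rows, not real IEPF data.

```python
import pandas as pd

# Hypothetical raw rows as they might come off a scraped shareholding page.
raw = pd.DataFrame({
    "company": [" Acme Ltd ", "Acme Ltd", None],
    "shares":  ["1,200", "300", "450"],
    "as_of":   ["2024-03-31", "2024-03-31", "2024-03-31"],
})

clean = (
    raw.dropna(subset=["company"])                    # drop rows missing a key field
       .assign(
           company=lambda d: d["company"].str.strip(),                 # trim whitespace
           shares=lambda d: d["shares"].str.replace(",", "").astype(int),  # "1,200" -> 1200
           as_of=lambda d: pd.to_datetime(d["as_of"]),                 # parse dates
       )
       .drop_duplicates(subset=["company", "as_of"])  # one row per company/date
)
print(clean.dtypes)
```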

Posted 3 weeks ago

Apply

14.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Sigmoid enables business transformation using data and analytics, leveraging real-time insights to make accurate and fast business decisions by building modern data architectures using cloud and open source. Some of the world's largest data producers engage with Sigmoid to solve complex business problems. Sigmoid brings deep expertise in data engineering, predictive analytics, artificial intelligence, and DataOps. Sigmoid has been recognized as one of the fastest-growing technology companies in North America (2021) by the Financial Times, Inc. 5000, and Deloitte Technology Fast 500.

Offices: New York | Dallas | San Francisco | Lima | Bengaluru. This role is for our Bengaluru office.

Why Join Sigmoid?
- Sigmoid provides the opportunity to push the boundaries of what is possible by seamlessly combining technical expertise and creativity to tackle intrinsically complex business problems and convert them into straightforward data solutions.
- Despite being continuously challenged, you are not alone. You will be part of a fast-paced, diverse environment as a member of a high-performing team that works together to energize and inspire each other by challenging the status quo.
- Vibrant, inclusive culture of mutual respect and fun through both work and play.

Roles and Responsibilities:
- Convert broad vision and concepts into a structured data science roadmap, and guide a team to successfully execute on it.
- Handle end-to-end client AI & analytics programs in a fluid environment; the role combines hands-on contribution, technical team management, and client interaction.
- Discover solutions hidden in large datasets and drive business results with data-based insights.
- Contribute to internal product development initiatives related to data science.
- Drive the project management required to deliver complex projects, including effort and time estimation.
- Be proactive, with full ownership of the engagement; build scalable client-engagement processes for faster turnaround and higher accuracy.
- Define the technology strategy and roadmap for client accounts, and guide implementation of that strategy within projects.
- Manage team members to ensure that the project plan is adhered to over the course of the project.
- Build a trusted-advisor relationship with client IT management and internal account leadership.

Mandated Skills:
- A B.Tech/M.Tech/MBA from a top-tier institute, preferably in a quantitative subject.
- 14+ years of hands-on experience in applied machine learning, AI, and analytics.
- Experience with scientific programming in scripting languages like Python, R, SQL, NoSQL, and Spark, with ML tools and cloud technology (AWS, Azure, GCP).
- Experience with Python libraries such as NumPy, Pandas, scikit-learn, TensorFlow, Scrapy, BERT, etc.
- Strong grasp of the depth and breadth of machine learning, deep learning, data mining, and statistical concepts, and experience developing models and solutions in these areas.
- Expertise in client engagement: understanding complex problem statements and offering solutions in domains such as Supply Chain, Manufacturing, CPG, and Marketing.

Desired Skills:
- Deep understanding of ML algorithms for common use cases in both structured and unstructured data ecosystems.
- Comfortable with large-scale data processing and distributed computing.
- Providing required inputs to sales and pre-sales activities.
- A self-starter who can work well with minimal guidance.
- Excellent written and verbal communication skills.

Posted 3 weeks ago

Apply

3.0 - 7.0 years

1 - 2 Lacs

Thane, Navi Mumbai, Mumbai (All Areas)

Work from Office

Source: Naukri

Key Responsibilities:
- Develop and maintain automated web scraping scripts using Python libraries such as Beautiful Soup, Scrapy, and Selenium.
- Optimize scraping pipelines for performance, scalability, and resource efficiency.
- Handle dynamic websites and CAPTCHA-solving, and implement IP rotation techniques for uninterrupted scraping (see the sketch below).
- Process and clean raw data, ensuring accuracy and integrity in extracted datasets.
- Collaborate with cross-functional teams to understand data requirements and deliver actionable insights.
- Leverage APIs when web scraping is not feasible, managing authentication and request optimization.
- Document processes, pipelines, and troubleshooting steps for maintainable and reusable scraping solutions.
- Ensure compliance with legal and ethical web scraping practices, implementing security safeguards.

Requirements:
- Education: Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience: 2+ years of Python development experience, with at least 1 year focused on web scraping.
- Technical Skills: Proficiency in Python and libraries like Beautiful Soup, Scrapy, and Selenium. Experience with regular expressions (regex) for data parsing. Strong knowledge of HTTP protocols, cookies, headers, and user-agent rotation. Familiarity with databases (SQL and NoSQL) for storing scraped data. Hands-on experience with data manipulation libraries such as pandas and NumPy. Experience working with APIs and managing third-party integrations. Familiarity with version control systems like Git.
- Bonus Skills: Knowledge of containerization tools like Docker. Experience with distributed scraping solutions and task queues (e.g., Celery, RabbitMQ). Basic understanding of data visualization tools.
- Non-Technical Skills: Strong analytical and problem-solving skills. Excellent communication and documentation skills. Ability to work independently and collaboratively in a team environment.
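
The user-agent and IP rotation mentioned above commonly looks like the following sketch with the requests library. The user-agent strings and proxy hosts are placeholders; real pools would come from configuration or a proxy provider.

```python
import random
import requests

# Placeholder pools; real deployments load these from config or a provider.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (X11; Linux x86_64)",
]
PROXIES = ["http://proxy1:8080", "http://proxy2:8080"]  # hypothetical hosts


def fetch(url: str) -> requests.Response:
    """Fetch a URL with a randomly rotated user agent and proxy."""
    proxy = random.choice(PROXIES)
    resp = requests.get(
        url,
        headers={"User-Agent": random.choice(USER_AGENTS)},
        proxies={"http": proxy, "https": proxy},
        timeout=10,
    )
    resp.raise_for_status()  # surface HTTP errors instead of parsing error pages
    return resp
```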

Posted 3 weeks ago

Apply

1.0 - 3.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Source: LinkedIn

Job Description

- Developing ETL Pipelines: Design, develop, and maintain scalable and adaptable data pipelines using Python or PySpark to facilitate the smooth migration of data from diverse data sources. Host these ETL pipelines on AWS EC2, AWS Glue, or AWS EMR, and store the data in cloud database services such as Google BigQuery, AWS S3, Redshift, RDS, or Delta Lake. This includes managing significant data migrations and ensuring seamless transitions between systems.
- Implementing a Data Quality Check Framework: Establish and execute data quality checks and validation pipelines using tools such as Python, PySpark, Athena or BigQuery, S3, and Delta Lake to uphold the integrity and accuracy of our datasets.
- Creating Mechanisms for Generating ETL Migration Status Reports: Devise a framework to generate concise summary reports detailing data migration progress, and promptly alert stakeholders to any failures within ETL pipelines; this ensures swift resolution of data discrepancies arising from pipeline failures. Implement this using standard SMTP, Python, AWS SNS, AWS SES, AWS S3, Delta Lake, and similar services.
- Data Transformations and Processing: Implement various data encryption and decryption techniques using Python and PySpark libraries, and generate insightful reports and analyses from processed data to aid informed business decision-making.
- Development of APIs: Build APIs using frameworks such as Flask or Django, incorporating diverse authentication and authorization techniques to safeguard the exchange of data. Host these APIs on an EC2 server using services like Gearman, or write the API logic in Lambda and expose it through the cloud's API Gateway service.
- Code Versioning and Deployment: Leverage GitHub extensively for robust code versioning, deployment of the latest code iterations, seamless transitioning between code versions, and merging branches to streamline development and release processes.
- Automation: Design and implement code automation solutions to streamline and automate manual tasks effectively.

Required Candidate Profile

Soft Skills (Must Have):
- Adept problem-solving skills to efficiently address complex challenges encountered during data engineering tasks.
- Clear and effective communication skills, facilitating seamless collaboration and comprehension across diverse teams and stakeholders.
- Proficiency in both independent and collaborative work, fostering productivity and synergy within a fast-paced team environment.
- A high level of adaptability to changing requirements, customer dynamics, and work demands.
- Self-motivated and responsible; takes ownership and initiative in tasks.

Soft Skills (Good To Have):
- Project management experience, offering valuable insights and contributions towards efficient project execution and delivery.
- Good presentation skills.
- Excellent customer-handling skills.

Technical Skills:
- Proficiency in SQL (Structured Query Language) for querying and manipulating databases.
- Experience with relational database systems like MySQL, PostgreSQL, or Oracle, and NoSQL databases like MongoDB.
- Proficiency in object-oriented programming concepts such as encapsulation, inheritance, and polymorphism.
- Knowledge of data warehousing concepts and experience with solutions like Amazon Redshift, Google BigQuery, or Snowflake.
- Experience developing ETL pipelines using Python and PySpark (see the sketch below).
- Knowledge of Python libraries/frameworks like Pandas, NumPy, or Spark for data processing and analysis.
- Familiarity with big data processing frameworks like Apache Hadoop and Apache Spark for handling large-scale datasets and performing distributed computing.
- Knowledge of cloud-based services like AWS S3, AWS Glue, AWS EMR, AWS Lambda, Athena, Azure Data Lake, Google BigQuery, etc.
- Familiarity with version control systems like Git for managing codebase changes, collaborating with team members, and maintaining code quality.
- Experience with web scraping libraries and frameworks like BeautifulSoup, Scrapy, Puppeteer, or Selenium is highly beneficial.
- Knowledge of regular expressions for pattern matching and extracting specific data formats from text.
- Understanding of HTTP protocols: how web servers respond to requests, and how to send requests, handle responses, and manage sessions and cookies.
- Familiarity with XPath expressions or CSS selectors for targeting specific elements within an HTML structure.

Required Experience
The ideal candidate will have a minimum of 1-3 years of relevant experience in data engineering roles, with a demonstrated history of successfully developing and maintaining ETL pipelines, handling big data migrations, and ensuring data quality and validation. Must have excellent knowledge and programming capability in Python and PySpark, working on any of the cloud platforms (AWS, Azure, or Google Cloud).

Role
Industry Type: Engineering
Functional Area: Data Engineering, Software Development, Automation
Employment Type: Full Time, Permanent
Role Category: System Design/Implementation
Education: A minimum educational requirement is graduation.

Here at Havas, across the group we pride ourselves on being committed to offering equal opportunities to all potential employees and have zero tolerance for discrimination. We are an equal opportunity employer and welcome applicants irrespective of age, sex, race, ethnicity, disability, and other factors that have no bearing on an individual's ability to perform their job.
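
A minimal PySpark extract-transform-load sketch of the kind this listing describes is shown below. The bucket paths, column names, and aggregation are hypothetical placeholders.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw CSV (path and schema are placeholders).
orders = spark.read.csv("s3://example-bucket/raw/orders.csv",
                        header=True, inferSchema=True)

# Transform: filter, derive a date column, and aggregate per day.
daily = (
    orders.filter(F.col("status") == "complete")
          .withColumn("order_date", F.to_date("created_at"))
          .groupBy("order_date")
          .agg(F.sum("amount").alias("revenue"),
               F.count("*").alias("orders"))
)

# Load: write partitioned Parquet for downstream analytics.
daily.write.mode("overwrite").partitionBy("order_date") \
     .parquet("s3://example-bucket/curated/daily_revenue/")
```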

Posted 3 weeks ago

Apply

5.0 - 10.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Source: LinkedIn

Job Description

Data Engineer (Manager) - Web Scraping (Experience: 5 to 10 years)

The Data Engineer specializing in web scraping will be responsible for designing, implementing, and maintaining automated systems to extract, process, and analyse data from various online sources. This role is critical for gathering valuable insights to support business decisions and strategies.

Responsibilities:
- Lead and manage a team of data engineers specializing in web scraping and data extraction.
- Design, develop, and maintain scalable web scraping pipelines and ETL processes.
- Collaborate with cross-functional teams to understand data requirements and deliver effective solutions.
- Ensure data quality, integrity, and security across all scraping systems.
- Optimize web scraping workflows for performance and efficiency.
- Evaluate and integrate new tools and technologies for web scraping and data processing.
- Develop and enforce best practices for web scraping, including compliance with legal and ethical standards.
- Provide mentorship and professional development opportunities for team members.

Skills Required:
- Proficiency in web scraping tools and frameworks (e.g., Scrapy, Beautiful Soup, Selenium).
- Strong programming skills in Python, Java, or similar languages.
- Experience with data storage solutions (SQL, NoSQL, cloud databases).
- Knowledge of APIs and data integration techniques.
- Familiarity with big data technologies (e.g., Hadoop, Spark).
- Leadership and team management skills.
- Excellent problem-solving and analytical abilities.

Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or related fields.
- Experience in handling large-scale data extraction projects.
- Knowledge of data governance and compliance regulations.

Role: Data Engineer (Manager) - Web Scraping
Industry Type: ITES/BPO/KPO
Functional Area: Finance/Accounts/Taxation
Required Education: Any Graduate
Employment Type: Full Time, Permanent
Key Skills: web scraping tools, data processing, database management, cloud platforms, data visualization

Other Information:
Job Code: GO/JC/21526/2025
Recruiter Name: SPriya

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Job Title: Senior Python Developer
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 3-8 years

About Darwix AI
Darwix AI is one of India's fastest-growing AI startups, transforming enterprise sales with our GenAI-powered conversational intelligence and real-time agent assist suite. Our platform is used by high-growth enterprises across India, MENA, and Southeast Asia to improve sales productivity, personalize customer conversations, and unlock revenue intelligence in real time. We are backed by marquee VCs and 30+ angel investors, and led by alumni from IITs, IIMs, and BITS with deep experience in building and scaling products from India for the world.

Role Overview
As a Senior Python Developer at Darwix AI, you will be at the core of our engineering team, leading the development of scalable, secure, and high-performance backend systems that support AI workflows, real-time data processing, and enterprise-grade integrations. This role requires deep technical expertise in Python, a strong foundation in backend architecture, and the ability to collaborate closely with AI, product, and infrastructure teams. You will take ownership of critical backend modules and shape the engineering culture in a rapidly evolving, high-impact environment.

Key Responsibilities

System Architecture & API Development:
- Design, implement, and optimize backend services and microservices using Python frameworks such as FastAPI, Django, or Flask (see the sketch below).
- Lead the development of scalable RESTful APIs that integrate with frontend, mobile, and AI systems.
- Architect low-latency, fault-tolerant services supporting real-time sales analytics and AI inference.

Data Pipelines & Integrations:
- Build and optimize ETL pipelines to manage structured and unstructured data from internal and third-party sources.
- Integrate APIs with CRMs, telephony systems, transcription engines, and enterprise platforms like Salesforce, Zoho, and LeadSquared.
- Lead scraping and data ingestion efforts from large-scale, dynamic web sources using Playwright, BeautifulSoup, or Scrapy.

AI/ML Enablement:
- Work closely with AI engineers to build infrastructure for LLM/RAG pipelines, vector DBs, and real-time AI decisioning.
- Implement backend support for prompt orchestration, Langchain flows, and function-calling interfaces.
- Support model deployment, inference APIs, and logging/monitoring for large-scale GenAI pipelines.

Database & Storage Design:
- Optimize database design and queries using MySQL, PostgreSQL, and MongoDB.
- Architect and manage Redis and Kafka for caching, queueing, and real-time communication.

DevOps & Quality:
- Ensure continuous delivery through version control (Git), CI/CD pipelines, testing frameworks, and Docker-based deployments.
- Identify and resolve bottlenecks related to performance, memory, or data throughput.
- Adhere to best practices in code quality, testing, security, and documentation.

Leadership & Collaboration:
- Mentor junior developers and participate in code reviews.
- Collaborate cross-functionally with product, AI, design, and sales engineering teams.
- Contribute to architectural decisions, roadmap planning, and scaling strategies.

Qualifications
- 4-8 years of backend development experience in Python, with a deep understanding of object-oriented and functional programming.
- Hands-on experience with FastAPI, Django, or Flask in production environments.
- Proven experience building scalable microservices, data pipelines, and backend systems that support live applications.
- Strong command of REST API architecture, database optimization, and data modeling.
- Solid experience with web scraping tools, automation frameworks, and external API integrations.
- Knowledge of AI tools like Langchain, HuggingFace, vector DBs (Pinecone, Weaviate, FAISS), or RAG architectures is a strong plus.
- Familiarity with cloud infrastructure (AWS/GCP), Docker, and containerized deployments.
- Comfortable working in fast-paced, high-ownership environments with shifting priorities and dynamic problem-solving.

Bonus:
- Prior experience in an early-stage SaaS startup or AI-first product environment.
- Contributions to open-source Python projects or developer communities.
- Experience working with real-time streaming systems (Kafka, Redis Streams, WebSockets).

What We Offer:
- Competitive fixed salary + performance-linked incentives.
- Equity options for high-impact performers.
- Opportunity to work on cutting-edge GenAI and SaaS products used by global enterprises.
- Autonomy, rapid decision-making, and direct interaction with founders and senior leadership.
- High-growth environment with clear progression toward Tech Lead or Engineering Manager roles.
- Access to tools, compute, and learning resources to accelerate your technical and leadership growth.
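
For the FastAPI microservices this role centers on, a minimal endpoint sketch looks like the following. The route, request model, and ranking logic are hypothetical stand-ins, not Darwix AI's actual service.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="search-sketch")


class Query(BaseModel):
    text: str
    top_k: int = 5


@app.post("/search")
async def search(q: Query) -> dict:
    # Placeholder ranking; a real service would query a vector index here.
    results = [{"rank": i + 1, "doc": f"doc-{i}"} for i in range(q.top_k)]
    return {"query": q.text, "results": results}

# Run locally (assuming uvicorn is installed): uvicorn app:app --reload
```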

Posted 3 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Join our dynamic team as a Web Scraping Engineer and play a crucial role in driving our data-driven strategies. As a key player, you will develop and maintain innovative solutions to automate data extraction, parsing, and structuring from various online sources. Your expertise will empower our business intelligence, market research, and decision-making processes. If you are passionate about automation, dedicated to ethical practices, and have a knack for solving complex problems, we want you!

Key Responsibilities:
- Design, implement, and maintain web scraping solutions to collect structured data from publicly available online sources and APIs.
- Parse, clean, and transform extracted data to ensure accuracy and usability for business needs.
- Store and organize collected data in databases or spreadsheets for easy access and analysis.
- Monitor and optimize scraping processes for efficiency, reliability, and compliance with relevant laws and website policies.
- Troubleshoot issues related to dynamic content, anti-bot measures, and changes in website structure (see the sketch below).
- Collaborate with data analysts, scientists, and other stakeholders to understand data requirements and deliver actionable insights.
- Document processes, tools, and workflows for ongoing improvements and knowledge sharing.

Requirements:
- Proven experience in web scraping, data extraction, or web automation projects.
- Proficiency in Python or similar programming languages, and familiarity with libraries such as BeautifulSoup, Scrapy, or Selenium.
- Strong understanding of HTML, CSS, JavaScript, and web protocols.
- Experience with data cleaning, transformation, and storage (e.g., CSV, JSON, SQL/NoSQL databases).
- Knowledge of legal and ethical considerations in web scraping, with a commitment to compliance with website terms of service and data privacy regulations.
- Excellent problem-solving and troubleshooting skills.
- Ability to work independently and manage multiple projects simultaneously.

Preferred Qualifications:
- Experience with cloud platforms (AWS, GCP, Azure) for scalable data solutions.
- Familiarity with workflow automation and integration with communication tools (e.g., email, Slack, APIs).
- Background in market research, business intelligence, or related fields.

Skills: data extraction, data cleaning, BeautifulSoup, business intelligence, web automation, JavaScript, web scraping, data privacy regulations, web protocols, Selenium, Scrapy, SQL, data transformation, NoSQL, CSS, market research, automation, Python, HTML
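
Handling JS-rendered (dynamic) content, as mentioned above, typically means driving a real browser. Here is a minimal Selenium 4 sketch; the URL and CSS selector are placeholders, and it assumes Chrome is installed.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")  # render without a visible browser window
driver = webdriver.Chrome(options=options)

try:
    driver.get("https://example.com/listings")  # placeholder URL
    # Wait for the JS-rendered cards instead of sleeping for a fixed time.
    cards = WebDriverWait(driver, 15).until(
        EC.presence_of_all_elements_located((By.CSS_SELECTOR, "div.listing"))
    )
    rows = [card.text for card in cards]
    print(f"extracted {len(rows)} listings")
finally:
    driver.quit()  # always release the browser process
```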

Posted 4 weeks ago

Apply

1.0 years

0 Lacs

India

Remote

Source: LinkedIn

Location: Remote

About the Role
We are seeking an experienced Integration Developer with a strong background in Python and JavaScript frameworks. The ideal candidate will have a minimum of 1 year in software engineering, including at least 1 year dedicated to ethical web scraping and automation. This role offers flexibility, with options for remote work and in-person collaboration.

Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 1 year of professional experience in software engineering.
- At least 1 year of hands-on experience with web scraping and data extraction.
- Proficiency in Python and JavaScript, including frameworks such as Scrapy, BeautifulSoup, Puppeteer, or Playwright.
- Strong understanding of RESTful APIs and experience with API integration (see the sketch below).
- Familiarity with handling anti-bot measures, including CAPTCHA solving and IP rotation.
- Experience with cloud platforms (e.g., AWS, Azure) and version control systems like Git.
- Excellent problem-solving skills and attention to detail.

Key Responsibilities:
- Develop and maintain automated web scraping solutions to extract structured data from various online platforms.
- Implement and manage API integrations to facilitate seamless data exchange between systems.
- Ensure compliance with ethical standards and legal guidelines in all web scraping activities.
- Collaborate with cross-functional teams to understand integration requirements and deliver scalable solutions.
- Monitor and optimize data pipelines for performance, reliability, and accuracy.
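
A common shape for the API-integration work described here is walking a paginated REST endpoint. The sketch below assumes a page/per_page convention and a bearer token; real APIs vary (cursor tokens, Link headers), so the loop would be adapted to the target's documentation.

```python
import requests


def fetch_all(base_url: str, token: str) -> list[dict]:
    """Walk a paginated REST endpoint and collect every record."""
    session = requests.Session()
    session.headers["Authorization"] = f"Bearer {token}"
    records, page = [], 1
    while True:
        resp = session.get(base_url,
                           params={"page": page, "per_page": 100},
                           timeout=10)
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            break  # an empty page marks the end
        records.extend(batch)
        page += 1
    return records

# Hypothetical usage:
# items = fetch_all("https://api.example.com/v1/items", token="...")
```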

Posted 4 weeks ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

eGrove Systems Pvt Ltd is looking for a Senior Python Developer to join its team of experts.

Role: Senior Python Developer
Experience: 4+ years
Notice Period: Immediate to 15 days
Location: Chennai/Madurai

Skills Requirement:
- Hands-on software development skills and deep technical expertise across the entire software delivery process.
- Forward-thinking, skilled individual.
- Structured, organized, and a good communicator.
- Writes reusable, testable, and efficient code.

Required Skills:
- 3+ years of strong experience in Python and 2 years in the Django web framework.
- Experience or knowledge in implementing various design patterns.
- Good understanding of the MVC framework and object-oriented programming.
- Experience in PGSQL/MySQL and MongoDB.
- Good knowledge of different frameworks, packages, and libraries: Django/Flask, Django ORM, Unit Test, NumPy, Pandas, Scrapy, etc.
- Experience developing in a Linux environment, Git, and Agile methodology.
- Good to have: knowledge of any one of the JavaScript frameworks (jQuery, Angular, ReactJS).
- Good to have: experience implementing charts and graphs using various libraries.
- Good to have: experience in multi-threading and REST API management.

About Company
eGrove Systems is a leading IT solutions provider specializing in eCommerce, enterprise application development, AI-driven solutions, digital marketing, and IT consulting services. Established in 2008, we are headquartered in East Brunswick, New Jersey, with a global presence. Our expertise includes custom software development, mobile app solutions, DevOps, cloud services, AI chatbots, SEO automation tools, and workforce learning systems. We focus on delivering scalable, secure, and innovative technology solutions to enterprises, start-ups, and government agencies. At eGrove Systems, we foster a dynamic and collaborative work culture driven by innovation, continuous learning, and teamwork. We provide our employees with cutting-edge technologies, professional growth opportunities, and a supportive work environment to thrive in their careers.

Posted 1 month ago

Apply

2.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Source: LinkedIn

Urgent Hiring - Senior Python Developer (Web Scraping)

Location: Ahmedabad (Work from Office)
Joining: Immediate Joiners Only
Experience: 2+ Years (Mandatory)

Are you passionate about web scraping and ready to take on exciting data-driven projects? Actowiz Solutions is urgently hiring a skilled Senior Python Developer to join our dynamic team in Ahmedabad!

Key Skills We're Looking For:
- Strong hands-on experience with the Scrapy framework
- Deep understanding of XPath/CSS selectors, middleware, and pipelines (see the sketch below)
- Experience handling CAPTCHAs, IP blocks, and JS-rendered content
- Familiarity with proxy rotation, user-agent switching, and headless browsers
- Proficiency with data formats (JSON, CSV) and databases
- Hands-on experience with Scrapy Splash / Selenium
- Good knowledge of Pandas, Docker, AWS, and Celery

How to Apply:
Send your resume to hr@actowizsolutions.com / aanchalg.actowiz@gmail.com
Contact HR: 8200674053 / 8401366964
Or DM me directly!

If you're ready to join a fast-paced team and work on global data projects, we'd love to hear from you! Feel free to like, share, or tag someone who might be a fit!

#PythonJobs #WebScraping #Scrapy #ImmediateJoiner #AhmedabadJobs #PythonDeveloper #DataJobs #ActowizSolutions
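
Scrapy middleware, one of the skills listed above, commonly looks like this minimal user-agent-rotation downloader middleware. The module path and agent strings are hypothetical.

```python
import random

# Hypothetical middleware module; enable it in settings.py, e.g.:
# DOWNLOADER_MIDDLEWARES = {"myproject.middlewares.RotateUserAgentMiddleware": 543}

USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
]


class RotateUserAgentMiddleware:
    """Scrapy downloader middleware that rotates the User-Agent per request."""

    def process_request(self, request, spider):
        request.headers["User-Agent"] = random.choice(USER_AGENTS)
        return None  # returning None lets Scrapy continue normal downloading
```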

Posted 1 month ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

We are hiring a Python Developer!

Role: Sr. Python Developer
Work Locations: Teynampet, Chennai / KK Nagar, Madurai
Work from Office: 1pm to 10pm, Monday - Friday
Mode of Interview: In-Person
Experience: 3+ Years

Required Skills:
- 3+ years of strong experience in Python and 2 years in the Django web framework.
- Experience or knowledge in implementing various design patterns.
- Good understanding of the MVC framework and object-oriented programming.
- Experience in PGSQL/MySQL and MongoDB.
- Good knowledge of different frameworks, packages, and libraries: Django/Flask, Django ORM, Unit Test, NumPy, Pandas, Scrapy, etc.
- Experience developing in a Linux environment, Git, and Agile methodology.
- Good to have: knowledge of any one of the JavaScript frameworks (jQuery, Angular, ReactJS).
- Good to have: experience implementing charts and graphs using various libraries.
- Good to have: experience in multi-threading and REST API management.

Interested candidates can send their resume to dharshanamurthy.v@egrovesys.com or WhatsApp 9342768767.

About Company
eGrove Systems is a leading IT solutions provider specializing in eCommerce, enterprise application development, AI-driven solutions, digital marketing, and IT consulting services. Established in 2008, we are headquartered in East Brunswick, New Jersey, with a global presence. Our expertise includes custom software development, mobile app solutions, DevOps, cloud services, AI chatbots, SEO automation tools, and workforce learning systems. We focus on delivering scalable, secure, and innovative technology solutions to enterprises, startups, and government agencies. At eGrove Systems, we foster a dynamic and collaborative work culture driven by innovation, continuous learning, and teamwork. We provide our employees with cutting-edge technologies, professional growth opportunities, and a supportive work environment to thrive in their careers.

Posted 1 month ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Source: LinkedIn

We are accepting applications for an experienced Data Engineer with a strong background in data scraping, cleaning, transformation, and automation. The ideal candidate will be responsible for building robust data pipelines, maintaining data integrity, and generating actionable dashboards and reports to support business decision-making.

Key Responsibilities:
- Develop and maintain scripts for scraping data from various sources, including APIs, websites, and databases.
- Perform data cleaning, transformation, and normalization to ensure consistency and usability across all data sets.
- Design and implement relational and non-relational data tables and frames for scalable data storage and analysis.
- Build automated data pipelines to ensure timely and accurate data availability (see the sketch below).
- Create and manage interactive dashboards and reports using tools such as Power BI, Tableau, or similar platforms.
- Write and maintain data automation scripts to streamline ETL (Extract, Transform, Load) processes.
- Ensure data quality, governance, and compliance with internal and external regulations.
- Monitor and optimize the performance of data workflows and pipelines.

Qualifications & Skills:
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- Minimum of 5 years of experience in a data engineering or similar role.
- Proficient in Python (especially for data scraping and automation), with strong hands-on experience with Pandas, NumPy, and other data manipulation libraries.
- Experience with web scraping tools and techniques (e.g., BeautifulSoup, Scrapy, Selenium).
- Strong SQL skills and experience working with relational databases (e.g., PostgreSQL, MySQL) and data warehouses (e.g., Redshift, Snowflake, BigQuery).
- Familiarity with data visualization tools like Power BI, Tableau, or Looker.
- Knowledge of ETL tools and orchestration frameworks such as Apache Airflow, Luigi, or Prefect.
- Experience with version control systems like Git and collaborative platforms like Jira or Confluence.
- Strong understanding of data security, privacy, and governance best practices.
- Excellent problem-solving skills and attention to detail.

Preferred Qualifications:
- Experience with cloud platforms such as AWS, GCP, or Azure.
- Familiarity with NoSQL databases like MongoDB, Cassandra, or Elasticsearch.
- Understanding of CI/CD pipelines and DevOps practices related to data engineering.
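
The orchestrated pipelines this listing mentions are often expressed as an Airflow DAG. Below is a minimal sketch; the DAG name and task bodies are placeholders, and the `schedule` keyword assumes Airflow 2.4+ (older versions use `schedule_interval`).

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def scrape():
    print("scrape step (placeholder)")


def transform():
    print("transform step (placeholder)")


with DAG(
    dag_id="scrape_and_load",       # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    scrape_task = PythonOperator(task_id="scrape", python_callable=scrape)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    scrape_task >> transform_task   # run transform only after scrape succeeds
```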

Posted 1 month ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Source: LinkedIn

Job Title: Python AI Backend Developer
Pay Bracket: INR 6-10 LPA (commensurate with skills & experience)
Location: Mumbai (Andheri W)
Company: ZANG, the AI-powered e-commerce search engine

About ZANG
ZANG is re-imagining online shopping with a unified, AI-driven search experience that pinpoints the right product, at the right price, across every major marketplace. We're an early-stage, venture-backed team that moves fast, ships often, and sweats the details.

The Role
We're looking for a Python AI Backend Developer who can turn cutting-edge AI models and readily available AI services into production-ready, automated workflows. You'll design the back-end engines that power our search, recommendation, and data-ingestion pipelines, so shoppers get relevant results in milliseconds.

Key Responsibilities
- AI Workflow Engineering: Orchestrate end-to-end workflows (ingestion → processing → vector indexing → API) using Airflow, Prefect, or similar. Combine open-source LLMs/embeddings (e.g., OpenAI, Hugging Face) with in-house models to deliver ranking, personalization, and semantic search.
- Backend Development: Design and scale Python micro-services (FastAPI/Falcon/Flask) that expose clean REST & gRPC endpoints. Implement robust authentication, rate-limiting, and logging/observability.
- Data & Scraping Pipelines: Maintain modular scrapers for key e-commerce sites; handle rotating proxies, CAPTCHAs, and dynamic pages. Transform raw HTML into structured datasets ready for model training and search indexing.
- Storage & Retrieval: Optimise vector / NoSQL stores (Pinecone, Milvus, MongoDB, Elasticsearch) for low-latency reads and high-throughput writes (see the sketch below). Implement data versioning and retention policies.
- Collaboration & Reporting: Work closely with front-end, DevOps, and product teams to ship features on time. Write clear technical docs and participate in code reviews.

Must-Have Skills
- Expert-level Python (type hints, async IO, packaging, unit tests).
- Hands-on with web-scraping stacks: Scrapy / BeautifulSoup / Selenium / Playwright.
- Solid grasp of AI/ML & NLP concepts and libraries (PyTorch, TensorFlow, spaCy, transformers).
- Experience integrating third-party AI APIs (OpenAI, Cohere, Google Gemini, etc.).
- Production experience with REST/gRPC, containerisation (Docker), and CI/CD.
- Working knowledge of vector databases or search engines (e.g., Pinecone, Qdrant, Elasticsearch).
- Git-centric workflow and comfort with Agile/GitHub boards.

Nice-to-Have
- Prior work on e-commerce or large-scale product-catalogue data.
- AWS stack (ECS, Lambda, S3, Step Functions) or equivalent cloud experience.
- Familiarity with recommendation systems, learning-to-rank, or reinforcement-learning-to-rank.
- Knowledge of orchestration tools (Airflow, Prefect, Dagster).

What's in It for You
- Competitive pay + stock options.
- The freedom to choose the best tools and ship production code from Day 1.
- A front-row seat in a high-growth startup solving a real consumer pain-point.
- A culture that values clear thinking, quick execution, and continuous learning.

How to Apply
Skip the queue: email your résumé (or GitHub/portfolio) to amit.letsbegin@gmail.com with the subject "Python AI Backend Developer - ZANG". We'll set up a quick call to dive deeper into the role and your work. If building scalable AI products is your idea of fun, we'd love to hear from you.
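
The vector-index side of this role can be illustrated with a minimal FAISS sketch. The embedding dimension, random vectors, and index choice are placeholders; real embeddings would come from a model.

```python
import numpy as np
import faiss  # pip install faiss-cpu

# Hypothetical setup: 10k product embeddings of dimension 128.
dim = 128
rng = np.random.default_rng(0)
product_vectors = rng.random((10_000, dim), dtype=np.float32)

index = faiss.IndexFlatL2(dim)   # exact L2 search; larger catalogs often use IVF indexes
index.add(product_vectors)       # build the index in memory

query = rng.random((1, dim), dtype=np.float32)
distances, ids = index.search(query, 5)   # the 5 nearest products
print(ids[0], distances[0])
```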

Posted 1 month ago

Apply

3.0 - 7.0 years

1 - 2 Lacs

Mumbai, Thane, Navi Mumbai

Work from Office

Source: Naukri

Key Responsibilities:
- Develop and maintain automated web scraping scripts using Python libraries such as BeautifulSoup, Scrapy, and Selenium.
- Optimize scraping pipelines for performance, scalability, and resource efficiency.
- Handle dynamic websites and CAPTCHA-solving, and implement IP rotation techniques for uninterrupted scraping.
- Process and clean raw data, ensuring accuracy and integrity in extracted datasets.
- Collaborate with cross-functional teams to understand data requirements and deliver actionable insights.
- Leverage APIs when web scraping is not feasible, managing authentication and request optimization.
- Document processes, pipelines, and troubleshooting steps for maintainable and reusable scraping solutions.
- Ensure compliance with legal and ethical web scraping practices, implementing security safeguards.

Requirements:
- Education: Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience: 2+ years of Python development experience, with at least 1 year focused on web scraping.
- Technical Skills: Proficiency in Python and libraries like BeautifulSoup, Scrapy, and Selenium. Experience with regular expressions (regex) for data parsing. Strong knowledge of HTTP protocols, cookies, headers, and user-agent rotation. Familiarity with databases (SQL and NoSQL) for storing scraped data. Hands-on experience with data manipulation libraries such as pandas and NumPy. Experience working with APIs and managing third-party integrations. Familiarity with version control systems like Git.
- Bonus Skills: Knowledge of containerization tools like Docker. Experience with distributed scraping solutions and task queues (e.g., Celery, RabbitMQ). Basic understanding of data visualization tools.
- Non-Technical Skills: Strong analytical and problem-solving skills. Excellent communication and documentation skills. Ability to work independently and collaboratively in a team environment.

Candidates available for face-to-face interviews are preferred.

Posted 1 month ago

Apply

2.0 - 4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

The D. E. Shaw group is a global investment and technology development firm with more than $65 billion in investment capital as of December 1, 2024, and offices in North America, Europe, and Asia. Since our founding in 1988, our firm has earned an international reputation for successful investing based on innovation, careful risk management, and the quality and depth of our staff. We have a significant presence in the world's capital markets, investing in a wide range of companies and financial instruments in both developed and developing economies.

We are looking for resourceful and exceptional candidates for the Data Engineer role within our product development teams based out of Hyderabad. At DESIS, the Data Engineers develop web robots, or web spiders, that crawl through the web and retrieve data in the form of HTML, plain text, PDFs, Excel, and any other format, structured or unstructured. The engineer's job functions also include scraping website data into a structured format and building automated and custom reports on the downloaded data that are used as knowledge for business purposes. The team also works on automating end-to-end data pipelines.

What you'll do day-to-day:
As a member of the Data Engineering team, you will be responsible for various aspects of data extraction: understanding the data requirements of the business group; reverse-engineering the website, its technology, and the data retrieval process; and re-engineering by developing web robots to automate the extraction of the data, along with building monitoring systems to ensure the integrity and quality of the extracted data. You will also be responsible for managing changes to a website's dynamics and layout to ensure clean downloads, building scraping and parsing systems to transform raw data into a structured form, and offering operations support to ensure high availability and zero data losses. Additionally, you will be involved in tasks such as storing the extracted data in the recommended databases, building high-performing, scalable data extraction systems, and automating data pipelines.

Who we're looking for:
Basic qualifications:
- 2-4 years of experience in website data extraction and scraping
- Good knowledge of relational databases, writing complex SQL queries, and dealing with ETL operations on databases
- Proficiency in Python for performing operations on data
- Expertise in Python frameworks like Requests, UrlLib2, Selenium, Beautiful Soup, and Scrapy
- A good understanding of HTTP requests and responses, HTML, CSS, XML, JSON, and JavaScript
- Expertise with debugging tools in Chrome to reverse-engineer website dynamics
- A good academic background and accomplishments
- A BCA/MCA/BS/MS degree with a good foundation and practical application of knowledge in data structures and algorithms
- Problem-solving and analytical skills
- Good debugging skills

Interested candidates can apply through our website: https://www.deshawindia.com/recruit/jobs/Adv/Link/SnrMemDEFeb25

We encourage candidates with relevant experience looking to restart their careers after a break to apply for this position. Learn about Recommence, our gender-neutral return-to-work initiative. The firm offers excellent benefits, a casual, collegial working environment, and an attractive compensation package. For further information about our recruitment process, including how applicant data will be processed, please visit https://www.deshawindia.com/careers

Members of the D. E. Shaw group do not discriminate in employment matters on the basis of sex, race, colour, caste, creed, religion, pregnancy, national origin, age, military service eligibility, veteran status, sexual orientation, marital status, disability, or any other protected class.

Posted 1 month ago

Apply

2.0 - 4.0 years

0 Lacs

India

On-site

Source: LinkedIn

Alternative Path is seeking skilled software developers to collaborate on client projects with an asset management firm. In this role, you will collaborate with individuals across various company departments to shape and innovate new products and features for our platform, enhancing existing ones. You will have a large degree of independence and trust, but you won't be isolated; the support of the Engineering team leads, the Product team leads, and every other technology team member is behind you. This is an opportunity to join a team-first meritocracy and help grow an entrepreneurial group inside Alternative Path. You will be asked to contribute, given ownership, and will be expected to make your voice heard.

Role Summary:
Performing web scraping using various scraping techniques, then utilizing Python's Pandas library for data cleaning and manipulation, ingesting the data into a database/warehouse, and scheduling the scrapers using Airflow or other tools.

Role Overview
The Web Scraping Team at Alternative Path is seeking a creative and detail-oriented developer to contribute to client projects. The team develops essential applications, datasets, and alerts for various teams within the client's organization, supporting their daily investment decisions. The mission is to maintain operational excellence by delivering high-quality proprietary datasets, timely notifications, and exceptional service. We are seeking someone who is self-motivated and self-sufficient, with a passion for tinkering and a love for automation.

In your role, you will:
- Collaborate with analysts to understand and anticipate requirements.
- Design, implement, and maintain web scrapers for a wide variety of alternative datasets.
- Perform data cleaning, exploration, transformation, etc. of scraped data.
- Collaborate with cross-functional teams to understand data requirements and implement efficient data processing workflows.
- Author QC checks to validate data availability and integrity (see the sketch below).
- Maintain alerting systems and investigate time-sensitive data incidents to ensure smooth day-to-day operations.
- Design and implement products and tools to enhance the web scraping platform.

Qualifications

Must have:
- Bachelor's/master's degree in computer science or any related field
- 2-4 years of software development experience
- Strong Python and SQL/database skills
- Strong expertise in using the Pandas library (Python)
- Experience with web technologies (HTML/JS, APIs, etc.)
- Proven work experience with large data sets for data cleaning, transformation, manipulation, and replacement
- Excellent verbal and written communication skills
- Aptitude for designing infrastructure, data products, and tools for data scientists

Preferred:
- Familiarity with scraping and common scraping tools (Selenium, Scrapy, Fiddler, Postman, XPath)
- Experience containerizing workloads with Docker (Kubernetes a plus)
- Experience with build automation (Jenkins, GitLab CI/CD)
- Experience with AWS technologies like S3, RDS, SNS, SQS, Lambda, etc.
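
The QC checks mentioned above might look like this Pandas sketch. The column names ("symbol", "price", "scraped_at") and the 24-hour staleness threshold are hypothetical, not Alternative Path's actual rules.

```python
import pandas as pd


def qc_checks(df: pd.DataFrame) -> list[str]:
    """Run basic availability/integrity checks on a scraped dataset.

    Returns human-readable failures; an empty list means all checks passed.
    """
    failures = []
    if df.empty:
        return ["dataset is empty"]
    if df["symbol"].isna().any():
        failures.append("missing symbols")
    if (df["price"] <= 0).any():
        failures.append("non-positive prices")
    if df.duplicated(subset=["symbol", "scraped_at"]).any():
        failures.append("duplicate symbol/timestamp rows")
    # Staleness: flag data whose newest row is older than a day.
    newest = pd.to_datetime(df["scraped_at"], utc=True).max()
    age = pd.Timestamp.now(tz="UTC") - newest
    if age > pd.Timedelta(hours=24):
        failures.append(f"data is stale by {age}")
    return failures
```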

Posted 1 month ago

Apply

0.0 - 1.0 years

0 Lacs

Mohali, Punjab

On-site

Source: Indeed

Python Developer
Experience: 1-3 years

Responsibilities:
- Develop and maintain web applications using Python frameworks like Django and Flask.
- Utilize data analysis libraries such as NumPy, pandas, and matplotlib for extracting insights from large datasets.
- Implement machine learning models using TensorFlow and PyTorch for various applications.
- Handle HTTP requests efficiently using libraries like Requests.
- Perform web scraping tasks using Beautiful Soup and Scrapy.
- Collaborate with cross-functional teams to design and implement scalable solutions.
- Ensure the security and integrity of data by integrating with database systems like SQL (e.g., PostgreSQL, MySQL) or NoSQL databases (e.g., MongoDB).
- Stay updated with the latest trends and technologies in the Python ecosystem.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Strong proficiency in the Python programming language.
- Experience with Python frameworks such as Django and Flask for web development.
- Familiarity with data analysis libraries like NumPy, pandas, and matplotlib.
- Knowledge of machine learning frameworks such as TensorFlow and PyTorch.
- Experience in handling HTTP requests and web scraping.
- Understanding of database systems like SQL (e.g., PostgreSQL, MySQL) or NoSQL databases (e.g., MongoDB).
- Excellent problem-solving skills and attention to detail.
- Ability to work independently as well as in a team environment.
- Strong communication and interpersonal skills.

Job Type: Full-time, Permanent, Regular (5 days working)
Location: Plot no. 968, Sector 82, JLPL Industrial Area, Mohali, Punjab (140306)
Schedule: Day shift, Monday to Friday
Supplemental Pay: Overtime pay, Performance bonus
Ability to commute/relocate: Mohali, Punjab 140306: Reliably commute or planning to relocate before starting work (Required)
Education: Bachelor's (Preferred)
Experience: Total work: 1 year (Required); Python: 1 year (Required)
Work Location: In person

Posted 1 month ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana

On-site

Source: Indeed

Position: Data Engineer/Analyst
Experience: Minimum 5 years in data engineer/analyst roles
Location: Sector 20, Gurugram, Haryana, India
Job Type: Full-Time
Working Hours: 7:00 AM - 4:00 PM IST, Monday to Friday

Key Responsibilities:
- Data Extraction & Automation: Build or maintain scripts/APIs to automatically extract business data (name, address, phone, etc.) from Google Maps and other relevant sources. Clean and structure the scraped data for sales/marketing campaigns.
- Customer & Lead Data Analysis: Analyse existing customer and lead datasets to identify trends, segments, and upsell/cross-sell opportunities. Create and maintain dashboards or reports to monitor KPIs and campaign effectiveness. Implement customer segmentation models to support targeted outreach strategies.
- Telemarketing Data Analysis: Analyse cold calling and telemarketing performance data to uncover productivity gaps and suggest improvements. Help optimise call strategies by identifying the best times, industries, and segments to target.
- Collaboration & Strategy: Work with sales and marketing teams to translate business needs into technical solutions. Recommend data-driven strategies to improve conversion, retention, and sales growth.

Required Skills & Experience:
- Proven experience as a Data Analyst, Data Engineer, or in a similar role.
- Experience with web scraping tools (e.g., Python with Beautiful Soup, Selenium, Scrapy, or the Google Maps API).
- Strong skills in SQL and Python or R for data manipulation and analysis.
- Proficiency with data visualisation tools (e.g., Power BI, Tableau, Looker, or Google Data Studio).
- Understanding of sales and marketing workflows, particularly lead generation and telemarketing processes.
- Experience working with CRM platforms and marketing data is a plus.

Preferred Qualifications:
- Bachelor's degree in Data Science, Computer Science, Statistics, or a related field.
- Familiarity with privacy regulations and best practices for data handling.
- Previous experience in B2B sales environments is highly desirable.

Team HR Vision tech
shweta.visiotech03@gmail.com
8368356119

Job Type: Full-time
Pay: ₹35,000.00 - ₹40,000.00 per month
Benefits: Provident Fund
Schedule: Morning shift
Supplemental Pay: Performance bonus
Work Location: In person

Posted 1 month ago

Apply

3.0 - 6.0 years

0 Lacs

Sadar, Uttar Pradesh, India

On-site

Source: LinkedIn

Profile: Python Developer
Experience: 3 to 6 years
Requirements: Expertise in Python development, AWS, web crawling, and databases (MySQL, SQL Server), etc.
Location: Work From Office
Working Days: 5
Prefer immediate joiners.

Job Description:
- Collaborate with development teams to identify and define application requirements.
- Develop scalable and efficient code using the Python programming language.
- Conduct thorough testing and debugging to ensure functionality and performance.
- Design and implement back-end components to enhance system performance.
- Evaluate and prioritize client feature requests, aligning them with business objectives.
- Integrate data storage solutions to improve application efficiency.
- Optimize existing databases to streamline operations and enhance functionality.
- Develop digital tools for monitoring and analyzing online traffic, providing insights for continuous improvement.
- Write clean, efficient, and scalable code to support application growth.
- Perform rigorous testing to ensure program stability and reliability.
- Enhance existing systems by implementing necessary improvements and updates.
- Coordinate with internal teams to gather user requirements and deliver tailored technical solutions.
- Develop and maintain web scrapers using Python libraries such as Scrapy, BeautifulSoup, Selenium, and Requests.
- Automate data extraction, processing, and storage pipelines to optimize workflow efficiency.
- Work with SQL databases (MySQL, PostgreSQL, etc.) to ensure optimal data storage and retrieval.
- Handle API integrations for seamless third-party data collection and automation.
- Implement proxy management, CAPTCHA-bypass techniques, and session handling to ensure uninterrupted data extraction (see the sketch below).
- Ensure compliance with legal and ethical guidelines in web scraping practices.
- Collaborate with data analysts and developers to seamlessly integrate extracted data into applications.
- Troubleshoot and resolve issues related to web crawling and automation scripts to maintain smooth operations.
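
Session handling, one of the duties above, is commonly done with a persistent requests.Session. The endpoints and credentials below are hypothetical; real login flows often also require CSRF tokens.

```python
import requests

# Hypothetical endpoints; real targets differ.
LOGIN_URL = "https://example.com/login"
DATA_URL = "https://example.com/account/data"

with requests.Session() as session:  # one Session reuses cookies and connections
    session.headers["User-Agent"] = "Mozilla/5.0 (compatible; data-bot)"
    resp = session.post(LOGIN_URL,
                        data={"user": "demo", "password": "demo"},
                        timeout=10)
    resp.raise_for_status()               # server sets a session cookie on success
    data = session.get(DATA_URL, timeout=10)  # cookie is sent automatically
    data.raise_for_status()
    print(data.text[:200])
```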

Posted 1 month ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Source: LinkedIn

Skills: Python, TensorFlow, PyTorch, Scikit-learn, NLP, Pandas, NumPy, Data Visualization

About Us
We are a growing tech company based in Ahmedabad, building AI-powered enterprise applications for the BFSI sector, government bodies, and the e-Auction and Procure-to-Pay domains. Our platforms include eAuction, FinTech applications, and procurement automation, powered by cutting-edge AI technologies.

Requirements:
- Final-year student / recent graduate (Computer Science, IT, Data Science, etc.)
- Solid foundation in Python programming
- Academic or personal project experience in any ML framework (XGBoost preferred)
- Familiar with, or eager to learn: vector databases (FAISS, Weaviate), LangChain / RAG frameworks, Scrapy or other web scraping tools, Pandas, NumPy, Scikit-learn, Transformers

Nice to Have:
- Exposure to the OpenAI API, HuggingFace, or LLM-based projects
- Interest in Finance / eAuction / Procure-to-Pay / e-Governance domains
- Git or version control understanding

What We Offer:
- Chance to work on live AI projects
- Performance-based full-time opportunity after internship
- A collaborative, growth-focused work culture

Posted 1 month ago

Apply

0.0 - 1.0 years

0 Lacs

Mohali, Punjab

On-site


Python Developer
Experience: 1-3 years

Responsibilities:
- Develop and maintain web applications using Python frameworks like Django and Flask.
- Use data analysis libraries such as NumPy, pandas, and matplotlib to extract insights from large datasets.
- Implement machine learning models using TensorFlow and PyTorch for various applications.
- Handle HTTP requests efficiently using libraries like Requests.
- Perform web scraping tasks using Beautiful Soup and Scrapy.
- Collaborate with cross-functional teams to design and implement scalable solutions.
- Ensure the security and integrity of data by integrating with SQL databases (e.g., PostgreSQL, MySQL) or NoSQL databases (e.g., MongoDB).
- Stay updated on the latest trends and technologies in the Python ecosystem.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Strong proficiency in the Python programming language.
- Experience with Python frameworks such as Django and Flask for web development.
- Familiarity with LangChain/LlamaIndex/ML/vector-DB technologies is a plus, enhancing the team's capabilities in data management and analysis.
- Proficiency in deploying Python applications to the AWS, Azure, or GCP cloud platforms.
- Experience with the FastAPI framework for building efficient and robust web APIs to support our software ecosystem (a minimal FastAPI sketch follows this listing).
- Familiarity with data analysis libraries like NumPy, pandas, and matplotlib.
- Knowledge of machine learning frameworks such as TensorFlow and PyTorch.
- Understanding of SQL databases (e.g., PostgreSQL, MySQL) or NoSQL databases (e.g., MongoDB).
- Experience in handling HTTP requests and web scraping.
- Excellent problem-solving skills and attention to detail.
- Ability to work independently as well as in a team environment.
- Strong communication and interpersonal skills.

Job Type: Full-time, Permanent, Regular; 5 days working
Location: Plot No. 968, Sector 82, JLPL Industrial Area, Mohali, Punjab (140306)
Supplemental Pay: Yearly performance-based bonus, overtime pay, and others
Benefits: Personality development and training, paid leave, menstrual leave, maternity leave, compensatory off, and other special leaves

Job Types: Full-time, Permanent
Pay: ₹15,114.49 - ₹40,000.00 per month
Schedule: Morning shift
Supplemental Pay: Overtime pay; performance bonus
Education: Bachelor's (Preferred)
Experience: Python: 1 year (Preferred)
Work Location: In person
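Since the requirements call out FastAPI specifically, here is a minimal hedged sketch; the routes and the Lead model are hypothetical examples, not this employer's API.

    # Minimal FastAPI sketch: one health check and one typed endpoint.
    # Route and model names are hypothetical.
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class Lead(BaseModel):
        name: str
        city: str
        score: float = 0.0

    @app.get("/health")
    def health():
        return {"status": "ok"}

    @app.post("/leads")
    def create_lead(lead: Lead):
        # A real service would persist this to PostgreSQL or MongoDB.
        return {"received": lead.model_dump()}  # Pydantic v2; use .dict() on v1

Served locally with "uvicorn main:app --reload", FastAPI also generates interactive docs at /docs out of the box.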

Posted 1 month ago

Apply

0.0 - 1.0 years

0 Lacs

Jaipur, Rajasthan

On-site


Experience: 1+ year
Location: Jaipur

Roles and Responsibilities
- Expert in Python, with knowledge of at least one Python web framework, such as Django or Flask
- Develop and maintain a service that extracts website data using scrapers and APIs across multiple websites
- Understanding of the threading limitations of Python and multi-process architecture (a minimal multiprocessing sketch follows this listing)
- Extract structured / unstructured data
- Write reusable, testable, and efficient code
- Manage testing and bug fixes
- Knowledge of scraping, scraping frameworks, APIs (integration and creation), and web crawlers
- Familiarity with front-end technologies (like JavaScript and HTML5)

Performance Goals
- Sets and achieves realistic, challenging goals; honours commitments and delivers on promises
- Deals with clients and other employees professionally at all times
- Strong team player, able to collaborate effectively with the team to deliver software on time and within budget
- Ability to work independently
- Strong analytical and problem-solving capability
- Excellent verbal and written communication

Qualifications
- Good knowledge of Python and MySQL
- Programming experience with relational platforms like MySQL; non-relational (NoSQL) platforms like DynamoDB/MongoDB would be an advantage
- Good knowledge of web scraping and APIs
- Familiarity with scraping tools like BeautifulSoup, Selenium, etc.
- Development experience with strong skills in the pandas, JSON, multiprocessing, and NumPy libraries
- Good troubleshooting and debugging skills
- Strong interpersonal, verbal, and written communication skills in English

Job Type: Full-time
Pay: ₹7,000.00 - ₹35,000.00 per month
Schedule: Day shift
Ability to commute/relocate: Jaipur, Rajasthan: Reliably commute or planning to relocate before starting work (Required)
Education: Bachelor's (Required)
Experience: Beautiful Soup or Scrapy: 1 year (Required); APIs: 1 year (Preferred); Python: 1 year (Required)
Work Location: In person
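The threading-limitations bullet above refers to CPython's GIL: only one thread executes Python bytecode at a time, so CPU-bound scraping pipelines scale with processes rather than threads. A minimal hedged multiprocessing sketch follows; the URLs and the work function are placeholders.

    # Minimal multiprocessing sketch: process several pages in parallel
    # worker processes, sidestepping the GIL. Targets are illustrative.
    from multiprocessing import Pool

    import requests

    URLS = [
        "https://example.com/page/1",  # hypothetical targets
        "https://example.com/page/2",
        "https://example.com/page/3",
    ]

    def fetch_and_measure(url):
        resp = requests.get(url, timeout=10)
        return url, resp.status_code, len(resp.text)

    if __name__ == "__main__":  # guard required under the spawn start method
        with Pool(processes=3) as pool:
            for url, status, size in pool.map(fetch_and_measure, URLS):
                print(f"{url}: HTTP {status}, {size} bytes")

For purely I/O-bound fetching, threads or asyncio are often enough despite the GIL; multiprocessing earns its keep once parsing or post-processing becomes CPU-heavy.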

Posted 1 month ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


About The Role
Grade Level (for internal use): 10

The Team: As a member of the Data Transformation - Cognitive Engineering team you will work on building and deploying ML-powered products and capabilities to power natural language understanding, data extraction, information retrieval and data sourcing solutions for S&P Global Market Intelligence and our clients. You will spearhead deployment of AI products and pipelines while leading by example in a highly engaging work environment. You will work in a (truly) global team and be encouraged toward thoughtful risk-taking and self-initiative.

What’s In It For You
- Be a part of a global company and build solutions at enterprise scale
- Lead a highly skilled and technically strong team (including leadership)
- Contribute to solving high-complexity, high-impact problems
- Build production-ready pipelines from ideation to deployment

Responsibilities
- Design, develop and deploy ML-powered products and pipelines
- Mentor a team of senior and junior data scientists / ML engineers in delivering large-scale projects
- Play a central role in all stages of the AI product development life cycle, including:
  - Designing machine learning systems and model scaling strategies
  - Researching and implementing ML and deep learning algorithms for production
  - Running necessary ML tests and benchmarks for model validation
  - Fine-tuning, retraining and scaling existing model deployments
  - Extending existing ML libraries and writing packages for reproducing components
- Partner with business leaders, domain experts, and end-users to gain business understanding and data understanding, and to collect requirements
- Interpret results and present them to business leaders
- Manage production pipelines for enterprise-scale projects
- Perform code reviews and optimization for your projects and team
- Lead and mentor by example, including project scrums

Technical Requirements
- Proven track record as a senior / lead ML engineer
- Expert proficiency in Python (NumPy, Pandas, spaCy, scikit-learn, PyTorch/TF2, HuggingFace, etc.)
- Excellent exposure to large-scale model deployment strategies and tools
- Excellent knowledge of the ML and deep learning domain
- Solid exposure to information retrieval, web scraping and data extraction at scale
- Exposure to the following technologies: R-Shiny/Dash/Streamlit, SQL, Docker, Airflow, Redis, Celery, Flask/Django/FastAPI, PySpark, Scrapy
- Experience with SOTA models related to NLP and expertise in text matching techniques, including sentence transformers, word embeddings, and similarity measures (a minimal sentence-transformers sketch follows this listing)
- Open to learning new technologies and programming languages as required
- A Master’s / PhD from a recognized institute in a relevant specialization

Good To Have
- 6-7+ years of relevant experience in ML engineering
- Prior substantial experience in the Economics/Financial industry
- Prior work to show on GitHub, Kaggle, StackOverflow, etc.

What’s In It For You?

Our Purpose
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People
We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective.
Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits
We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference.

For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.

If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law.
Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Job ID: 315679
Posted On: 2025-05-20
Location: Gurgaon, Haryana, India
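To make the sentence-transformers requirement above concrete, here is a minimal hedged sketch of text matching via cosine similarity; the model name and sentences are illustrative choices, not S&P Global's stack.

    # Minimal sentence-transformers sketch: embed texts and score
    # similarity with cosine similarity. Model choice is illustrative.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small general model

    corpus = [
        "We transform data into actionable market intelligence.",
        "The weather in Gurgaon is hot in May.",
    ]
    query = "Turning raw data into insight for financial markets."

    corpus_emb = model.encode(corpus, convert_to_tensor=True)
    query_emb = model.encode(query, convert_to_tensor=True)

    scores = util.cos_sim(query_emb, corpus_emb)[0]
    for text, score in zip(corpus, scores):
        print(f"{float(score):.3f}  {text}")

The same pattern underpins semantic search and record matching at scale, usually with an approximate index (e.g., FAISS) in front of the embeddings.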

Posted 1 month ago

Apply


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies