
13 Beautiful Soup Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 - 9.0 years

14 - 22 Lacs

Pune

Work from Office

Responsibilities:
* Design, develop, test, and maintain scalable Python applications using Scrapy, Selenium, and Requests.
* Implement anti-bot systems and data pipeline solutions with Airflow and Kafka.

Share CV at recruitment@fortitudecareer.com. Flexi working. Work from home.
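As a rough illustration of the parsing half of the Scrapy/Selenium/Requests workflow this role describes, here is a minimal BeautifulSoup sketch; the `extract_titles` helper, the CSS selector, and the sample markup are all invented for the example:

```python
from bs4 import BeautifulSoup

def extract_titles(html: str) -> list[str]:
    """Pull the text of every <h2 class="title"> from a page snippet."""
    soup = BeautifulSoup(html, "html.parser")
    return [tag.get_text(strip=True) for tag in soup.select("h2.title")]

# Hardcoded snippet standing in for a fetched page.
sample = '<div><h2 class="title"> Python Dev </h2><h2 class="title">Data Eng</h2></div>'
titles = extract_titles(sample)  # ['Python Dev', 'Data Eng']
```

In a real pipeline the HTML would come from `requests.get(...)` or a Selenium-driven browser rather than a hardcoded string.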

Posted 1 week ago

Apply

2.0 - 7.0 years

5 - 10 Lacs

Bangalore Rural, Bengaluru

Work from Office

About Norwin Technologies

Norwin Technologies is part of the Accscient Group, which comprises 17 companies. We are a 20-year-old, US-based IT consulting and services company that helps clients deliver technology and operational excellence, specializing in IT infrastructure. Norwin Technologies was founded in 2003, is headquartered in Boston, and now operates offices in four countries (USA, India, Canada, and Mexico), with 700+ professionals across them. In India we have offices in Bangalore, Hyderabad, Chennai, and Pune, and employ 150 technical professionals. We work with leading technology brands such as Dell Technologies, VMware, and Ahead, partnering with them to deliver IT infrastructure projects ranging from remote deployment, migrations, and upgrades to operations, in both data centres and the cloud.

Why Norwin Technologies? We are a strategic partner to companies like Dell and VMware, which means you get to work on the same projects as Dell or VMware employees, unlike at Indian services companies such as Wipro, TCS, and Infosys, which work largely on support projects. As a Norwin employee you get to upgrade your technical skills. We are focused on developing our technical talent through training, certification, and mentorship. We are an agile, fast-paced organization aiming to grow to three times our current headcount within the next 12 months, with a focus on employee development and well-being that gives people an opportunity to grow in their careers. All of our employees have experience working in large, leading MNCs. We are also a Great Place to Work certified company.

Job Title: Smart Python Programmer (2.5+ Years Experience)
Location: 5 Days Office
Employment Type: Full-Time

About the Role: We're looking for smart, high-agency Python programmers with a solid understanding of modern AI tooling and programming workflows.
This role isn't about checking boxes or having built a specific kind of AI pipeline; it's about your ability to leverage the Python ecosystem and AI tools to build smart solutions faster and better. If you're someone who figures things out quickly, enjoys working with cutting-edge AI tooling, and is always finding smarter ways to code, this is the place for you.

What You'll Work On:
* Building intelligent and scalable systems using Python and modern libraries
* Designing and integrating data ingestion pipelines for AI/ML workflows
* Parsing, chunking, embeddings, storage, and retrieval (dense/hybrid search, etc.)
* Using and combining libraries and tools like pandas, numpy, scipy, matplotlib, seaborn, spacy, beautifulsoup, scikit-learn, keras, tensorflow, and pytorch
* Applying prompt engineering frameworks such as COSTAR, CARE, RACE, and RISE
* Working with AI coding accelerators: Replit, Cursor, Codeium, Copilot, and more
* Rapid prototyping and iterating on AI-enhanced software products

What We're Really Looking For:
* Smart coders who learn fast and ship faster
* People who leverage AI tools effectively to increase velocity
* Deep comfort with the Python ecosystem: you know what to use and when
* Curiosity and the drive to constantly improve your coding stack and speed

Requirements:
* 2.5+ years of professional experience in Python programming
* Strong experience with data manipulation, ML libraries, and model integration
* Experience building or integrating data ingestion pipelines (a text-centric focus is a plus)
* Solid grasp of prompt engineering concepts and frameworks
* Comfort using AI-enhanced development tools to optimize workflows

Bonus Points:
* Exposure to vector databases and retrieval methods (e.g., FAISS, Pinecone)
* Experience with LLM applications and prompt-tuning
* Open-source contributions or cool personal projects

Reference Links: www.norwintechnologies.com | www.accscient.com

Regards,
Ipshita Saha
Talent Acquisition Specialist, Norwin Technologies
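The "parsing, chunking, embeddings" step mentioned above usually starts with splitting text into overlapping windows before embedding. A minimal stdlib sketch; the window and overlap sizes are arbitrary placeholders, not values from the posting:

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size windows that overlap, so no sentence is
    cut off at a boundary without also appearing intact in a neighbour."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

# A 500-character document yields three overlapping 200-character chunks.
doc = "".join(chr(65 + i % 26) for i in range(500))
chunks = chunk_text(doc)  # len(chunks) == 3
```

Each chunk would then be embedded and stored in a vector index for dense or hybrid retrieval.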

Posted 1 week ago

Apply

3.0 - 6.0 years

8 - 10 Lacs

Gurugram

Work from Office

Designation: Python Developer (Web Crawling)
Experience: 3+ years
Location: Gurgaon, Haryana

ABOUT US: Founded in 1998, BYLD is the largest group in the South Asian region offering technology-enabled HR and business productivity solutions. We have served 5,00,000+ individuals and worked with more than 50% of the Fortune 500 and over 60% of Business World's top 1000 companies. Please read about us at www.byldgroup.com

Role & Responsibilities:
* Collaborate with cross-functional teams to define, design, and implement new features.
* Ensure the performance, quality, and responsiveness of web crawling systems.
* Identify and correct bottlenecks and fix bugs in web crawling processes.
* Help maintain code quality, organization, and automation.
* Stay up to date with the latest industry trends and technologies.

Skills:
* 3+ years of experience in web scraping or crawling with Scrapy, Selenium, or other frameworks and related libraries (such as BeautifulSoup and Puppeteer).
* Expertise in the latest version of Python.
* Very good experience fetching data from multiple online sources, cleansing it, and building APIs on top of it.
* Good understanding of data structures and algorithms, and how they affect system performance in real-world applications.
* Sound knowledge of bypassing bot-detection techniques.
* Experience developing RESTful web APIs / microservices.
* Ability to think deeply about developing large-scale scraping tools, including data integrity, health, and monitoring systems.
* Develop a deep understanding of our vast data sources on the web and know exactly how, when, and which data to scrape, parse, and store.
* Work with SQL and NoSQL databases to store raw data.
* Develop frameworks for automating and maintaining a constant flow of data from multiple sources.
* Good knowledge of distributed technologies and real-time systems with high throughput, low latency, and high scalability.
* Work independently with little supervision to research and test innovative solutions.
Should have a strong passion for coding. Must take quality, security, and performance seriously. Ability to pair with other engineers and across teams as needed. Excellent communication skills, including the ability to present effectively to both business and technical audiences. Interested candidates can share an updated CV at talentacquisition.aad@byldgroup.com
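One common building block for the "data integrity" concern this posting raises is a content fingerprint used to deduplicate scraped records. A stdlib sketch (the canonicalization scheme here is one simple choice among many):

```python
import hashlib

def fingerprint(record: dict) -> str:
    """Stable SHA-256 fingerprint of a record: identical content yields the
    same hash regardless of key insertion order, so re-scraped duplicates
    can be skipped before they reach storage."""
    canonical = "|".join(f"{k}={record[k]}" for k in sorted(record))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

seen: set[str] = set()

def is_new(record: dict) -> bool:
    """True the first time a given record's content is seen."""
    h = fingerprint(record)
    if h in seen:
        return False
    seen.add(h)
    return True
```

A production crawler would typically keep the seen-set in Redis or a database column with a unique index rather than in process memory.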

Posted 1 week ago

Apply

7.0 - 12.0 years

15 - 25 Lacs

Bengaluru

Work from Office

Required skills: Python, Selenium, and BeautifulSoup; headless browsers; distributed web scraping; parsing HTML and XML using BeautifulSoup and performing data extraction. Strong knowledge of browser automation tools (e.g., ChromeDriver, GeckoDriver).
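The XML-extraction requirement above can be sketched with the standard library alone (BeautifulSoup's XML mode does the equivalent with a similar API); the sample feed below is invented for illustration:

```python
import xml.etree.ElementTree as ET

# A toy XML feed standing in for a real data source.
xml_doc = (
    "<jobs>"
    "<job><title>Scraper Dev</title><city>Bengaluru</city></job>"
    "<job><title>Data Eng</title><city>Pune</city></job>"
    "</jobs>"
)

root = ET.fromstring(xml_doc)
# Extract (title, city) pairs from each <job> element.
rows = [(job.findtext("title"), job.findtext("city")) for job in root.findall("job")]
# rows == [('Scraper Dev', 'Bengaluru'), ('Data Eng', 'Pune')]
```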

Posted 2 weeks ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Hyderabad

Hybrid

Hello, greetings from GlobalData! Hope you are doing well. Urgent job openings for Python (web scraping/crawling) for the below roles at GD-Hyderabad:
* Senior Software Engineer: 5+ years' experience
* Team Lead: 7+ years' experience
* Assistant Manager: 10+ years' experience

If interested, please share your updated resume at m.salim@globaldata.com with the subject line "Applying for Python (Web Scraping) @ GlobalData (Hyd)" and include these details in the mail: Full Name, Mobile #, Qualification, Company Name, Designation, Total Work Experience (Years), Current CTC, Expected CTC, Notice Period, Current Location / willing to relocate to Hyderabad?

Job Description: We are seeking a highly skilled and motivated Python web crawling developer with 3 to 8+ years of hands-on experience in web scraping and data extraction. The ideal candidate should have a solid background in Python-based scraping tools and libraries and a proven track record of working on dynamic websites.

Key Responsibilities:
* Design, develop, and maintain scalable web crawlers and scrapers.
* Extract and process data from various static and dynamic websites.
* Work with tools and libraries like Scrapy, Requests, BeautifulSoup, and Selenium.
* Maintain and optimize existing full-stack web scraping applications.
* Write and maintain automation scripts for data gathering and processing.
* Ensure data quality, performance, and reliability of scraping solutions.
* Collaborate with cross-functional teams for integration and analytics use cases.

Required Skills:
* Strong proficiency in Python programming.
* In-depth experience with web scraping frameworks (Scrapy, Requests, BeautifulSoup, Selenium).
* Experience handling dynamic content scraping (JavaScript-rendered pages).
* Working knowledge of SQL and database programming.
* Good analytical and problem-solving skills.
* Strong verbal and written communication skills.
* Experience in Python automation scripting.
Good to Have:
* Exposure to Natural Language Processing (NLP) tools like NLTK.
* Basic understanding of, or hands-on experience with, machine learning.
* Experience with Python web frameworks like Django, Flask, or FastAPI.

Thanks & Regards, Salim (Human Resources)

Posted 3 weeks ago

Apply

4.0 - 8.0 years

3 - 5 Lacs

Noida

Work from Office

We are seeking a skilled and detail-oriented Web Scraping Engineer to design, develop, and maintain scalable data scraping systems.

Posted 3 weeks ago

Apply

2.0 - 4.0 years

2 - 4 Lacs

Mumbai, Maharashtra, India

On-site

What We're Looking For:
* 2-4 years of experience in data engineering, data extraction, web scraping, or unstructured data processing.
* Strong proficiency in Python, pandas/NumPy, Regex text processing, and shell scripting/Bash.
* Familiarity with web scraping tools such as Beautiful Soup, and with data governance.
* Knowledge of frontend/backend development (React, APIs, Python Flask or FastAPI, databases, cloud technologies) is a plus.
* Capability to work with unstructured or alternative data sources.
* Competence in deploying solutions on Google Cloud Platform (GCP), particularly BigQuery and Cloud Functions, along with experience with Snowflake for data modeling and performance tuning.
* Experience working in a fast-paced environment with evolving priorities.
* Effective communication and the ability to collaborate across technical and non-technical teams.
* Data product lifecycle management, from acquisition through QC to delivery, is a plus.
* Strong problem-solving skills with attention to detail and a proactive approach.
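The "Regex text processing" skill above is the bread and butter of turning scraped strings into structured fields. A small sketch that parses salary ranges written like the "14 - 22 Lacs" figures on this very page (the pattern and helper name are illustrative):

```python
import re

# Matches strings such as "14 - 22 Lacs" or "5-10 Lacs".
SALARY_RANGE = re.compile(r"(\d+)\s*-\s*(\d+)\s*Lacs")

def parse_salary(text):
    """Return (low, high) in lakhs if a salary range is present, else None."""
    m = SALARY_RANGE.search(text)
    return (int(m.group(1)), int(m.group(2))) if m else None

parse_salary("14 - 22 Lacs")   # (14, 22)
parse_salary("Work from Office")  # None
```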

Posted 3 weeks ago

Apply

2.0 - 5.0 years

1 - 2 Lacs

Bengaluru

Work from Office

Build and run scripts to scrape emails, phone numbers, and business data, clean and organize it, analyze insights using Python/Excel, automate workflows, and support lead generation for import-export operations.
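The email-harvesting step this role describes can be sketched with a stdlib regex; note the pattern is a deliberate simplification, not a full RFC 5322 address matcher, and phone extraction would follow the same shape with a digits-oriented pattern:

```python
import re

# Simplified email pattern: local part, "@", domain with at least one dot.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def find_emails(text: str) -> list[str]:
    """Return every email-like token found in a block of scraped text."""
    return EMAIL.findall(text)

find_emails("Share CV on recruitment@fortitudecareer.com today")
# ['recruitment@fortitudecareer.com']
```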

Posted 1 month ago

Apply

2.0 - 4.0 years

5 - 7 Lacs

Mumbai

Work from Office

Develop and maintain Python-based applications, with a focus on Flask for web development. Collaborate with cross-functional teams to understand project requirements and translate Bachelors or Master's degree in Computer Science, Data Science, or a related field. 2+ years of professional experience in Python development and data science. Strong proficiency in Python programming languagewith Flask framework and familiarity with relational databases (e.g., MySQL). Proficiency in handling and manipulating various types of data, including structured and unstructured data, using Python libraries such as Pandas, NumPy, and Beautiful Soup. Apply machine-learning techniques to analyse and extract insights from large text datasets, including social media data, customer feedback, and user interactions, to inform business decisions and strategy. Knowledge of machine learning techniques and libraries (e.g., scikit-learn, TensorFlow). Familiarity with creating and managing projects involving language models such as OpenAI's GPT (Generative Pre-trained Transformer) series, including ChatGPT and other prompt engineering tasks. Use models for LLMs and related tasks to enhance Chabots, virtual assistants, and other conversational AI applications, improving natural language understanding, conversation flow, and response generation. Familiarity with cloud platforms such as AWS, Azure, or Google Cloud Platform. Experience with version control systems (e.g., Git). Excellent problem-solving skills and attention to detail. Strong communication and collaboration abilities them into technical solutions. Design, implement, and maintain data pipelinesfor collecting, processing, and analysing large datasets. Perform exploratory data analysis to identify trends, patterns, and insights. Build machine learning models and algorithms to solve business problems and optimize processes. Deploy and monitor data science solutions in production environments. 
Conduct code reviews, testing, and debugging to ensure the quality and reliability of software applications. Stay updated with the latest trends and advancements in Python development, data science, and machine learning.

Posted 1 month ago

Apply

7.0 - 12.0 years

12 - 22 Lacs

Bengaluru

Remote

Role & responsibilities: As a Data Engineer focused on web crawling and platform data acquisition, you will design, develop, and maintain large-scale web scraping pipelines to extract valuable platform data. You will be responsible for implementing scalable and resilient data extraction solutions, ensuring seamless data retrieval while working with proxy management, anti-bot bypass techniques, and data parsing. Optimizing scraping workflows for performance, reliability, and efficiency will be a key part of your role. Additionally, you will ensure that all extracted data maintains high quality and integrity.

Preferred candidate profile: We are seeking candidates with:
* Strong experience in Python and web scraping frameworks such as Scrapy, Selenium, Playwright, or BeautifulSoup.
* Knowledge of distributed web crawling architectures and job scheduling.
* Familiarity with headless browsers, CAPTCHA-solving techniques, and proxy management to handle dynamic web challenges.
* Experience with data storage solutions, including SQL and cloud storage.
* Understanding of big data technologies like Spark and Kafka (a plus).
* Strong debugging skills to adapt to website structure changes and blockers.
* A proactive, problem-solving mindset and the ability to work effectively in a team-driven environment.
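The proxy management mentioned in this profile is, at its simplest, a rotation over a pool of endpoints. A minimal stdlib sketch; the pool addresses are invented, as real pools come from a proxy provider or configuration:

```python
import itertools

# Hypothetical proxy pool; in production this is loaded from a provider API
# or config and refreshed as proxies get banned or go stale.
PROXIES = [
    "http://10.0.0.1:8080",
    "http://10.0.0.2:8080",
    "http://10.0.0.3:8080",
]
_rotation = itertools.cycle(PROXIES)

def next_proxy() -> dict:
    """Advance the rotation and return a requests-style proxies mapping,
    suitable for requests.get(url, proxies=next_proxy())."""
    proxy = next(_rotation)
    return {"http": proxy, "https": proxy}
```

Distributed crawlers usually combine this with per-proxy health tracking, backing off or evicting endpoints that start returning blocks.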

Posted 1 month ago

Apply

2.0 - 3.0 years

6 - 8 Lacs

Noida

Work from Office

About Us: LdotR is an online brand-protection services company, offering businesses the right solutions and services to protect, manage, and benefit from their digital assets in the online space. We work across all digital platforms (domains, websites, social media, online marketplaces, and app stores) to identify, assess, and nullify brand infringements.

About the Role: We are looking for an experienced Data Scraping Specialist to help us extract and structure data from leading social media platforms at scale. The ideal candidate will have hands-on expertise with scraping tools, APIs, and large-scale data processing.

Key Responsibilities:
* Design and develop custom scraping solutions to extract public data from platforms like Instagram, Facebook, X (Twitter), LinkedIn, YouTube, etc.
* Handle large-scale scraping tasks with efficiency and resilience against rate-limiting and platform-specific restrictions.
* Clean, normalize, and structure the scraped data for analysis or downstream applications.
* Maintain scraping scripts to adapt to frequent platform changes.
* Ensure compliance with data protection policies and terms of service.

Required Skills:
* Proficiency in Python and scraping libraries (e.g., Scrapy, BeautifulSoup, Selenium, Playwright).
* Experience with API integration (official or unofficial social media APIs).
* Familiarity with rotating proxies, headless browsers, and CAPTCHA-solving techniques.
* Strong understanding of data structuring formats like JSON and CSV, and databases (MongoDB, PostgreSQL, etc.).
* Experience with cloud-based scraping and storage solutions (AWS/GCP preferred).

Good to Have:
* Knowledge of NLP or data analytics for social media sentiment or trend analysis.
* Understanding of GDPR and CCPA compliance.
* Prior work with third-party scraping platforms or browser automation tools.

What We Offer:
* Opportunity to work on impactful, large-scale data projects.
* Flexible work arrangements.
* Competitive compensation based on experience and delivery.
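The "clean, normalize, and structure" step above typically maps messy per-platform records onto one fixed schema before they hit a database. A small sketch; the schema fields and sample record are invented:

```python
import json

# Hypothetical target schema for a scraped social-media profile record.
SCHEMA = ("platform", "handle", "url", "followers")

def normalize(record: dict) -> dict:
    """Project a raw scraped record onto the fixed schema; fields the
    scraper did not capture come through as None, and extra fields drop."""
    return {key: record.get(key) for key in SCHEMA}

raw = {"platform": "X", "handle": "@brand", "scrape_ts": "2024-01-01"}
json.dumps(normalize(raw))
# '{"platform": "X", "handle": "@brand", "url": null, "followers": null}'
```

Keeping the schema in one place makes downstream JSON/CSV exports and database inserts uniform across platforms.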

Posted 1 month ago

Apply

3.0 - 7.0 years

1 - 2 Lacs

Thane, Navi Mumbai, Mumbai (All Areas)

Work from Office

Key Responsibilities:
* Develop and maintain automated web scraping scripts using Python libraries such as Beautiful Soup, Scrapy, and Selenium.
* Optimize scraping pipelines for performance, scalability, and resource efficiency.
* Handle dynamic websites and CAPTCHA-solving, and implement IP rotation techniques for uninterrupted scraping.
* Process and clean raw data, ensuring accuracy and integrity in extracted datasets.
* Collaborate with cross-functional teams to understand data requirements and deliver actionable insights.
* Leverage APIs when web scraping is not feasible, managing authentication and request optimization.
* Document processes, pipelines, and troubleshooting steps for maintainable and reusable scraping solutions.
* Ensure compliance with legal and ethical web scraping practices, implementing security safeguards.

Requirements:
* Education: Bachelor's degree in Computer Science, Engineering, or a related field.
* Experience: 2+ years of Python development experience, with at least 1 year focused on web scraping.

Technical Skills:
* Proficiency in Python and libraries like Beautiful Soup, Scrapy, and Selenium.
* Experience with regular expressions (Regex) for data parsing.
* Strong knowledge of HTTP protocols, cookies, headers, and user-agent rotation.
* Familiarity with databases (SQL and NoSQL) for storing scraped data.
* Hands-on experience with data manipulation libraries such as pandas and NumPy.
* Experience working with APIs and managing third-party integrations.
* Familiarity with version control systems like Git.

Bonus Skills:
* Knowledge of containerization tools like Docker.
* Experience with distributed scraping solutions and task queues (e.g., Celery, RabbitMQ).
* Basic understanding of data visualization tools.

Non-Technical Skills:
* Strong analytical and problem-solving skills.
* Excellent communication and documentation skills.
* Ability to work independently and collaboratively in a team environment.
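The user-agent rotation this posting asks about boils down to varying request headers across fetches. A minimal sketch; the pool below is a small invented sample, while production lists are larger and refreshed regularly:

```python
import random

# Invented sample pool of browser User-Agent strings.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
]

def pick_headers() -> dict:
    """Build headers for one request with a randomly chosen User-Agent,
    e.g. requests.get(url, headers=pick_headers())."""
    return {
        "User-Agent": random.choice(USER_AGENTS),
        "Accept-Language": "en-US,en;q=0.9",
    }
```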

Posted 1 month ago

Apply

3.0 - 7.0 years

1 - 2 Lacs

Mumbai, Thane, Navi Mumbai

Work from Office

Key Responsibilities:
* Develop and maintain automated web scraping scripts using Python libraries such as BeautifulSoup, Scrapy, and Selenium.
* Optimize scraping pipelines for performance, scalability, and resource efficiency.
* Handle dynamic websites and CAPTCHA-solving, and implement IP rotation techniques for uninterrupted scraping.
* Process and clean raw data, ensuring accuracy and integrity in extracted datasets.
* Collaborate with cross-functional teams to understand data requirements and deliver actionable insights.
* Leverage APIs when web scraping is not feasible, managing authentication and request optimization.
* Document processes, pipelines, and troubleshooting steps for maintainable and reusable scraping solutions.
* Ensure compliance with legal and ethical web scraping practices, implementing security safeguards.

Requirements:
* Education: Bachelor's degree in Computer Science, Engineering, or a related field.
* Experience: 2+ years of Python development experience, with at least 1 year focused on web scraping.

Technical Skills:
* Proficiency in Python and libraries like BeautifulSoup, Scrapy, and Selenium.
* Experience with regular expressions (Regex) for data parsing.
* Strong knowledge of HTTP protocols, cookies, headers, and user-agent rotation.
* Familiarity with databases (SQL and NoSQL) for storing scraped data.
* Hands-on experience with data manipulation libraries such as pandas and NumPy.
* Experience working with APIs and managing third-party integrations.
* Familiarity with version control systems like Git.

Bonus Skills:
* Knowledge of containerization tools like Docker.
* Experience with distributed scraping solutions and task queues (e.g., Celery, RabbitMQ).
* Basic understanding of data visualization tools.

Non-Technical Skills:
* Strong analytical and problem-solving skills.
* Excellent communication and documentation skills.
* Ability to work independently and collaboratively in a team environment.

CANDIDATES AVAILABLE FOR FACE-TO-FACE INTERVIEWS ARE PREFERRED.

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
