Home
Jobs

77 Scrapy Jobs

Filter Interviews
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set up a Job Alert
Filter
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0 years

17 - 20 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Job Description: This position is for a senior software engineer who is passionate about solving challenging business problems and has:
✓ Strong experience in developing ETL pipelines using SQL, T-SQL, stored procedures, and SQL Server, including performance tuning of queries and stored procedures.
✓ Experience in troubleshooting and resolving database integrity issues, performance issues, deadlock issues, connectivity issues, etc.
✓ Hands-on, strong knowledge of data frames and pandas in Python: reading data from Excel, CSV, SQL, and JSON, and writing to SQL and JSON.
✓ Excellent communication skills and client-facing experience.

SQL: Strong experience in DDL and DML commands, UNIONs, JOINs, subqueries, execution plans, CTEs, pivot/unpivot, and indexes. Strong experience in sorting and filtering data, and in grouping and functions for data processing (aggregation functions, ranking functions, etc.). Strong experience in creating tables, views, triggers, partitions, complex stored procedures, functions, indexes, and other database objects for data extraction, loading, and transformation. Performing routine data quality checks and validations to ensure data integrity and consistency.

Python: Strong proficiency in Python programming with hands-on experience. JSON data parsing: extensive experience in handling JSON data formats and implementing parsing techniques. RESTful API & Google Analytics integration: skilled in development, integration, and optimization. Secure file transfer protocols: proficient in working with SFTP for secure data transfers. Web scraping: experience using tools like Beautiful Soup, Scrapy, and Selenium for data extraction. Python libraries: familiar with libraries like NumPy and pandas for data manipulation and analysis. Performance optimization: expertise in debugging and optimizing code for scalability and efficiency. Machine learning models (added advantage): experience in developing, training, and deploying machine learning models for predictive analytics.

Skills: sql server, scrapy, t-sql, etl pipeline development, sql, pandas, performance tuning, etl development, numpy, sftp, machine learning, selenium, python, data parsing, beautiful soup, stored procedures, data frames, restful api, google analytics integration, rest api integration, web scraping
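For illustration, a minimal pandas sketch of the read/write flow this role describes. The connection string, file names, and table names are hypothetical placeholders, not part of the posting:

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical SQL Server connection; swap in real credentials and driver.
engine = create_engine(
    "mssql+pyodbc://user:pass@server/db?driver=ODBC+Driver+17+for+SQL+Server"
)

# Read from the formats the role names: Excel, CSV, SQL, JSON.
orders = pd.read_csv("orders.csv")
clients = pd.read_excel("clients.xlsx")
history = pd.read_sql("SELECT * FROM dbo.order_history", engine)

# A trivial transformation, then write back to SQL and JSON.
merged = orders.merge(clients, on="client_id", how="left")
merged.to_sql("staging_orders", engine, if_exists="replace", index=False)
merged.to_json("orders_clean.json", orient="records")
```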

Posted 1 day ago

Apply

8.0 - 12.0 years

9 - 19 Lacs

Gandhinagar, Ahmedabad, Bengaluru

Work from Office

Naukri logo

Job Summary
We are seeking a highly skilled and experienced Senior Data Scraping Engineer to design, develop, and orchestrate robust web scraping frameworks. The ideal candidate will have 8-10 years of experience in ethical web scraping, including navigating login-protected websites, solving CAPTCHAs, and managing proxies or third-party services. You will be responsible for building scalable, efficient, and compliant scraping pipelines using industry-standard programming languages and tools, ensuring data integrity and adherence to legal and ethical guidelines.

Key Responsibilities
- Framework Development: Design and implement end-to-end web scraping frameworks to extract structured data from diverse web sources, including those requiring authentication (e.g., behind logins).
- CAPTCHA Handling: Develop and integrate solutions to bypass or solve CAPTCHAs (e.g., reCAPTCHA, hCaptcha) using ethical tools, services, or machine learning techniques.
- Proxy & Service Management: Configure and manage proxy services (e.g., rotating proxies, residential proxies) and third-party APIs (e.g., CAPTCHA-solving services) to ensure uninterrupted and anonymous scraping operations.
- Ethical Compliance: Ensure all scraping activities comply with website terms of service, data privacy regulations (e.g., GDPR, CCPA), and industry best practices for ethical data collection.
- Data Quality & Validation: Implement robust data validation and cleaning processes to ensure the accuracy, completeness, and consistency of scraped data.
- Scalability & Optimization: Build scalable scraping pipelines capable of handling large volumes of data with optimized performance, minimal latency, and efficient resource utilization.
- Monitoring & Maintenance: Develop monitoring tools to track scraping performance, detect failures (e.g., IP bans, structural changes in websites), and maintain scraping scripts to adapt to website updates.
- Collaboration: Work closely with data engineers, analysts, and product teams to understand data requirements and deliver high-quality datasets for downstream applications.
- Documentation: Maintain comprehensive documentation for scraping workflows, tools, and processes to ensure transparency and reproducibility.

Required Qualifications
- Experience: 8-10 years of professional experience in web scraping, data extraction, or related fields, with a proven track record of handling complex scraping projects.
- Programming Languages: Primary: proficiency in Python (e.g., Scrapy, BeautifulSoup, Selenium, Requests) for building scraping scripts and frameworks. Secondary (preferred): familiarity with JavaScript/Node.js (e.g., Puppeteer, Cheerio) for dynamic website scraping, or Go for high-performance tasks.
- Scraping Frameworks: Expertise in Scrapy, Selenium, Puppeteer, or equivalent tools for scraping static and dynamic web content.
- CAPTCHA Solutions: Experience with CAPTCHA-solving services (e.g., 2Captcha, Anti-Captcha) or custom ML-based solutions.
- Proxy Management: Hands-on experience with proxy services like Bright Data, Oxylabs, Smartproxy, or ScrapingBee for IP rotation and anonymity (a minimal rotation sketch follows this posting).
- Headless Browsers: Proficiency in using headless browsers (e.g., Chrome, Firefox) for scraping JavaScript-heavy websites.
- Databases: Knowledge of SQL (e.g., PostgreSQL, MySQL) and NoSQL (e.g., MongoDB) for storing and querying scraped data.
- Cloud Platforms (Preferred): Familiarity with AWS, GCP, or Azure for deploying scraping pipelines or managing infrastructure.
- Orchestration & Automation: Experience with workflow orchestration tools like Apache Airflow, Prefect, or Celery for scheduling and managing scraping tasks; knowledge of containerization (e.g., Docker) and CI/CD pipelines for deploying scraping scripts.
- Ethical & Legal Knowledge: Strong understanding of web scraping ethics, website terms of service, and data privacy regulations (e.g., GDPR, CCPA).
- Problem-Solving: Exceptional ability to troubleshoot issues like IP bans, rate limits, and website structural changes.
- Communication: Strong verbal and written communication skills to collaborate with cross-functional teams and document processes effectively.

Preferred Qualifications
Experience with machine learning or AI-based techniques for CAPTCHA solving or dynamic content extraction.
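For illustration, a minimal sketch of the proxy-rotation technique named above, written as a Scrapy downloader middleware. The proxy endpoints and project path are hypothetical:

```python
import random

class RotatingProxyMiddleware:
    """Downloader middleware that assigns a random proxy to each request.

    A minimal sketch: real deployments would pull proxies from a managed
    pool (Bright Data, Oxylabs, etc.) and retire endpoints that get banned.
    """

    PROXIES = [  # hypothetical endpoints
        "http://proxy1.example.com:8000",
        "http://proxy2.example.com:8000",
    ]

    def process_request(self, request, spider):
        # Scrapy routes the request through whatever proxy meta names.
        request.meta["proxy"] = random.choice(self.PROXIES)

# settings.py (sketch):
# DOWNLOADER_MIDDLEWARES = {"myproject.middlewares.RotatingProxyMiddleware": 350}
```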

Posted 1 day ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

eGrove Systems Pvt Ltd is looking for a Senior Python Developer to join its team of experts.

Skill: Senior Python Developer
Exp: 4+ years
NP: Immediate to 15 days
Location: Chennai/Madurai
Interested candidates can send their resume to annie@egrovesys.com

Skills Requirement: Hands-on software development skills and deep technical expertise across the entire software delivery process. A forward-thinking, skilled individual who is structured, organized, and a good communicator. Writes reusable, testable, and efficient code.

Required Skills: 3+ years of strong experience in Python and 2 years in the Django web framework. Experience or knowledge in implementing various design patterns. Good understanding of the MVC framework and object-oriented programming. Experience in PGSQL/MySQL and MongoDB. Good knowledge of different frameworks, packages, and libraries: Django/Flask, Django ORM, Unit Test, NumPy, Pandas, Scrapy, etc. Experience developing in a Linux environment, Git, and Agile methodology. Good to have knowledge of any one of the JavaScript frameworks: jQuery, Angular, ReactJS. Good to have experience in implementing charts and graphs using various libraries. Good to have experience in multi-threading and REST API management.

About Company: eGrove Systems is a leading IT solutions provider specializing in eCommerce, enterprise application development, AI-driven solutions, digital marketing, and IT consulting services. Established in 2008, we are headquartered in East Brunswick, New Jersey, with a global presence. Our expertise includes custom software development, mobile app solutions, DevOps, cloud services, AI chatbots, SEO automation tools, and workforce learning systems. We focus on delivering scalable, secure, and innovative technology solutions to enterprises, start-ups, and government agencies. At eGrove Systems, we foster a dynamic and collaborative work culture driven by innovation, continuous learning, and teamwork. We provide our employees with cutting-edge technologies, professional growth opportunities, and a supportive work environment to thrive in their careers.

Posted 1 day ago

Apply

1.0 - 5.0 years

10 - 14 Lacs

Mumbai

Work from Office

Naukri logo

We are seeking an experienced and motivated Data Scraper / Lead Generator to join our fast-growing team in Mumbai. The ideal candidate will have a strong background in generating leads through web scraping and online research, specifically targeting Europe, the UK, the USA, and other international markets.

Key Responsibilities: Conduct in-depth online research to identify potential leads in targeted geographies. Use advanced web scraping tools and techniques to extract accurate contact and business data from various sources. Validate and verify collected data to ensure quality and relevance. Maintain and manage a structured database of leads for outreach and tracking. Collaborate closely with the sales and marketing teams to deliver a steady pipeline of high-quality leads. Stay up to date with industry trends, tools, and best practices in data scraping and lead generation.

Requirements: Proven experience in data scraping and lead generation, especially in international markets (UK preferred). Proficiency in web scraping tools and methods (e.g., Python/BeautifulSoup, Scrapy, Octoparse, or similar). Strong attention to detail, organizational skills, and data accuracy. Ability to manage time efficiently and handle multiple tasks. Excellent communication and coordination skills.

Preferred: Immediate availability or short notice period.
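For illustration, a minimal lead-extraction sketch using requests and BeautifulSoup. The URL and CSS class names are hypothetical placeholders, not a real target:

```python
import re
import requests
from bs4 import BeautifulSoup

# Hypothetical directory URL; real targets vary by campaign.
URL = "https://example.com/uk-business-directory"

resp = requests.get(URL, headers={"User-Agent": "Mozilla/5.0"}, timeout=30)
soup = BeautifulSoup(resp.text, "html.parser")

leads = []
for card in soup.select("div.listing"):  # hypothetical CSS class
    name = card.select_one("h2")
    email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", card.get_text())
    if name and email:
        leads.append({"company": name.get_text(strip=True),
                      "email": email.group(0)})
print(leads)
```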

Posted 5 days ago

Apply

2.0 - 4.0 years

25 - 27 Lacs

Bengaluru

Work from Office

Naukri logo

Client: Our client is a leading Software as a Service (SaaS) company that specializes in the transformation of data in the US healthcare industry through cutting-edge Artificial Intelligence (AI) solutions.

Requirements: Our client is looking for a Python Web Scraper who should continually strive to advance engineering excellence and technology innovation. The mission is to power the next generation of digital products and services through innovation, collaboration, and transparency. You will be a technology leader and doer who enjoys working in a dynamic, fast-paced environment.

Responsibilities:
- Design and build scalable, reliable web scraping solutions using Python/PySpark.
- Develop enterprise-grade scraping services that are robust, fault-tolerant, and production-ready.
- Work with large volumes of structured and unstructured data; parse, clean, and transform as required.
- Implement robust data validation and monitoring processes to ensure accuracy, consistency, and availability.
- Write clean, modular code with proper logging, retries, error handling, and documentation (a minimal sketch follows this posting).
- Automate repetitive scraping tasks and optimize data workflows for performance and scalability.
- Optimize and manage databases (SQL/NoSQL) to ensure efficient data storage, retrieval, and manipulation for both structured and unstructured data.
- Analyze and identify data sources relevant to business needs.
- Collaborate with data scientists, analysts, and engineers to integrate data from disparate sources and ensure smooth data flow between systems.

Desired Profile:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 2-4 years of experience in web scraping, data crawling, or data engineering.
- Proficiency in Python with web scraping tools and libraries (e.g., Beautiful Soup, Scrapy, or Selenium).
- Basic working knowledge of PySpark and data pipelines.
- Experience with cloud-based platforms (AWS, Google Cloud, Azure) and familiarity with cloud-native data tools like Apache Airflow and EMR.
- Expertise in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra).
- Understanding of data governance, data security best practices, and data privacy regulations (e.g., GDPR, HIPAA).
- Familiarity with version control systems like Git.
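For illustration, a minimal sketch of the "logging, retries, error handling" requirement using a requests session backed by urllib3's Retry. The retry counts and backoff are starting points, not prescriptions:

```python
import logging
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("scraper")

# Retry transient failures with exponential backoff; fail loudly otherwise.
retry = Retry(total=5, backoff_factor=1,
              status_forcelist=[429, 500, 502, 503, 504])
session = requests.Session()
session.mount("https://", HTTPAdapter(max_retries=retry))

def fetch(url: str) -> str | None:
    try:
        resp = session.get(url, timeout=30)
        resp.raise_for_status()
        return resp.text
    except requests.RequestException as exc:
        log.error("fetch failed for %s: %s", url, exc)
        return None
```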

Posted 6 days ago

Apply

3.0 years

0 Lacs

Sahibzada Ajit Singh Nagar, Punjab, India

On-site

Linkedin logo

Job Title: MERN Stack Developer
Location: Mohali, India (On-site)
Experience Required: Minimum 3 Years

Job Summary: We are seeking a highly skilled Software Engineer with hands-on experience in the MERN stack (MongoDB, Express.js, React.js, Node.js) and strong scripting and automation capabilities using Python. The ideal candidate should be comfortable working on server deployments (VPS) and automation tasks, and should have a solid understanding of cloud services and DevOps tools.

Key Responsibilities: Develop and maintain scalable web applications using the MERN stack. Write and maintain scripts for automation tasks using Python and relevant frameworks. Manage server deployments in VPS environments, ensuring performance, uptime, and security. Work with Git for version control and collaborative development. Collaborate with the team to build, test, and deploy new features quickly and efficiently. Monitor and improve backend performance. Bonus: contribute to cloud integration and containerization using Docker, Azure, or Kubernetes.

Required Skills: Strong expertise in Node.js and backend logic. Solid experience with MongoDB and Express.js. Proficiency in Python and automation frameworks/libraries (e.g., Selenium, Requests, Scrapy, Django, etc.). Experience with VPS setup, server monitoring, and configuration. Good understanding of Git and working with version control systems. Familiarity with REST APIs and webhooks.

Preferred/Bonus Skills: Knowledge of Docker, Azure, Kubernetes, or other cloud technologies. Experience with CI/CD pipelines. Basic Linux server management and shell scripting.

Posted 6 days ago

Apply

4.0 years

0 - 0 Lacs

Mohali

On-site

GlassDoor logo

Job Description: Should have 4+ years of hands-on experience in algorithms and implementation of analytics solutions in predictive analytics, text analytics, and image analytics. Should have hands-on experience leading a team of data scientists, working closely with the client's technical team to plan, develop, and execute on client requirements, providing technical expertise and project leadership. Leads efforts to foster innovative ideas for developing high-impact solutions. Evaluates and leads a broad range of forward-looking analytics initiatives, tracks emerging data science trends, and drives knowledge sharing. Engages key stakeholders to source, mine, and validate data and findings, and to confirm business logic and assumptions in order to draw conclusions. Helps design and develop advanced analytics solutions across functional areas as per requirements and opportunities.

Technical Role and Responsibilities: Demonstrated strong capability in statistical/mathematical modelling, machine learning, or artificial intelligence. Demonstrated skills in programming for implementation and deployment of algorithms, preferably in statistical/ML-oriented Python. Sound experience with traditional as well as modern statistical techniques, including regression, support vector machines, regularization, boosting, random forests, and other ensemble methods. Visualization tool experience, preferably with Tableau or Power BI. Sound knowledge of ETL practices, preferably Spark in Databricks, and cloud big data technologies like AWS, Google, Microsoft, or Cloudera. Communicates complex quantitative analysis as lucid, precise, and actionable insights. Develops new practices and methodologies using statistical methods, machine learning, and predictive models under mentorship. Carries out statistical and mathematical modelling, solving complex business problems and delivering innovative solutions using state-of-the-art tools and cutting-edge technologies for big data and beyond. Preferred to have a Bachelor's/Master's in Statistics, Machine Learning, Data Science, or Analytics. Should be a data science professional with a knack for solving problems using cutting-edge ML/DL techniques and implementing solutions leveraging cloud-based infrastructure. Should be strong in GCP, TensorFlow, NumPy, Pandas, Python, AutoML, BigQuery, machine learning, artificial intelligence, and deep learning.

Preferred Tech Skills: Python, Computer Vision, Machine Learning, RNN, Data Visualization, Natural Language Processing, Voice Modulation, Speech to Text, spaCy, LSTM, Object Detection, Sklearn, NumPy, NLTK, Matplotlib, Cufflinks, Seaborn, Image Processing, Neural Network, YOLO, DarkFlow, DarkNet, PyTorch, CNN, TensorFlow, Keras, U-Net, Image Segmentation, ModeNet, OCR, OpenCV, Pandas, Scrapy, BeautifulSoup, LabelImg, Git. Machine Learning, Deep Learning, Computer Vision, Natural Language Processing, Statistics. Programming Languages: Python. Libraries & Software Packages: TensorFlow, Keras, OpenCV, Pillow, Scikit-Learn, Flask, NumPy, Pandas, Matplotlib, Docker. Cloud Services: Compute Engine, GCP AI Platform, Cloud Storage, GCP AI & ML APIs.

Job Types: Full-time, Permanent, Fresher
Pay: ₹30,000.00 - ₹80,000.00 per month
Education: Bachelor's (Preferred)
Experience: Machine learning: 4 years (Preferred)
Work Location: In person

Posted 1 week ago

Apply

6.0 - 10.0 years

25 - 35 Lacs

Gurugram

Remote

Naukri logo

Hi, with reference to your profile on the job portal, we would like to share an opportunity with you for one of our Gurgaon-based clients. Please find the details below:

Location: Remote/WFH
Experience: 6-10 Years
Title: Manager-Data Engineer (Web Scraping)
Notice Period: Only Immediate Joiners - 30 Days Max

Technical Skills Required: Proficiency in Python and SQL/database skills is required. Must have strong expertise in using the Pandas library (Python). Experience with web technologies (HTML/JS, APIs, etc.) is essential. Should have a good understanding of tools such as Scrapy, BeautifulSoup, and Selenium. Responsible for reviewing and approving pull requests to ensure clean, maintainable, and efficient code. Experience building scalable scraping solutions for large-scale data collection. Familiarity with AWS technologies like S3, RDS, SNS, SQS, Lambda, and others is necessary.

Qualifications: Bachelor's/Master's degree in computer science or any related field.

Role Summary: Leading and mentoring a team of seasoned data engineers performing web scraping using various scraping techniques, then utilizing Python's Pandas library for data cleaning and manipulation, ingesting the data into a database/warehouse, and scheduling the scrapers using Airflow or other tools (a minimal scheduling sketch follows this posting).

Role Overview: The Web Scraping Team is seeking a creative and detail-oriented leader to contribute to client projects and lead by example. This team develops essential applications, datasets, and alerts that directly support client investment decisions. Our focus is to maintain operational excellence by providing high-quality proprietary datasets, timely notifications, and exceptional service. The ideal candidate will be self-motivated, self-sufficient, and possess a passion for tinkering and a love of automation.

If you are interested in this opportunity, please reply with your updated profile as soon as possible to sachin@vanassociates.com. Note: do not change the subject line while replying.
1. Total Exp:
2. Relevant experience in Python, Pandas, Data Cleansing, Data Transformation, Team Management:
3. Current CTC:
4. Expected CTC:
5. Official Notice Period:
6. Ready to work in Gurgaon:
7. Availability for MS Teams interviews on weekdays:
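For illustration, a minimal Airflow 2.x sketch of the scrape-then-clean scheduling the summary describes. The DAG id, staging paths, and cleaning rules are hypothetical:

```python
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator

def scrape():
    pass  # the real Scrapy/requests job would run here

def clean():
    # Hypothetical staging file written by the scrape task.
    df = pd.read_csv("/tmp/raw.csv")
    df = df.drop_duplicates().dropna(subset=["id"])
    df.to_csv("/tmp/clean.csv", index=False)

with DAG(dag_id="daily_scrapers", start_date=datetime(2024, 1, 1),
         schedule_interval="@daily", catchup=False) as dag:
    t_scrape = PythonOperator(task_id="scrape", python_callable=scrape)
    t_clean = PythonOperator(task_id="clean", python_callable=clean)
    t_scrape >> t_clean  # clean runs only after a successful scrape
```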

Posted 1 week ago

Apply

2.0 - 5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Be a part of India's largest and most admired news network! Network18 is India's most diversified media company in the fast-growing media market. The company has a strong heritage and a strong presence in the magazine, television, and internet domains. Our brands like CNBC, Forbes, and Moneycontrol are market leaders in their respective segments. The company has over 7,000 employees across all major cities in India and has consistently managed to stay ahead of the industry's growth curve. Network18 brings together employees from varied backgrounds under one roof, united by the hunger to create immersive content and ideas. We take pride in our people, who we believe are the key to realizing the organization's potential. We continually strive to enable our employees to realize their own goals by providing opportunities to learn, share, and grow.

Role Overview: We are seeking a passionate and skilled Data Scientist with over a year of experience to join our dynamic team. You will be instrumental in developing and deploying machine learning models, building robust data pipelines, and translating complex data into actionable insights. This role offers the opportunity to work on cutting-edge projects involving NLP, Generative AI, data automation, and cloud technologies to drive business value.

Key Responsibilities: Design, develop, and deploy machine learning models, with a strong focus on NLP (including advanced techniques and Generative AI) and other AI applications. Build, maintain, and optimize ETL pipelines for automated data ingestion, transformation, and standardization from various sources. Work extensively with SQL for data extraction, manipulation, and analysis in environments like BigQuery. Develop solutions using Python and relevant data science/ML libraries (Pandas, NumPy, Hugging Face Transformers, etc.); a minimal NLP sketch follows this posting. Utilize Google Cloud Platform (GCP) services for data storage, processing, and model deployment. Create and maintain interactive dashboards and reporting tools (e.g., Power BI) to present insights to stakeholders. Apply basic Docker concepts for containerization and deployment of applications. Collaborate with cross-functional teams to understand business requirements and deliver data-driven solutions. Stay abreast of the latest advancements in AI/ML and NLP best practices.

Required Qualifications & Skills: 2 to 5 years of hands-on experience as a Data Scientist or in a similar role. Solid understanding of machine learning fundamentals, algorithms, and best practices. Proficiency in Python and relevant data science libraries. Good SQL skills for complex querying and data manipulation. Demonstrable experience with Natural Language Processing (NLP) techniques, including advanced models (e.g., transformers) and familiarity with Generative AI concepts and applications. Excellent problem-solving and analytical skills. Strong communication and collaboration skills.

Preferred Qualifications & Skills: Familiarity and hands-on experience with Google Cloud Platform (GCP) services, especially BigQuery, Cloud Functions, and Vertex AI. Basic understanding of Docker and containerization for deploying applications. Experience with dashboarding tools like Power BI and building web applications with Streamlit. Experience with web scraping tools and techniques (e.g., BeautifulSoup, Scrapy, Selenium). Knowledge of data warehousing concepts and schema design. Experience in designing and building ETL pipelines.
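For illustration, a minimal Hugging Face Transformers sketch of the NLP work the role describes. The checkpoint is a small public NER model chosen purely as an example, not the team's production model:

```python
from transformers import pipeline

# Small public checkpoint used for illustration only.
ner = pipeline("token-classification",
               model="dslim/bert-base-NER",
               aggregation_strategy="simple")

text = "CNBC and Moneycontrol reported strong ad revenue growth in Mumbai."
for entity in ner(text):
    # With aggregation, each hit carries a grouped label, span, and score.
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```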
Disclaimer: Please note that Network18 and related group companies do not use the services of vendors or agents for recruitment. Please beware of such agents or vendors offering assistance; Network18 will not be responsible for any losses incurred. We correspond only from our official email address.

Posted 1 week ago

Apply

6.0 - 10.0 years

25 - 35 Lacs

Gurugram

Remote

Naukri logo

Hi, with reference to your profile on the job portal, we would like to share an opportunity with you for one of our Gurgaon-based clients. Please find the details below:

Location: Remote/WFH
Experience: 6-10 Years
Title: Manager-Data Engineer (Web Scraping)
Notice Period: Only Immediate Joiners - 30 Days Max

Technical Skills Required: Proficiency in Python and SQL/database skills is required. Must have strong expertise in using the Pandas library (Python). Experience with web technologies (HTML/JS, APIs, etc.) is essential. Should have a good understanding of tools such as Scrapy, BeautifulSoup, and Selenium. Responsible for reviewing and approving pull requests to ensure clean, maintainable, and efficient code. Experience building scalable scraping solutions for large-scale data collection. Familiarity with AWS technologies like S3, RDS, SNS, SQS, Lambda, and others is necessary.

Qualifications: Bachelor's/Master's degree in computer science or any related field.

Role Summary: Leading and mentoring a team of seasoned data engineers performing web scraping using various scraping techniques, then utilizing Python's Pandas library for data cleaning and manipulation, ingesting the data into a database/warehouse, and scheduling the scrapers using Airflow or other tools.

Role Overview: The Web Scraping Team is seeking a creative and detail-oriented leader to contribute to client projects and lead by example. This team develops essential applications, datasets, and alerts that directly support client investment decisions. Our focus is to maintain operational excellence by providing high-quality proprietary datasets, timely notifications, and exceptional service. The ideal candidate will be self-motivated, self-sufficient, and possess a passion for tinkering and a love of automation.

If you are interested in this opportunity, please reply with your updated profile as soon as possible to dbetal@vanassociates.com. Note: do not change the subject line while replying.
1. Total Exp:
2. Relevant experience in Python, Pandas, Data Cleansing, Data Transformation, Team Management:
3. Current CTC:
4. Expected CTC:
5. Official Notice Period:
6. Ready to work in Gurgaon:
7. Availability for MS Teams interviews on weekdays:

Thanks & Regards,
Devanshu Betal
Vanguard HR Associates Pvt. Ltd.
Phone No: +918851081580
E-Mail: dbetal@vanassociates.com

Posted 1 week ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Role: Senior Python Developer
Locations: Teynampet, Chennai / KK Nagar
Work from Office: 1pm to 10pm
Experience: 3+ years

Skills: 3+ years of strong experience in Python and 2 years in the Django web framework. Experience or knowledge in implementing various design patterns. Good understanding of the MVC framework and object-oriented programming. Experience in PGSQL/MySQL and MongoDB. Good knowledge of different frameworks, packages, and libraries: Django/Flask, Django ORM, Unit Test, NumPy, Pandas, Scrapy, etc. Experience developing in a Linux environment, Git, and Agile methodology. Good to have knowledge of any one of the JavaScript frameworks: jQuery, Angular, ReactJS. Good to have experience in implementing charts and graphs using various libraries. Good to have experience in multi-threading and REST API management. (ref: hirist.tech)

Posted 1 week ago

Apply

3.0 - 7.0 years

12 - 18 Lacs

Hyderabad

Work from Office

Naukri logo

Roles and Responsibilities: Design, develop, test, and deploy scalable web applications using Python frameworks such as Django or Flask. Collaborate with cross-functional teams to identify requirements and implement solutions that meet business needs. Ensure high-quality code by writing unit tests and integrating with databases like MongoDB; implementing data science concepts using NumPy, Pandas, Scrapy, etc. is good to have. Participate in Agile development methodologies to deliver projects on time and within budget. Troubleshoot issues related to application performance optimization and scalability.

Desired Candidate Profile: 3-7 years of experience in software development with expertise in the Python programming language. A good understanding of full-stack development principles, including front-end technologies like HTML/CSS/JavaScript, is an added advantage. Proficiency in at least one Python framework (Django or Flask) along with knowledge of RESTful API design principles. Experience working with databases like MySQL and MongoDB; familiarity with data science libraries like NumPy/Pandas/Scrapy is an added advantage. We are looking for immediate joiners; interested candidates are requested to send their resume to vishal.d@techraq.com.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Linkedin logo

Job Designation: Data Scraping
Location: Ahmedabad
Years of Experience: 3+ years

We are seeking a skilled and motivated web crawler developer to join our team. The ideal candidate will have 3 to 5 years of experience in developing and maintaining robust web scraping solutions. You will be responsible for designing, implementing, and optimizing web crawlers to extract valuable data from diverse online sources. This role requires a strong understanding of web technologies, data handling, and problem-solving skills.

Responsibilities:
• Design, develop, and maintain efficient and scalable web crawlers using Python, Mozenda, etc.
• Utilize web scraping frameworks such as Scrapy, Beautiful Soup, or Selenium to extract data from websites.
• Implement and optimize data extraction logic using XPath, CSS selectors, and JSONPath (a minimal spider sketch follows this posting).
• Understand and effectively navigate website structures, and implement strategies to bypass anti-scraping measures.
• Test, maintain, and troubleshoot web scraping processes to identify and resolve any issues or errors.
• Ensure data integrity and quality through rigorous testing and validation.
• Monitor and troubleshoot crawler performance, identifying and resolving complex technical issues.
• Work with SQL and NoSQL databases to store and manage extracted data.
• Collaborate with cross-functional teams to define data requirements and deliver actionable insights.
• Maintain comprehensive documentation for all crawler development and maintenance activities.
• Demonstrate a strong understanding of the HTTP protocol and web technologies.
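For illustration, a minimal Scrapy spider showing the CSS- and XPath-based extraction plus pagination described above. The start URL and selectors are hypothetical:

```python
import scrapy

class CatalogSpider(scrapy.Spider):
    name = "catalog"
    start_urls = ["https://example.com/products"]  # hypothetical source

    def parse(self, response):
        for item in response.css("li.product"):  # CSS selector
            yield {
                "title": item.css("h3::text").get(),
                # Equivalent extraction via XPath, relative to the item node.
                "price": item.xpath(".//span[@class='price']/text()").get(),
            }
        # Follow pagination until the site runs out of "next" links.
        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```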

Posted 1 week ago

Apply

5.0 years

7 - 15 Lacs

Ahmedabad

On-site

GlassDoor logo

We are accepting applications for an experienced Data Engineer with a strong background in data scraping, cleaning, transformation, and automation. The ideal candidate will be responsible for building robust data pipelines, maintaining data integrity, and generating actionable dashboards and reports to support business decision-making.

Key Responsibilities: Develop and maintain scripts for scraping data from various sources, including APIs, websites, and databases. Perform data cleaning, transformation, and normalization to ensure consistency and usability across all data sets (a minimal sketch follows this posting). Design and implement relational and non-relational data tables and frames for scalable data storage and analysis. Build automated data pipelines to ensure timely and accurate data availability. Create and manage interactive dashboards and reports using tools such as Power BI, Tableau, or similar platforms. Write and maintain data automation scripts to streamline ETL (Extract, Transform, Load) processes. Ensure data quality, governance, and compliance with internal and external regulations. Monitor and optimize the performance of data workflows and pipelines.

Qualifications & Skills: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field. Minimum of 5 years of experience in a data engineering or similar role. Proficient in Python (especially for data scraping and automation), with strong hands-on experience with Pandas, NumPy, and other data manipulation libraries. Experience with web scraping tools and techniques (e.g., BeautifulSoup, Scrapy, Selenium). Strong SQL skills and experience working with relational databases (e.g., PostgreSQL, MySQL) and data warehouses (e.g., Redshift, Snowflake, BigQuery). Familiarity with data visualization tools like Power BI, Tableau, or Looker. Knowledge of ETL tools and orchestration frameworks such as Apache Airflow, Luigi, or Prefect. Experience with version control systems like Git and collaborative platforms like Jira or Confluence. Strong understanding of data security, privacy, and governance best practices. Excellent problem-solving skills and attention to detail.

Preferred Qualifications: Experience with cloud platforms such as AWS, GCP, or Azure. Familiarity with NoSQL databases like MongoDB, Cassandra, or Elasticsearch. Understanding of CI/CD pipelines and DevOps practices related to data engineering.

Job Type: Full-Time (In-Office)
Work Days: Monday to Saturday
Job Types: Full-time, Permanent
Pay: ₹700,000.00 - ₹1,500,000.00 per year
Schedule: Day shift
Work Location: In person
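For illustration, a minimal pandas cleaning, normalization, and deduplication sketch of the kind this role describes. The file and column names are hypothetical:

```python
import pandas as pd

df = pd.read_csv("scraped_raw.csv")  # hypothetical scraper output

# Normalize: trim whitespace, unify case, coerce types.
df["company"] = df["company"].str.strip().str.title()
df["price"] = pd.to_numeric(df["price"], errors="coerce")
df["scraped_at"] = pd.to_datetime(df["scraped_at"], errors="coerce")

# Deduplicate on a business key, keeping the freshest record per key.
df = (df.sort_values("scraped_at")
        .drop_duplicates(subset=["company", "sku"], keep="last"))

df.to_csv("scraped_clean.csv", index=False)
```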

Posted 1 week ago

Apply

0 years

0 - 0 Lacs

India

On-site

GlassDoor logo

About the Role: We are looking for a skilled Python Developer with strong expertise in web scraping and data extraction. You will be responsible for designing and maintaining scalable scraping systems, handling large volumes of data, and ensuring the accuracy and integrity of data from various online sources. Responsibilities: Develop and maintain Python scripts for scraping structured and unstructured data from websites and APIs. Build robust, scalable, and efficient scraping solutions using libraries such as BeautifulSoup, Scrapy, Selenium, or Playwright. Monitor and optimize scraping performance and manage data pipelines. Handle website structure changes, anti-bot protections, and CAPTCHA bypassing when necessary. Store, clean, and normalize scraped data using databases (e.g., PostgreSQL, MongoDB) or cloud solutions. Collaborate with data analysts, engineers, and product managers to define data needs and deliver insights. Ensure compliance with legal and ethical standards of data collection. Required Skills: Strong proficiency in Python, especially in web scraping. Solid understanding of HTML, CSS, JavaScript, HTTP protocols, and browser behavior. Familiarity with RESTful APIs, JSON, and XML. Experience working with databases (SQL or NoSQL). Basic knowledge of cloud platforms (AWS, GCP, or Azure) and containerization (Docker) is a plus. Preferred Qualifications: Bachelor's degree in Computer Science, Information Technology, or related field. Experience handling large-scale scraping projects. Background in using version control systems (e.g., Git). Understanding of data privacy laws (e.g., GDPR, CCPA). Job Type: Full-time Pay: ₹20,000.00 - ₹30,000.00 per month Benefits: Health insurance Leave encashment Paid sick time Paid time off Provident Fund Schedule: Day shift Work Location: In person
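For illustration, a minimal Playwright sketch for the dynamic, JavaScript-heavy pages the responsibilities mention. The URL and selector are hypothetical:

```python
from playwright.sync_api import sync_playwright

# Hypothetical JavaScript-heavy target; selectors are illustrative.
URL = "https://example.com/listings"

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    # Wait for network activity to settle so client-rendered rows exist.
    page.goto(URL, wait_until="networkidle")
    rows = page.locator("div.listing").all_inner_texts()
    browser.close()

print(rows[:5])
```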

Posted 1 week ago

Apply

0 years

0 - 0 Lacs

India

On-site

GlassDoor logo

We are looking for a skilled Python Developer with strong expertise in web scraping and data extraction. You will be responsible for designing and maintaining scalable scraping systems, handling large volumes of data, and ensuring data accuracy and integrity from various online sources. Responsibilities: Develop and maintain Python scripts for scraping structured and unstructured data from websites and APIs. Build robust, scalable, and efficient scraping solutions using libraries such as BeautifulSoup, Scrapy, Selenium, or Playwright. Monitor and optimize scraping performance and manage data pipelines. Handle website structure changes, anti-bot protections, and CAPTCHA bypassing when necessary. Store, clean, and normalize scraped data using databases (e.g., PostgreSQL, MongoDB) or cloud solutions. Collaborate with data analysts, engineers, and product managers to define data needs and deliver insights. Ensure compliance with legal and ethical standards of data collection. Required Skills: Strong proficiency in Python, especially in web scraping. Solid understanding of HTML, CSS, JavaScript, HTTP protocols, and browser behavior. Familiarity with RESTful APIs, JSON, and XML. Experience working with databases (SQL or NoSQL). Basic knowledge of cloud platforms (AWS, GCP, or Azure) and containerization (Docker) is a plus. Preferred Qualifications: Bachelor's degree in Computer Science, Information Technology, or related field. Experience handling large-scale scraping projects. Background in using version control systems (e.g., Git). Understanding of data privacy laws (e.g., GDPR, CCPA). Job Type: Full-time Pay: ₹20,000.00 - ₹30,000.00 per month Benefits: Health insurance Leave encashment Paid sick time Paid time off Provident Fund Schedule: Day shift Work Location: In person

Posted 1 week ago

Apply

0 years

0 - 0 Lacs

India

On-site

GlassDoor logo

About the Role: We are looking for a skilled Python Developer with strong expertise in web scraping and data extraction. You will be responsible for designing and maintaining scalable scraping systems, handling large volumes of data, and ensuring the accuracy and integrity of data from various online sources. Responsibilities: Develop and maintain Python scripts for scraping structured and unstructured data from websites and APIs. Build robust, scalable, and efficient scraping solutions using libraries such as BeautifulSoup, Scrapy, Selenium, or Playwright. Monitor and optimize scraping performance and manage data pipelines. Handle website structure changes, anti-bot protections, and CAPTCHA bypassing when necessary. Store, clean, and normalize scraped data using databases (e.g., PostgreSQL, MongoDB) or cloud solutions. Collaborate with data analysts, engineers, and product managers to define data needs and deliver insights. Ensure compliance with legal and ethical standards of data collection. Required Skills: Strong proficiency in Python, especially in web scraping. Solid understanding of HTML, CSS, JavaScript, HTTP protocols, and browser behavior. Familiarity with RESTful APIs, JSON, and XML. Experience working with databases (SQL or NoSQL). Basic knowledge of cloud platforms (AWS, GCP, or Azure) and containerization (Docker) is a plus. Preferred Qualifications: Bachelor's degree in Computer Science, Information Technology, or related field. Experience handling large-scale scraping projects. Background in using version control systems (e.g., Git). Understanding of data privacy laws (e.g., GDPR, CCPA). Job Type: Full-time Pay: ₹20,000.00 - ₹30,000.00 per month Benefits: Health insurance Paid sick time Paid time off Provident Fund Schedule: Day shift Work Location: In person

Posted 1 week ago

Apply

1.0 years

0 - 0 Lacs

India

On-site

GlassDoor logo

Job Summary: We are looking for a skilled Python Developer with strong expertise in web scraping and data extraction. You will be responsible for designing and maintaining scalable scraping systems, handling large volumes of data, and ensuring data accuracy and integrity from various online sources. Responsibilities: Develop and maintain Python scripts for scraping structured and unstructured data from websites and APIs. Build robust, scalable, and efficient scraping solutions using libraries such as BeautifulSoup, Scrapy, Selenium, or Playwright. Monitor and optimize scraping performance and manage data pipelines. Handle website structure changes, anti-bot protections, and CAPTCHA bypassing when necessary. Store, clean, and normalize scraped data using databases (e.g., PostgreSQL, MongoDB) or cloud solutions. Collaborate with data analysts, engineers, and product managers to define data needs and deliver insights. Ensure compliance with legal and ethical standards of data collection. Required Skills: · Strong proficiency in Python, especially in web scraping. · Solid understanding of HTML, CSS, JavaScript, HTTP protocols, and browser behavior. · Familiarity with RESTful APIs, JSON, and XML. · Experience working with databases (SQL or NoSQL). · Basic knowledge of cloud platforms (AWS, GCP, or Azure) and containerization (Docker) is a plus. Preferred Qualifications: · Bachelor's degree in Computer Science, Information Technology, or related field. · Experience handling large-scale scraping projects. · Background in using version control systems (e.g., Git). · Understanding of data privacy laws (e.g., GDPR, CCPA). Job Type: Full-time Pay: ₹25,000.00 - ₹30,000.00 per month Benefits: Health insurance Leave encashment Paid sick time Paid time off Provident Fund Schedule: Day shift Experience: Data visualization: 1 year (Required) Work Location: In person

Posted 1 week ago

Apply

0.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

Indeed logo

Data Science and AI Developer

Job Description: We are seeking a highly skilled and motivated Data Science and AI Developer to join our dynamic team. As a Data Science and AI Developer, you will be responsible for leveraging cutting-edge technologies to develop innovative solutions that drive business insights and enhance decision-making processes.

Key Responsibilities:
1. Develop and deploy machine learning models for predictive analytics, classification, clustering, and anomaly detection.
2. Design and implement algorithms for data mining, pattern recognition, and natural language processing.
3. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
4. Utilize advanced statistical techniques to analyze complex datasets and extract actionable insights.
5. Implement scalable data pipelines for data ingestion, preprocessing, feature engineering, and model training.
6. Stay updated with the latest advancements in data science, machine learning, and artificial intelligence research.
7. Optimize model performance and scalability through experimentation and iteration.
8. Communicate findings and results to stakeholders through reports, presentations, and visualizations.
9. Ensure compliance with data privacy regulations and best practices in data handling and security.
10. Mentor junior team members and provide technical guidance and support.

Requirements:
1. Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field.
2. Proven experience in developing and deploying machine learning models in production environments.
3. Proficiency in programming languages such as Python, R, or Scala, with strong software engineering skills.
4. Hands-on experience with machine learning libraries/frameworks such as TensorFlow, PyTorch, Scikit-learn, or Spark MLlib.
5. Solid understanding of data structures, algorithms, and computer science fundamentals.
6. Excellent problem-solving skills and the ability to think creatively to overcome challenges.
7. Strong communication and interpersonal skills, with the ability to work effectively in a collaborative team environment.
8. Certification in Data Science, Machine Learning, or Artificial Intelligence (e.g., Coursera, edX, Udacity, etc.).
9. Experience with cloud platforms such as AWS, Azure, or Google Cloud is a plus.
10. Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka) is an advantage.

Data Manipulation and Analysis: NumPy, Pandas
Data Visualization: Matplotlib, Seaborn, Power BI
Machine Learning Libraries: Scikit-learn, TensorFlow, Keras
Statistical Analysis: SciPy
Web Scraping: Scrapy
IDE: PyCharm, Google Colab

HTML/CSS/JavaScript/React JS: Proficiency in these core web development technologies is a must.
Python Django Expertise: In-depth knowledge of e-commerce functionalities or deep Python Django knowledge.
Theming: Proven experience in designing and implementing custom themes for Python websites.
Responsive Design: Strong understanding of responsive design principles and the ability to create visually appealing and user-friendly interfaces for various devices.
Problem Solving: Excellent problem-solving skills with the ability to troubleshoot and resolve issues independently.
Collaboration: Ability to work closely with cross-functional teams, including marketing and design, to bring creative visions to life.
Interns must know how to connect the front end with data science components, and vice versa.

Benefits: Competitive salary package. Flexible working hours. Opportunities for career growth and professional development. Dynamic and innovative work environment.

Job Type: Full-time
Pay: ₹8,000.00 - ₹12,000.00 per month
Schedule: Day shift
Ability to commute/relocate: Thiruvananthapuram, Kerala: Reliably commute or planning to relocate before starting work (Preferred)
Work Location: In person

Posted 1 week ago

Apply

5.0 years

0 Lacs

Greater Delhi Area

Remote

Linkedin logo

ABOUT THE PYTHON DATA ENGINEER ROLE: We are looking for a skilled Python Data Engineer to join our team and work on building high-performance applications and scalable data solutions. In this role, you will be responsible for designing, developing, and maintaining robust Python-based applications, optimizing data pipelines, and integrating various APIs and databases. This is more than just a coding role—it requires strategic thinking, creativity, and a passion for data-driven decision-making to drive results and innovation.

KEY RESPONSIBILITIES: Develop, test, and maintain efficient Python applications. Design, develop, and maintain ETL pipelines for efficient data extraction, transformation, and loading. Implement and integrate APIs, web scraping techniques, and database queries to extract data from various sources. Design and implement algorithms for data processing, transformation, and analysis. Write optimized SQL queries and work with relational databases to manage and analyze large datasets. Collaborate with cross-functional teams to understand technical requirements and deliver high-quality solutions. Ensure code quality, performance, and scalability through best practices and code reviews. Stay updated with the latest advancements in Python, data engineering, and backend development.

REQUIRED QUALIFICATIONS: Bachelor's/Master's degree in Computer Science, Engineering, or a related field. 3-5+ years of hands-on experience as a Data Engineer using Python. Proficiency in Python frameworks and libraries such as Pandas, NumPy, and Scrapy. Experience with data visualization tools such as Power BI and Tableau. Strong understanding of relational databases and SQL. Experience working with cloud platforms such as AWS. Strong problem-solving skills with an analytical mindset. Excellent communication skills and the ability to work in a collaborative team environment.

WHY JOIN US? Highly inclusive and collaborative culture built on mutual respect. Focus on core values, initiative, leadership, and adaptability. Strong emphasis on personal and professional development. Flexibility to work remotely and/or hybrid indefinitely.

ABOUT WIN: Founded in 1993, WIN is a highly innovative proptech company revolutionizing the real estate industry with cutting-edge software platforms and products. With the stability and reputation of a 30-year legacy paired with the curiosity and agility of a start-up, we've been recognized as an Entrepreneur 500 company, one of the Fastest Growing Companies, and the Most Innovative Home Services Company.

OUR CULTURE: Our colleagues are driven by curiosity, tinkering, and a desire to make an impact. They enjoy a culture of high energy and collaboration where we listen to each other with empathy, experience personal and professional growth, and celebrate small victories and big accomplishments. Click here to learn more about our company and culture: https://www.linkedin.com/company/winhomeinspection/life

Posted 1 week ago

Apply

5.0 years

0 Lacs

New Delhi, Delhi, India

On-site

Linkedin logo

About the Role: We are looking for a hands-on Data Engineer to join our team and take full ownership of scraping pipelines and data quality. You'll be working on data from 60+ websites involving PDFs, processed via OCR and stored in MySQL/PostgreSQL. You'll build robust, self-healing pipelines and fix common data issues (missing fields, duplication, formatting errors); a minimal OCR clean-up sketch follows this posting.

Responsibilities: Own and optimize Airflow scraping DAGs for 60+ sites. Implement validation checks, retry logic, and error alerts. Build pre-processing routines to clean OCR'd text. Create data normalization and deduplication workflows. Maintain data integrity across MySQL and PostgreSQL. Collaborate with the ML team for downstream AI use cases.

Requirements: 2-5 years of experience in Python-based data engineering. Experience with Airflow, Pandas, and OCR (Tesseract or AWS Textract). Solid SQL and schema design skills (MySQL/PostgreSQL). Familiarity with CSV processing and data pipelines. Bonus: experience with scraping using Scrapy or Selenium.

Location: Delhi (in-office only)
Salary Range: 50-80k/month
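For illustration, a minimal sketch of an OCR'd-text clean-up routine using pytesseract. The artifact fixes shown target common Tesseract quirks, and the input file name is hypothetical:

```python
import re

import pytesseract
from PIL import Image

def ocr_page(path: str) -> str:
    """OCR one scanned page and repair common Tesseract artifacts."""
    text = pytesseract.image_to_string(Image.open(path))
    text = text.replace("\x0c", " ")        # drop the trailing form feed
    text = re.sub(r"-\n(\w)", r"\1", text)  # re-join hyphenated line breaks
    text = re.sub(r"[ \t]+", " ", text)     # collapse runs of spaces/tabs
    return text.strip()

print(ocr_page("notice_page_1.png"))  # hypothetical input file
```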

Posted 1 week ago

Apply

3.0 years

0 Lacs

India

Remote

Linkedin logo

Job Title: AI/ML + Backend Developer (SEO Automation & Technical Implementation)
Location: Remote (APAC preferred) | Full-Time or Contract

🔹 About the Role
We're looking for a technical powerhouse with 3-7 years of experience who blends backend engineering, AI/ML experience, and hands-on SEO implementation skills. This hybrid role will support our mission to scale intelligent SEO operations and automate key parts of our publishing workflows. You'll build custom tools and systems that integrate machine learning, backend development, and SEO performance logic — from programmatic content generation and internal linking engines to technical audits, schema injection, and Google Search Console automations.

🔹 What You'll Be Doing
🔧 Backend & Automation Development: Build internal tools and APIs using Python or Node.js. Automate content workflows (meta/gen content, redirects, schema, etc.). Integrate third-party APIs (GSC, Ahrefs, OpenAI, Gemini, Google Sheets).
🧠 AI/ML Workflows: Apply NLP models for entity recognition, summarization, and topic clustering. Deploy and manage ML inference pipelines. Work with LLMs to scale content enhancements (FAQs, headlines, refresh logic).
⚙️ SEO Automation & Technical Implementation: Run and implement technical SEO audits (crawl issues, sitemaps, indexing, Core Web Vitals). Automate internal linking, canonical tags, redirects, and structured data. Use tools like Screaming Frog CLI, the GSC API, and Cloudflare for scalable SEO execution.
📈 Performance Monitoring: Set up dashboards to monitor SEO KPIs and anomaly detection. Build alerting systems for performance drops, crawl issues, or deindexed content.

🔹 Key Skills Required
Languages & Tools: Python (FastAPI, Pandas, Scrapy, etc.) and/or Node.js. Databases (PostgreSQL, MongoDB, Redis). Docker, GitHub Actions, cloud (GCP/AWS preferred). GSC API, Screaming Frog CLI, Google Sheets API. OpenAI/Gemini API, LangChain or similar frameworks.
SEO Knowledge: Strong understanding of on-page and technical SEO. Experience with programmatic content, schema markup, and CWV improvements. Familiarity with common issues like crawl depth, duplication, orphan pages, and indexability.

🔹 Nice to Have
Experience with content/media/publishing websites. Familiarity with CI/CD and working in async product teams. Exposure to headless CMS or WordPress API integrations. Past experience automating large-scale content or SEO systems.

🔹 What You'll Get
The chance to work on large-scale content automation and modern SEO problems. High autonomy, technical ownership, and visibility in decision-making. Flexible remote work and performance-based incentives. Direct collaboration with SEO strategy and editorial stakeholders.

Posted 2 weeks ago

Apply

5.0 - 7.0 years

6 - 8 Lacs

Kolkata

Remote

Naukri logo

Note: Please don't apply if you do not have at least 3 years of Scrapy experience.

We are seeking a highly experienced Web Scraping Expert specialising in Scrapy-based web scraping and large-scale data extraction. This role is focused on building and optimizing web crawlers, handling anti-scraping measures, and ensuring efficient data pipelines for structured data collection. The ideal candidate will have 5+ years of hands-on experience developing Scrapy-based scraping solutions, implementing advanced evasion techniques, and managing high-volume web data extraction. You will collaborate with a cross-functional team to design, implement, and optimize scalable scraping systems that deliver high-quality, structured data for critical business needs.

Key Responsibilities
Scrapy-based Web Scraping Development: Develop and maintain scalable web crawlers using Scrapy to extract structured data from diverse sources. Optimize Scrapy spiders for efficiency, reliability, and speed while minimizing detection risks (a sample settings sketch follows this posting). Handle dynamic content using middlewares, browser-based scraping (Playwright/Selenium), and API integrations. Implement proxy rotation, user-agent switching, and CAPTCHA-solving techniques to bypass anti-bot measures.
Advanced Anti-Scraping Evasion Techniques: Utilize AI-driven approaches to adapt to bot detection and prevent blocks. Implement headless browser automation and request-mimicking strategies to mimic human behavior.
Data Processing & Pipeline Management: Extract, clean, and structure large-scale web data into structured formats like JSON, CSV, and databases. Optimize Scrapy pipelines for high-speed data processing and storage in MongoDB, PostgreSQL, or cloud storage (AWS S3).
Code Quality & Performance Optimization: Write clean, well-structured, and maintainable Python code for scraping solutions. Implement automated testing for data accuracy and scraper reliability. Continuously improve crawler efficiency by minimizing IP bans, request delays, and resource consumption.

Required Skills and Experience
Technical Expertise: 5+ years of professional experience in Python development with a focus on web scraping. Proficiency in Scrapy-based scraping. Strong understanding of HTML, CSS, JavaScript, and browser behavior. Experience with Docker is a plus. Expertise in handling APIs (RESTful and GraphQL) for data extraction. Proficiency in database systems like MongoDB and PostgreSQL. Strong knowledge of version control systems like Git and collaboration platforms like GitHub.
Key Attributes: Strong problem-solving and analytical skills, with a focus on efficient solutions for complex scraping challenges. Excellent communication skills, both written and verbal. A passion for data and a keen eye for detail.

Why Join Us? Work on cutting-edge scraping technologies and AI-driven solutions. Collaborate with a team of talented professionals in a growth-driven environment. Opportunity to influence the development of data-driven business strategies through advanced scraping techniques. Competitive compensation and benefits.
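For illustration, a sample of Scrapy settings that address the efficiency and ban-avoidance goals above. The values are conservative starting points, not the employer's configuration, and the user-agent string is hypothetical:

```python
# settings.py — polite, resilient crawl defaults (all are real Scrapy settings)
AUTOTHROTTLE_ENABLED = True          # adapt request delay to server latency
AUTOTHROTTLE_START_DELAY = 1.0
AUTOTHROTTLE_MAX_DELAY = 30.0
CONCURRENT_REQUESTS_PER_DOMAIN = 4   # keep per-site pressure low
DOWNLOAD_DELAY = 0.5
RETRY_ENABLED = True
RETRY_TIMES = 3
RETRY_HTTP_CODES = [429, 500, 502, 503, 504]
ROBOTSTXT_OBEY = True                # baseline for ethical crawling
USER_AGENT = "Mozilla/5.0 (compatible; ExampleBot/1.0)"  # hypothetical UA
```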

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote

Linkedin logo

Job Title: Data Automation Intern
Location: Remote
Duration: 3-6 months
Type: Unpaid Internship

About CollegePur: CollegePur is a remote educational consultancy dedicated to guiding students through their academic journeys. We focus on providing insightful resources and personalized support to help students make informed decisions about their education and careers.

Overview: We are seeking motivated Python Programming Interns to join our team. In this role, you will develop automated solutions to extract and organize information from public online sources. This experience will provide exposure to real-world data workflows and foundational concepts in data engineering and automation.

Key Responsibilities: Develop Python scripts to automate the retrieval of structured and semi-structured data from web and API sources (a minimal sketch follows this posting). Process, clean, and structure extracted information into usable formats (e.g., CSV, JSON). Assist in building simple pipelines for data refresh and scheduled extraction. Collaborate with internal teams to understand data needs and deliver high-quality datasets. Maintain clear documentation of code, logic, and workflow steps.

Preferred Skills: Proficiency in Python programming. Familiarity with web interaction tools such as Requests, BeautifulSoup, Selenium, or Scrapy. Understanding of HTML/CSS/DOM structures. Experience working with pandas for data manipulation and transformation. Basic knowledge of REST APIs and JSON data structures. Exposure to ETL concepts, especially the extract and transform stages, is a plus. Strong analytical and debugging skills.

Eligibility: Currently enrolled in a Bachelor's or Master's program in Computer Science, Data Science, or a related field. Self-taught developers with Python automation projects or coursework. Aspiring data engineers or developers interested in real-world data workflows. Individuals who enjoy working with data and turning it into structured, usable information.

Perks: Flexible working hours and remote work opportunity. Mentorship from experienced developers and analysts. Hands-on experience with live data extraction and automation projects. Certificate of completion and letter of recommendation.
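For illustration, a minimal API-to-CSV sketch of the retrieval-and-structuring task described above, using a public placeholder API rather than any real project source:

```python
import pandas as pd
import requests

# Public placeholder API used purely for illustration.
resp = requests.get("https://jsonplaceholder.typicode.com/users", timeout=30)
resp.raise_for_status()
records = resp.json()

# Flatten nested JSON and keep only the fields we need.
df = pd.json_normalize(records)[["id", "name", "email", "company.name"]]
df = df.rename(columns={"company.name": "company"})
df.to_csv("users.csv", index=False)
```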

Posted 2 weeks ago

Apply

1.0 years

0 Lacs

Mohali

On-site

GlassDoor logo

Hiring: Python Developer (Web Scraping) – Mohali
Location: Mohali
Email: hr@baselineitdevelopment.com
Contact: 9888122266
Experience: 1 Year

Required Skills: Strong knowledge of Python. Hands-on experience in web scraping using tools like BeautifulSoup, Scrapy, or Selenium. Understanding of APIs and data parsing. Familiarity with data cleaning and automation scripts. Basic knowledge of databases (MySQL/PostgreSQL) is a plus. Good problem-solving skills and the ability to write clean, efficient code.

Roles & Responsibilities: Develop and maintain web scraping scripts to extract data from various websites. Work with large datasets and automate data collection processes. Ensure data accuracy and consistency. Collaborate with the team to understand requirements and deliver solutions.

Job Types: Full-time, Permanent
Pay: From ₹20,000.00 per month
Schedule: Day shift, Monday to Friday, Morning shift
Supplemental Pay: Performance bonus
Work Location: In person

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies