5.0 - 8.0 years
10 - 20 Lacs
Pune
Work from Office
5-8 years of experience in automation testing with Python/Advanced Python. Proficiency in web application and REST API testing. ISTQB Foundation Level certification is a plus.
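For illustration, a minimal pytest sketch of the kind of REST API check this role involves; the base URL, endpoint, and response fields are hypothetical, not taken from the posting:

```python
# Minimal REST API test sketch (pytest + requests).
# The base URL, endpoint, and expected fields are hypothetical.
import requests

BASE_URL = "https://api.example.com"  # placeholder service


def test_get_user_returns_expected_shape():
    resp = requests.get(f"{BASE_URL}/users/42", timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    # Validate the response contract, not exact values.
    assert {"id", "name", "email"} <= body.keys()
    assert body["id"] == 42


def test_create_user_rejects_missing_fields():
    resp = requests.post(f"{BASE_URL}/users", json={}, timeout=10)
    assert resp.status_code in (400, 422)  # client error expected
```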
Posted 6 days ago
4.0 - 7.0 years
5 - 7 Lacs
Kolkata, Bhiwani, Raipur
Work from Office
Joining Location: Raipur, Chhattisgarh (relocation required)
Experience: Minimum 4 years
Job Type: Full-time
Accommodation: Provided by the company

Job Description: We are looking for an experienced and passionate Python Trainer to join our training division. The selected candidate will deliver Python training sessions to university/college students or corporate employees at assigned locations. This is an exciting opportunity for individuals who are enthusiastic about teaching and have a strong command of Python programming.

Key Responsibilities:
- Deliver structured and engaging training sessions on Python programming to university students.
- Develop, update, and maintain training content, assignments, and assessments.
- Evaluate students' performance through assessments, quizzes, and practical projects.
- Ensure training objectives are met within the given timelines.
- Resolve students' doubts and provide additional mentoring when needed.
- Relocate to various training locations as per project requirements (initial joining at Raipur).
- Provide feedback to the internal team on course content and student engagement.

Required Skills & Qualifications:
- Minimum 4 years of experience in Python development and/or training.
- Strong understanding of core Python concepts, libraries, and frameworks.
- Good communication and classroom management skills.
- Prior experience training college/university students or corporate employees is a plus.
- Willingness to relocate and stay at different locations during training assignments lasting a minimum of one semester (6 months) or more.
- Flexibility to adapt to dynamic project needs and schedules.

Location & Travel:
- Initial joining location is Raipur, Chhattisgarh.
- Trainers can be based anywhere in India but must be willing to relocate as per university location.
- Accommodation will be provided by the company at the training site or university.
Posted 1 week ago
1.0 - 3.0 years
8 - 10 Lacs
Mysore, Karnataka, India
On-site
Technical Skills Required:
- ETL Concepts: Strong understanding of Extract, Transform, Load (ETL) processes; ability to design, develop, and maintain robust ETL pipelines (see the pipeline sketch below).
- Database Fundamentals: Proficiency with relational databases (e.g., MySQL, PostgreSQL, Oracle, or MS SQL Server); knowledge of database design and optimization techniques.
- Basic Data Visualization: Ability to create simple dashboards or reports using visualization tools (e.g., Tableau, Power BI, or similar).
- Query Optimization: Expertise in writing efficient, optimized queries to handle large datasets.
- Testing and Documentation: Experience validating data accuracy and integrity through rigorous testing; ability to document data workflows, processes, and technical specifications clearly.

Key Responsibilities:
- Data Engineering: Design, develop, and implement scalable data pipelines to support business needs; ensure data quality and integrity through testing and monitoring; optimize ETL processes for performance and reliability.
- Database Management: Manage and maintain databases, ensuring high availability and security; troubleshoot database-related issues and optimize performance.
- Collaboration: Work closely with data analysts, data scientists, and other stakeholders to understand and deliver on data requirements; support data-related technical issues and propose solutions.
- Documentation and Reporting: Create and maintain comprehensive documentation for data workflows and technical processes; develop simple reports or dashboards to visualize key metrics and trends.
- Learning and Adapting: Stay updated with new tools, technologies, and methodologies in data engineering; adapt quickly to new challenges and project requirements.

Additional Requirements:
- Strong communication skills, both written and verbal.
- Analytical mindset with the ability to solve complex data problems.
- Quick learner, willing to adopt new tools and technologies as needed.
- Flexibility to work in shifts, if required.

Preferred Skills (Not Mandatory):
- Experience with cloud platforms (e.g., AWS, Azure, or GCP).
- Familiarity with big data technologies such as Hadoop or Spark.
- Basic understanding of machine learning concepts and data science workflows.

Mandatory Key Skills: Python Programming, ETL Concepts, Database Management, Query Optimization, Data Visualization, Cloud Platform, AWS, Azure, GCP, Advanced Python, Tableau, Power BI, SQL
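As a rough illustration of the ETL work this posting describes, a minimal extract-transform-load sketch in Python using sqlite3; the table names, columns, and cleaning rules are hypothetical:

```python
# Minimal ETL sketch: extract raw rows, normalize them, load into a target table.
# Table and column names are hypothetical.
import sqlite3


def run_etl(conn: sqlite3.Connection) -> int:
    # Extract: pull raw rows from the staging table.
    rows = conn.execute("SELECT id, email, amount FROM raw_orders").fetchall()

    # Transform: normalize emails, drop rows missing an email or a positive amount.
    clean = [(i, email.strip().lower(), amt)
             for i, email, amt in rows
             if email and amt is not None and amt > 0]

    # Load: idempotent upsert into the curated table.
    conn.executemany(
        "INSERT OR REPLACE INTO orders_clean (id, email, amount) VALUES (?, ?, ?)",
        clean,
    )
    conn.commit()
    return len(clean)


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_orders (id INTEGER, email TEXT, amount REAL)")
    conn.execute("CREATE TABLE orders_clean (id INTEGER PRIMARY KEY, email TEXT, amount REAL)")
    conn.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)",
                     [(1, " A@X.COM ", 10.0), (2, None, 5.0), (3, "b@x.com", -1)])
    print(run_etl(conn))  # -> 1: only the first row survives cleaning
```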
Posted 1 week ago
4.0 - 9.0 years
8 - 18 Lacs
Navi Mumbai, Pune, Mumbai (All Areas)
Hybrid
Job Overview: We are seeking a highly skilled Data Engineer with expertise in SQL, Python, data warehousing, AWS, Airflow, ETL, and data modeling. The ideal candidate will design, develop, and maintain robust data pipelines, ensuring efficient data processing and integration across platforms. This role requires strong problem-solving skills, an analytical mindset, and a deep understanding of modern data engineering frameworks.

Key Responsibilities:
- Design, develop, and optimize scalable data pipelines and ETL processes to support business intelligence, analytics, and operational data needs.
- Build and maintain data models (conceptual, logical, and physical) to improve data storage, retrieval, and transformation efficiency.
- Develop, test, and optimize complex SQL queries for efficient data extraction, transformation, and loading (ETL).
- Implement and manage data warehousing solutions (e.g., Snowflake, Redshift, BigQuery) for structured and unstructured data storage.
- Work with AWS, Azure, and cloud-based data solutions to build high-performance data ecosystems.
- Use Apache Airflow to orchestrate workflows and automate data pipeline execution (a minimal DAG sketch follows this listing).
- Collaborate with cross-functional teams to understand business data requirements and ensure alignment with data strategies.
- Ensure data integrity, security, and compliance with governance policies and best practices.
- Monitor, troubleshoot, and improve the performance of existing data systems for scalability and reliability.
- Stay current with emerging data engineering technologies, frameworks, and best practices to drive continuous improvement.

Required Skills & Qualifications:
- Proficiency in SQL for query development, performance tuning, and optimization.
- Strong Python programming skills for data processing, automation, and scripting.
- Hands-on experience with ETL development, data integration, and transformation workflows.
- Expertise in data modeling for efficient database and data warehouse design.
- Experience with cloud platforms such as AWS (S3, Redshift, Lambda), Azure, or GCP.
- Working knowledge of Airflow or similar workflow orchestration tools.
- Familiarity with big data frameworks like Hadoop or Spark (preferred but not mandatory).
- Strong problem-solving skills and the ability to work in a fast-paced, dynamic environment.
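To illustrate the Airflow orchestration mentioned above, a minimal TaskFlow-style DAG sketch (Airflow 2.4+); the task bodies, schedule, and DAG name are placeholders, not the employer's actual pipeline:

```python
# Minimal Airflow DAG sketch (TaskFlow API, Airflow 2.4+).
# Task bodies, schedule, and names are placeholders.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False, tags=["example"])
def daily_etl():
    @task
    def extract() -> list[dict]:
        # A real task would read from S3, an API, or a database.
        return [{"id": 1, "value": 10}, {"id": 2, "value": None}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Drop incomplete records.
        return [r for r in rows if r["value"] is not None]

    @task
    def load(rows: list[dict]) -> None:
        # A real task would write to Redshift, Snowflake, etc.
        print(f"loading {len(rows)} rows")

    load(transform(extract()))


daily_etl()
```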
Posted 1 week ago
5.0 - 7.0 years
25 - 35 Lacs
Bengaluru
Work from Office
Senior Software Engineer - Backend (Python)
Experience: 5-7 years
Salary: INR 25-35 Lacs per annum
Preferred Notice Period: Within 30 days
Shift: 09:00 AM to 06:00 PM IST
Opportunity Type: Onsite (Bengaluru)
Placement Type: Contractual
Contract Duration: Full-time, indefinite period
(Note: This is a requirement for one of Uplers' clients.)

Must-have skills: Advanced Python, FastAPI, API & microservices architecture, cloud infrastructure (AWS), Docker/Kubernetes, database management (PostgreSQL/MySQL/MongoDB/Redis), integration with ML/video systems, Flask/Django
Good-to-have skills: Asynchronous programming, security best practices, stream processing & messaging, domain knowledge in AI/computer vision

RadiusAI (one of Uplers' clients) is looking for a Senior Software Engineer - Backend (Python) who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

Role Overview: RadiusAI is looking for a Senior Software Engineer (Python) to build and optimize the backend infrastructure that drives our real-time AI products. This is a hands-on role ideal for an engineer with a deep understanding of backend architecture, API design, and distributed systems who can scale systems to support intensive machine learning and video processing workloads. You will be a key part of a cross-functional team building robust, scalable, and secure platforms for AI deployment.

Key Responsibilities:
- Design and implement backend services, APIs, and data pipelines to support AI and CV platforms.
- Build scalable microservices and RESTful APIs using Python (FastAPI, Flask, or Django); a minimal FastAPI sketch follows this listing.
- Integrate with computer vision systems and ML inference engines via APIs or streaming data interfaces.
- Optimize system performance for real-time or near-real-time processing, especially in video-based environments.
- Work with cloud services (AWS, GCP, or Azure) for deployment, scaling, and observability.
- Implement robust logging, monitoring, and alerting across backend services.
- Collaborate closely with ML engineers, DevOps, and frontend teams to deliver full-stack features.
- Own the entire software development lifecycle: architecture, development, testing, deployment, and maintenance.
- Write clean, testable, scalable, and maintainable code.
- Participate in code reviews, mentoring, and setting engineering best practices.

Required Qualifications:
- 5+ years of experience in backend development, with Python as the primary language.
- Strong experience with Python web frameworks such as FastAPI, Django, or Flask.
- Expertise in designing and building RESTful APIs and microservices architectures.
- Solid understanding of software architecture, design patterns, and scalability principles.
- Experience working with databases (PostgreSQL, MySQL, MongoDB, Redis, etc.).
- Proficiency with Docker and Kubernetes, and experience containerizing applications for local and cloud deployment.
- Hands-on experience working with cloud platforms.
- Experience integrating with machine learning models and handling high-throughput data (image/video or time-series is a plus).
- Familiarity with CI/CD practices, Git, unit testing, and agile methodologies.
- Strong problem-solving skills and a collaborative mindset.

Preferred Qualifications:
- Experience with asynchronous programming (e.g., asyncio, aiohttp, FastAPI with async).
- Familiarity with message queues and stream processing (Kafka, RabbitMQ, Redis Streams, etc.).
- Exposure to real-time data processing systems, especially involving video or IoT sensor data.
- Knowledge of security best practices in backend systems (authentication, authorization, rate limiting).
- Prior experience in computer vision or AI-focused products is a strong plus.
- Contributions to open-source Python projects or backend infrastructure tooling.

Interview Rounds:
1. Technical screening
2. Live coding
3. Technical & cultural discussion

How to apply for this opportunity (easy 3-step process):
1. Click Apply and register or log in on our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances of being shortlisted and meeting the client for the interview!

About Our Client: RadiusAI is a pioneering computer vision analytics company revolutionizing retail operations with advanced, human-centric AI solutions. We offer the world's most advanced VisionAI checkout, and we provide real-time data to improve operational efficiency across the entire retail industry, focusing on enterprise-level customers and secure edge integration.

About Uplers: Uplers is the #1 hiring platform for SaaS companies, designed to help you hire top product and engineering talent quickly and efficiently. Our end-to-end AI-powered platform combines artificial intelligence with human expertise to connect you with the best engineering talent from India. With over 1M deeply vetted professionals, Uplers streamlines the hiring process, reducing lengthy screening times and ensuring you find the perfect fit. Companies like GitLab, Twilio, TripAdvisor, and Airbnb trust Uplers to scale their tech and digital teams effectively and cost-efficiently. Experience a simpler, faster, and more reliable hiring process with Uplers today.
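As a sketch of the FastAPI microservice style this role calls for, a minimal async endpoint pair; the Detection model, routes, and in-memory store are hypothetical stand-ins, not RadiusAI's actual API:

```python
# Minimal async FastAPI sketch. The Detection model and in-memory
# store are hypothetical stand-ins for a real service and database.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="detections-service")


class Detection(BaseModel):
    camera_id: str
    label: str
    confidence: float


_store: list[Detection] = []  # placeholder for PostgreSQL/Redis


@app.post("/detections", status_code=201)
async def create_detection(d: Detection) -> Detection:
    _store.append(d)
    return d


@app.get("/detections")
async def list_detections(min_confidence: float = 0.0) -> list[Detection]:
    # Filter server-side so clients only receive confident detections.
    return [d for d in _store if d.confidence >= min_confidence]

# Run locally with: uvicorn app:app --reload
```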
Posted 1 week ago
4.0 - 7.0 years
4 - 9 Lacs
Pune
Hybrid
Role Overview: This hybrid role sits within the Distribution Data Stewardship Team and combines operational and technical responsibilities to ensure data accuracy, integrity, and process optimization across sales reporting functions.

Key Responsibilities:
- Support sales reporting inquiries from sales staff at all levels.
- Reconcile omnibus activity with sales reporting systems (a minimal reconciliation sketch follows this listing).
- Analyze data flows to assess impact on commissions and reporting.
- Perform data audits and updates to ensure integrity.
- Lead process optimization and automation initiatives.
- Manage wholesaler commission processes, including adjustments and manual submissions.
- Oversee manual data integration from intermediaries.
- Execute territory alignment changes to meet business objectives.
- Contribute to team initiatives and other responsibilities as assigned.

Growth Opportunities:
- Exposure to all facets of sales reporting and commission processes.
- Opportunities to develop project and relationship management skills.
- Potential to explore leadership or technical specialist roles within the firm.

Qualifications:
- Bachelor's degree in Computer Engineering or a related field.
- 4-7 years of experience with Python programming and automation.
- Strong background in SQL and data analysis.
- Experience in relationship/customer management and leading teams.
- Experience working with Salesforce is a plus.

Required Skills:
- Technical proficiency in Python and SQL.
- Strong communication skills and stakeholder engagement.
- High attention to data integrity and detail.
- Self-directed with excellent time management.
- Project coordination and documentation skills.
- Proficiency in MS Office, especially Excel.
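For a flavor of the reconciliation work described above, a minimal pandas sketch comparing an omnibus feed against a sales reporting extract; the column names and data are hypothetical:

```python
# Minimal reconciliation sketch with pandas. Columns and data are hypothetical.
import pandas as pd

omnibus = pd.DataFrame({"trade_id": [1, 2, 3], "amount": [100.0, 250.0, 75.0]})
reporting = pd.DataFrame({"trade_id": [1, 2, 4], "amount": [100.0, 260.0, 50.0]})

# Outer merge with an indicator column flags one-sided records.
merged = omnibus.merge(reporting, on="trade_id", how="outer",
                       suffixes=("_omnibus", "_reporting"), indicator=True)

# Breaks: trades present on only one side, or amounts that disagree.
missing = merged[merged["_merge"] != "both"]
mismatched = merged[(merged["_merge"] == "both") &
                    (merged["amount_omnibus"] != merged["amount_reporting"])]

print(missing[["trade_id", "_merge"]])
print(mismatched[["trade_id", "amount_omnibus", "amount_reporting"]])
```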
Posted 2 weeks ago
8.0 - 13.0 years
25 - 35 Lacs
Bengaluru
Work from Office
Johnson & Johnson MedTech is seeking a Sr Eng Data Engineering for the Digital Surgery Platform (DSP) in Bangalore, India.

Johnson & Johnson (J&J) stands as the world's leading manufacturer of healthcare products and a service provider in the pharmaceutical and medical device sectors. At Johnson & Johnson MedTech's Digital Surgery Platform, we are pioneering the future of healthcare by harnessing the power of people and technology, transitioning to a digital-first MedTech enterprise. With a focus on innovation and an ambitious strategic vision, we are integrating robotic-assisted surgery platforms, connected medical devices, surgical instruments, medical imaging, surgical efficiency solutions, and OR workflow into the next-generation MedTech platform. This initiative will also foster new surgical insights, improve supply chain innovation, use cloud infrastructure, incorporate cybersecurity, collaborate with hospital EMRs, and elevate our digital solutions. We are a diverse and growing team that nurtures creativity, deep understanding of data processing techniques, and the use of sophisticated analytics technologies to deliver results.

Overview: As a Sr Eng Data Engineering for the J&J MedTech Digital Surgery Platform (DSP), you will play a pivotal role in building the modern cloud data platform through your in-depth technical expertise and interpersonal skills. You will focus on accelerating digital product development as part of the multifunctional, fast-paced DSP data platform team and contribute to the digital transformation through innovative data solutions. A key success criterion for this role is ensuring the quality of DSP software solutions while collaborating effectively with the core infrastructure and other engineering teams and working closely with the DSP security and technical quality partners.

Responsibilities:
- Work with platform data engineering, core platform, security, and technical quality teams to design, implement, and deploy data engineering solutions.
- Develop pipelines for ingestion, transformation, orchestration, and consumption of various types of data.
- Design and deploy data layering pipelines that use modern Spark-based data processing technologies such as Databricks and Delta Live Tables (DLT); a minimal DLT sketch follows this listing.
- Integrate data engineering solutions with Azure data governance components, including Purview and Databricks Unity Catalog.
- Implement and support security monitoring solutions within the Azure Databricks ecosystem.
- Design, implement, and support data monitoring solutions in data analytical workspaces.
- Configure and deploy Databricks analytical workspaces in Azure with IaC (Terraform, Databricks API), using J&J DevOps automation tools within the JPM/Xena framework.
- Implement automated CI/CD processes for data processing pipelines.
- Support DataOps for the distributed DSP data architecture.
- Serve as a data engineering SME within the data platform.
- Manage authoring and execution of automated test scripts.
- Build effective partnerships with DSP architecture, core infrastructure, and other domains to design and deploy data engineering solutions.
- Work closely with DSP Product Managers to understand business needs, translate them into system requirements, and demonstrate in-depth understanding of use cases when building prototypes and solutions for data processing pipelines.
- Operate by SAFe Agile DevOps principles and methodology in building quality DSP technical solutions.
- Author and implement automated test scripts as mandated by DSP quality requirements.

Qualifications
Required:
- Bachelor's degree or equivalent experience in software, computer science, or data engineering.
- 8+ years of overall IT experience.
- 5-7 years of experience in cloud computing and data systems.
- Advanced Python programming skills.
- Expert level in Azure Databricks Spark technology and data engineering (Python), including Delta Live Tables (DLT).
- Experience in design and implementation of secure Azure data solutions.
- In-depth knowledge of data architecture infrastructure, network components, and data processing.
- Proficiency in building data pipelines in Azure Databricks.
- Proficiency in configuration and administration of Azure Databricks workspaces and Databricks Unity Catalog.
- Deep understanding of the principles of the modern data lakehouse.
- Deep understanding of Azure system capabilities and data services, and the ability to implement security controls.
- Proficiency with enterprise DevOps tools, including Bitbucket, Jenkins, and Artifactory.
- Experience with DataOps.
- Experience with quality software systems.
- Deep understanding of and experience in SAFe Agile.
- Understanding of the SDLC.

Preferred:
- Master's degree or equivalent.
- Proven healthcare experience.
- Azure Databricks certification.
- Ability to analyze use cases, translate them into system requirements, and make data-driven decisions.
- Experience with DevOps automation tools within the JPM/Xena framework.
- Expertise in automated testing.
- Experience in AI and ML.
- Excellent verbal and written communication skills.
- Ability to travel domestically up to 10% is required.

Johnson & Johnson is an Affirmative Action and Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, age, national origin, or protected veteran status and will not be discriminated against on the basis of disability.
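To illustrate the Delta Live Tables pipelines named in the responsibilities, a minimal DLT sketch; the landing path, table names, and expectation rule are hypothetical, and `spark` is provided by the DLT runtime rather than imported:

```python
# Minimal Delta Live Tables sketch (runs inside a Databricks DLT pipeline).
# The landing path, table names, and expectation rule are hypothetical.
import dlt
from pyspark.sql import functions as F


@dlt.table(comment="Raw events ingested from cloud storage via Auto Loader")
def events_raw():
    return (
        spark.readStream.format("cloudFiles")  # `spark` is injected by the DLT runtime
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/events/")          # hypothetical landing path
    )


@dlt.table(comment="Validated events with an ingestion timestamp")
@dlt.expect_or_drop("valid_event_id", "event_id IS NOT NULL")
def events_clean():
    # Rows failing the expectation above are dropped and counted in pipeline metrics.
    return dlt.read_stream("events_raw").withColumn("ingested_at", F.current_timestamp())
```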
Posted 2 weeks ago
1.0 - 3.0 years
7 - 10 Lacs
Mysuru
Work from Office
Technical Skills Required:
- ETL Concepts: Strong understanding of Extract, Transform, Load (ETL) processes; ability to design, develop, and maintain robust ETL pipelines.
- Database Fundamentals: Proficiency with relational databases (e.g., MySQL, PostgreSQL, Oracle, or MS SQL Server); knowledge of database design and optimization techniques.
- Basic Data Visualization: Ability to create simple dashboards or reports using visualization tools (e.g., Tableau, Power BI, or similar).
- Query Optimization: Expertise in writing efficient, optimized queries to handle large datasets.
- Testing and Documentation: Experience validating data accuracy and integrity through rigorous testing (a minimal data-quality sketch follows this listing); ability to document data workflows, processes, and technical specifications clearly.

Key Responsibilities:
- Data Engineering: Design, develop, and implement scalable data pipelines to support business needs; ensure data quality and integrity through testing and monitoring; optimize ETL processes for performance and reliability.
- Database Management: Manage and maintain databases, ensuring high availability and security; troubleshoot database-related issues and optimize performance.
- Collaboration: Work closely with data analysts, data scientists, and other stakeholders to understand and deliver on data requirements; support data-related technical issues and propose solutions.
- Documentation and Reporting: Create and maintain comprehensive documentation for data workflows and technical processes; develop simple reports or dashboards to visualize key metrics and trends.
- Learning and Adapting: Stay updated with new tools, technologies, and methodologies in data engineering; adapt quickly to new challenges and project requirements.

Additional Requirements:
- Strong communication skills, both written and verbal.
- Analytical mindset with the ability to solve complex data problems.
- Quick learner, willing to adopt new tools and technologies as needed.
- Flexibility to work in shifts, if required.

Preferred Skills (Not Mandatory):
- Experience with cloud platforms (e.g., AWS, Azure, or GCP).
- Familiarity with big data technologies such as Hadoop or Spark.
- Basic understanding of machine learning concepts and data science workflows.

Mandatory Key Skills: Python Programming, ETL Concepts, Database Management, Query Optimization, Data Visualization, Cloud Platform, AWS, Azure, GCP, Advanced Python, Tableau, Power BI, SQL
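Since this posting emphasizes validating data accuracy through testing, a minimal data-quality check sketch, written as SQL run from Python; the table and the rules are hypothetical:

```python
# Minimal data-quality check sketch. Table and rules are hypothetical.
import sqlite3

# Each query counts rule violations; 0 means the check passed.
CHECKS = {
    "no_null_ids": "SELECT COUNT(*) FROM orders_clean WHERE id IS NULL",
    "no_negative_amounts": "SELECT COUNT(*) FROM orders_clean WHERE amount < 0",
    "unique_ids": """
        SELECT COUNT(*) FROM (
            SELECT id FROM orders_clean GROUP BY id HAVING COUNT(*) > 1
        )
    """,
}


def run_checks(conn: sqlite3.Connection) -> dict[str, int]:
    return {name: conn.execute(sql).fetchone()[0] for name, sql in CHECKS.items()}


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders_clean (id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO orders_clean VALUES (?, ?)",
                     [(1, 10.0), (1, 5.0), (None, 3.0)])
    failures = {k: v for k, v in run_checks(conn).items() if v}
    print(failures)  # -> {'no_null_ids': 1, 'unique_ids': 1}
```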
Posted 4 weeks ago
- 1 years
4 - 6 Lacs
Pune
Work from Office
Role & Responsibilities:
Python Developers are software engineers who specialize in using the Python programming language to develop and maintain software applications. As a Python Developer, you'll be responsible for writing, testing, and debugging Python code to create applications that run on various platforms, such as web browsers or mobile devices. You will work closely with other developers, designers, and project managers to deliver software projects on time and within budget. You'll need a strong understanding of the Python programming language, as well as knowledge of software development methodologies such as Agile or Waterfall. To excel in this role, you must also have excellent problem-solving skills and attention to detail, as even the smallest error can cause problems in the software. Furthermore, Python Developers must have strong communication skills to work in teams or with clients to discuss project requirements.

Your Tasks:
- Create large-scale data processing pipelines to help developers build and train novel machine learning algorithms (a minimal sketch follows this listing).
- Participate in code reviews, ensure code quality, and identify areas for improvement to implement practical solutions.
- Debug code when required and troubleshoot any Python-related queries.
- Keep up to date with emerging trends and technologies in Python development.

Required Skills and Qualifications:
- 3+ years of experience as a Python Developer with a strong portfolio of projects.
- Bachelor's degree in Computer Science, Software Engineering, or a related field.
- In-depth understanding of the Python software development stack, ecosystem, frameworks, and tools such as NumPy, SciPy, pandas, Dask, spaCy, NLTK, scikit-learn, and PyTorch.
- Experience with front-end development using HTML, CSS, and JavaScript.
- Familiarity with database technologies such as SQL and NoSQL.
- Excellent problem-solving ability with solid communication and collaboration skills.

Preferred Skills and Qualifications:
- Experience with popular Python frameworks such as Django, Flask, or Pyramid.
- Knowledge of data science and machine learning concepts and tools.
- A working understanding of cloud platforms such as AWS, Google Cloud, or Azure.
- Contributions to open-source Python projects or active involvement in the Python community.
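As a small illustration of the data-processing-for-ML pipelines this role mentions, a pandas + scikit-learn sketch; the dataset is synthetic and the features, label rule, and model choice are illustrative only:

```python
# Minimal data-prep + training sketch (pandas + scikit-learn).
# The dataset is synthetic; features, label rule, and model are illustrative.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "sessions": rng.integers(1, 50, size=500),
    "avg_minutes": rng.uniform(0.5, 30.0, size=500),
})
# Hypothetical label: heavy users (many long sessions) convert.
df["converted"] = ((df["sessions"] * df["avg_minutes"]) > 300).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df[["sessions", "avg_minutes"]], df["converted"], test_size=0.2, random_state=0
)
model = LogisticRegression().fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```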
Posted 1 month ago