2.0 - 7.0 years
8 - 18 Lacs
Pune, Sonipat
Work from Office
About the Role
Overview: Newton School of Technology is on a mission to transform technology education and bridge the employability gap. As India's first impact university, we are committed to revolutionizing learning, empowering students, and shaping the future of the tech industry. Backed by renowned professionals and industry leaders, we aim to solve the employability challenge and create a lasting impact on society. We are currently looking for a Data Engineer + Associate Instructor (Data Mining) to join our Computer Science Department. This is a full-time academic role focused on data mining, analytics, and teaching/mentoring students in core data science and engineering topics.
Key Responsibilities:
Develop and deliver comprehensive and engaging lectures for the undergraduate "Data Mining", "Big Data", and "Data Analytics" courses, covering the full syllabus from foundational concepts to advanced techniques.
Instruct students on the complete data lifecycle, including data preprocessing, cleaning, transformation, and feature engineering.
Teach the theory, implementation, and evaluation of a wide range of algorithms for Classification, Association rule mining, Clustering, and Anomaly Detection.
Design and facilitate practical lab sessions and assignments that provide students with hands-on experience using modern data tools and software.
Develop and grade assessments, including assignments, projects, and examinations, that effectively measure the Course Learning Objectives (CLOs).
Mentor and guide students on projects, encouraging them to work with real-world or benchmark datasets (e.g., from Kaggle).
Stay current with the latest advancements, research, and industry trends in data engineering and machine learning to ensure the curriculum remains relevant and cutting-edge.
Contribute to the academic and research environment of the department and the university.
Required Qualifications:
A Ph.D. (or a Master's degree with significant, relevant industry experience) in Computer Science, Data Science, Artificial Intelligence, or a closely related field.
Demonstrable expertise in the core concepts of data engineering and machine learning as outlined in the syllabus.
Strong practical proficiency in Python and its data science ecosystem, specifically Scikit-learn, Pandas, NumPy, and visualization libraries (e.g., Matplotlib, Seaborn).
Proven experience in teaching, preferably at the undergraduate level, with an ability to make complex topics accessible and engaging.
Excellent communication and interpersonal skills.
Preferred Qualifications:
A strong record of academic publications in reputable data mining, machine learning, or AI conferences/journals.
Prior industry experience as a Data Scientist, Big Data Engineer, Machine Learning Engineer, or in a similar role.
Experience with big data technologies (e.g., Spark, Hadoop) and/or deep learning frameworks (e.g., TensorFlow, PyTorch).
Experience in mentoring student teams for data science competitions or hackathons.
Perks & Benefits:
Competitive salary packages aligned with industry standards.
Access to state-of-the-art labs and classroom facilities.
To know more about us, feel free to explore our website: Newton School of Technology.
We look forward to the possibility of having you join our academic team and help shape the future of tech education!
Posted 3 days ago
3.0 - 4.0 years
4 - 9 Lacs
Hyderābād
On-site
Job Title: Senior Python Developer – Trading Systems & Market Data Experience: 3–4 Years Location: Hyderabad, Telangana (On-site) Employment Type: Full-Time About the Role: We are seeking a Senior Python Developer with 3–4 years of experience and a strong understanding of stock market dynamics, technical indicators, and trading systems. You’ll take ownership of backtesting frameworks, strategy optimization, and developing high-performance, production-ready trading modules. The ideal candidate is someone who can think critically about trading logic, handle edge cases with precision, and write clean, scalable, and testable code. You should be comfortable working in a fast-paced, data-intensive environment where accuracy and speed are key. Key Responsibilities: Design and maintain robust backtesting and live trading frameworks. Build modules for strategy development, simulation, and optimization. Integrate with real-time and historical market data sources (e.g., APIs, databases). Use libraries like Pandas, NumPy, TA-Lib, Matplotlib, SciPy, etc., for data processing and signal generation. Apply statistical methods to validate strategies (mean, regression, correlation, standard deviation, etc.). Optimize code for low-latency execution and memory efficiency. Collaborate with traders and quants to implement and iterate on ideas. Use Git and manage codebases with best practices (unit testing, modular design, etc.). Required Skills & Qualifications: 3–4 years of Python development experience, especially in data-intensive environments. Strong understanding of algorithms, data structures, and performance optimization. Hands-on with technical indicators, trading strategy design, and data visualization. Proficient with Pandas, NumPy, Matplotlib, SciPy, TA-Lib, etc. Strong SQL skills and experience working with structured and time-series data. Exposure to REST APIs, data ingestion pipelines, and message queues (e.g., Kafka, RabbitMQ) is a plus. Experience in version control systems (Git) and collaborative development workflows. Preferred Experience: Hands-on experience with trading platforms or algorithmic trading systems. Familiarity with order management systems (OMS), execution logic, or market microstructure. Prior work with cloud infrastructure (AWS, GCP) or Docker/Kubernetes. Knowledge of machine learning or reinforcement learning in financial contexts is a bonus. What You’ll Get: Opportunity to work on real-world trading systems with measurable impact. A collaborative and fast-paced environment. A role where your ideas directly translate to production and trading performance. Job Type: Full-time Pay: ₹400,000.00 - ₹900,000.00 per year Location Type: In-person Schedule: Day shift Work Location: In person
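The backtesting work this role describes centres on vectorized pandas/NumPy computation. As a rough illustration only, here is a minimal sketch of a moving-average crossover backtest on a simulated price series; the parameters and the random data are invented, and a real framework would add costs, slippage, and position sizing.

```python
# Minimal sketch of a vectorized backtest for a moving-average crossover
# strategy, assuming daily close prices are available as a pandas Series.
# The price series below is simulated purely for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
dates = pd.date_range("2023-01-01", periods=500, freq="B")
close = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, len(dates)))), index=dates)

fast = close.rolling(20).mean()   # fast moving average
slow = close.rolling(50).mean()   # slow moving average

# Long when the fast MA is above the slow MA; shift by one bar so the
# signal is only acted on at the next bar (avoids look-ahead bias).
position = (fast > slow).astype(int).shift(1).fillna(0)

daily_returns = close.pct_change().fillna(0)
strategy_returns = position * daily_returns

equity_curve = (1 + strategy_returns).cumprod()
sharpe = np.sqrt(252) * strategy_returns.mean() / strategy_returns.std()

print(f"Total return: {equity_curve.iloc[-1] - 1:.2%}")
print(f"Annualised Sharpe (no costs): {sharpe:.2f}")
```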
Posted 3 days ago
3.0 years
5 - 6 Lacs
Gurgaon
On-site
Gurgaon | 3+ Years | Full Time
We are looking for a technically adept and instructionally strong AI Developer with core expertise in Python, Large Language Models (LLMs), prompt engineering, and vector search frameworks such as FAISS, LlamaIndex, or RAG-based architectures. The ideal candidate combines solid foundations in data science, statistics, and machine learning development with a hands-on understanding of ML DevOps, model selection, and deployment pipelines. 3–4 years of experience in applied machine learning or AI development, including at least 1–2 years working with LLMs, prompt engineering, or vector search systems.
Core Skills Required:
Python: Advanced-level expertise in scripting, data manipulation, and model development
LLMs (Large Language Models): Practical experience with GPT, LLaMA, Mistral, or open-source transformer models
Prompt Engineering: Ability to design, optimize, and instruct on prompt patterns for various use cases
Vector Search & RAG: Understanding of feature vectors, nearest neighbor search, and retrieval-augmented generation (RAG) using tools like FAISS, Pinecone, Chroma, or Weaviate
LlamaIndex: Experience building AI applications using LlamaIndex, including indexing documents and building query pipelines
Rack Knowledge: Familiarity with rack architecture, model placement, and scaling on distributed hardware
ML / ML DevOps: Knowledge of the full ML lifecycle including feature engineering, model selection, training, and deployment
Data Science & Statistics: Solid grounding in statistical modeling, hypothesis testing, probability, and computing concepts
Responsibilities:
Design and develop AI pipelines using LLMs and traditional ML models
Build, fine-tune, and evaluate large language models for various NLP tasks
Design prompts and RAG-based systems to optimize output relevance and factual grounding
Implement and deploy vector search systems integrated with document knowledge bases
Select appropriate models based on data and business requirements
Perform data wrangling, feature extraction, and model training
Develop training material, internal documentation, and course content (especially around Python and AI development using LlamaIndex)
Work with DevOps to deploy AI solutions efficiently using containers, CI/CD, and cloud infrastructure
Collaborate with data scientists and stakeholders to build scalable, interpretable solutions
Maintain awareness of emerging tools and practices in AI and ML ecosystems
Preferred Tools & Stack:
Languages: Python, SQL
ML Frameworks: Scikit-learn, PyTorch, TensorFlow, Hugging Face Transformers
Vector DBs: FAISS, Pinecone, Chroma, Weaviate
RAG Tools: LlamaIndex, LangChain
ML Ops: MLflow, DVC, Docker, Kubernetes, GitHub Actions
Data Tools: Pandas, NumPy, Jupyter
Visualization: Matplotlib, Seaborn, Streamlit
Cloud: AWS/GCP/Azure (S3, Lambda, Vertex AI, SageMaker)
Ideal Candidate:
Background in Data Science, Statistics, or Computing
Passionate about emerging AI tech, LLMs, and real-world applications
Demonstrates both hands-on coding skills and teaching/instructional abilities
Capable of building reusable, explainable AI solutions
Location: Gurgaon, Sector 49
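For context on the vector-search piece of this stack, here is a minimal retrieval sketch using FAISS for exact nearest-neighbour search. TF-IDF vectors stand in for a real embedding model purely to keep the example self-contained, and the documents and query are invented; a production RAG pipeline (for example via LlamaIndex or LangChain) would use LLM embeddings and feed the retrieved passages into the prompt.

```python
# Minimal sketch of the retrieval step in a RAG pipeline using FAISS.
# TF-IDF vectors are a stand-in for real embeddings; documents are invented.
import numpy as np
import faiss
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "Invoices must be submitted within 30 days of delivery.",
    "The refund policy covers unopened items returned within 14 days.",
    "Support tickets are triaged by severity and customer tier.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents).toarray().astype("float32")

index = faiss.IndexFlatL2(doc_vectors.shape[1])   # exact L2 search
index.add(doc_vectors)

query = "How long do customers have to return a product?"
query_vec = vectorizer.transform([query]).toarray().astype("float32")

distances, ids = index.search(query_vec, 2)       # top-2 nearest documents
for rank, doc_id in enumerate(ids[0]):
    print(f"{rank + 1}. {documents[doc_id]} (distance={distances[0][rank]:.3f})")

# The retrieved passages would then be placed into the LLM prompt
# to ground the generated answer.
```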
Posted 3 days ago
0 years
10 - 30 Lacs
Sonipat
Remote
Newton School of Technology is on a mission to transform technology education and bridge the employability gap. As India's first impact university, we are committed to revolutionizing learning, empowering students, and shaping the future of the tech industry. Backed by renowned professionals and industry leaders, we aim to solve the employability challenge and create a lasting impact on society. We are currently looking for a Data Mining Engineer to join our Computer Science Department. This is a full-time academic role focused on data mining, analytics, and teaching/mentoring students in core data science and engineering topics.
Key Responsibilities:
● Develop and deliver comprehensive and engaging lectures for the undergraduate "Data Mining", "Big Data", and "Data Analytics" courses, covering the full syllabus from foundational concepts to advanced techniques.
● Instruct students on the complete data lifecycle, including data preprocessing, cleaning, transformation, and feature engineering.
● Teach the theory, implementation, and evaluation of a wide range of algorithms for Classification, Association rule mining, Clustering, and Anomaly Detection.
● Design and facilitate practical lab sessions and assignments that provide students with hands-on experience using modern data tools and software.
● Develop and grade assessments, including assignments, projects, and examinations, that effectively measure the Course Learning Objectives (CLOs).
● Mentor and guide students on projects, encouraging them to work with real-world or benchmark datasets (e.g., from Kaggle).
● Stay current with the latest advancements, research, and industry trends in data engineering and machine learning to ensure the curriculum remains relevant and cutting-edge.
● Contribute to the academic and research environment of the department and the university.
Required Qualifications:
● A Ph.D. (or a Master's degree with significant, relevant industry experience) in Computer Science, Data Science, Artificial Intelligence, or a closely related field.
● Demonstrable expertise in the core concepts of data engineering and machine learning as outlined in the syllabus.
● Strong practical proficiency in Python and its data science ecosystem, specifically Scikit-learn, Pandas, NumPy, and visualization libraries (e.g., Matplotlib, Seaborn).
● Proven experience in teaching, preferably at the undergraduate level, with an ability to make complex topics accessible and engaging.
● Excellent communication and interpersonal skills.
Preferred Qualifications:
● A strong record of academic publications in reputable data mining, machine learning, or AI conferences/journals.
● Prior industry experience as a Data Scientist, Big Data Engineer, Machine Learning Engineer, or in a similar role.
● Experience with big data technologies (e.g., Spark, Hadoop) and/or deep learning frameworks (e.g., TensorFlow, PyTorch).
● Experience in mentoring student teams for data science competitions or hackathons.
Perks & Benefits:
● Competitive salary packages aligned with industry standards.
● Access to state-of-the-art labs and classroom facilities.
● To know more about us, feel free to explore our website: Newton School of Technology.
We look forward to the possibility of having you join our academic team and help shape the future of tech education!
Job Type: Full-time
Pay: ₹1,000,000.00 - ₹3,000,000.00 per year
Benefits: Food provided, health insurance, leave encashment, paid sick time, paid time off, Provident Fund, work from home
Schedule: Day shift, Monday to Friday
Supplemental Pay: Performance bonus, quarterly bonus, yearly bonus
Application Question(s):
Are you interested in a full-time onsite Instructor role?
Are you ready to relocate to Sonipat - NCR Delhi?
Are you ready to relocate to Pune?
Work Location: In person
Expected Start Date: 15/07/2025
Posted 3 days ago
1.0 - 4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Title: Bioinformatician
Date: 20 Jun 2025
Job Location: Bangalore
About Syngene: Syngene (www.syngeneintl.com) is an innovation-led contract research, development and manufacturing organization offering integrated scientific services from early discovery to commercial supply. At Syngene, safety is at the heart of everything we do, personally and professionally. Syngene has placed safety at par with business performance, with shared responsibility and accountability, including following safety guidelines, procedures, and SOPs, in letter and spirit.
Overall adherence to safe practices and procedures of oneself and the teams aligned
Contributing to the development of procedures, practices and systems that ensure safe operations and compliance with the company's integrity & quality standards
Driving a corporate culture that promotes an environment, health, and safety (EHS) mindset and operational discipline at the workplace at all times
Ensuring safety of self, teams, and lab/plant by adhering to safety protocols and following environment, health, and safety (EHS) requirements at all times in the workplace
Ensure all assigned mandatory trainings related to data integrity, health, and safety measures are completed on time by all members of the team, including self
Compliance with Syngene's quality standards at all times
Hold self and their teams accountable for the achievement of safety goals
Govern and review safety metrics from time to time
We are seeking a highly skilled and experienced computational biologist to join our team. The ideal candidate will have a proven track record in multi-omics data analysis. They will be responsible for integrative analyses and contributing to the development of novel computational approaches to uncover biological insights.
Experience: 1-4 years
Core Purpose of the Role
To support data-driven biological research by performing computational analysis of omics data, and generating translational insights through bioinformatics tools and pipelines.
Position Responsibilities
Conduct comprehensive analyses of multi-omics datasets, including genomics, transcriptomics, proteomics, metabolomics, and epigenomics.
Develop computational workflows to integrate various -omics data to generate inferences and hypotheses for testing.
Conduct differential expression and functional enrichment analyses.
Implement and execute data processing workflows and automate the pipelines with best practices for version control, modularization, and documentation.
Apply advanced multivariate data analysis techniques, including regression, clustering, and dimensionality reduction, to uncover patterns and relationships in large datasets.
Collaborate with researchers, scientists, and other team members to translate computational findings into actionable biological insights.
Educational Qualifications
Master's degree in Bioinformatics.
Mandatory Technical Skills
Programming: Proficiency in Python for data analysis, visualization, and pipeline development.
Multi-omics analysis: Proven experience in analyzing and integrating multi-omics datasets.
Statistics: Knowledge of probability distributions, correlation analysis, and hypothesis testing.
Data visualization: Strong understanding of data visualization techniques and tools (e.g., ggplot2, matplotlib, seaborn).
Preferred
Machine learning: Familiarity with AI/ML concepts
Behavioral Skills
Excellent communication skills, objective thinking, problem solving, proactivity
Syngene Values
All employees will consistently demonstrate alignment with our core values: Excellence, Integrity, Professionalism.
Equal Opportunity Employer
It is the policy of Syngene to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by applicable legislation or local law. In addition, Syngene will provide reasonable accommodations for qualified individuals with disabilities.
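As a rough illustration of the unsupervised analyses mentioned in this posting (dimensionality reduction and clustering of omics data), the sketch below runs PCA and k-means on a simulated samples-by-genes expression matrix; the data, group structure, and parameter choices are invented.

```python
# Minimal sketch: dimensionality reduction + clustering of a simulated
# samples-by-genes expression matrix. Real work would start from a
# processed omics dataset instead of random numbers.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 60 samples x 500 genes, with two artificial sample groups
group_a = rng.normal(0.0, 1.0, size=(30, 500))
group_b = rng.normal(1.5, 1.0, size=(30, 500))
expression = np.vstack([group_a, group_b])

scaled = StandardScaler().fit_transform(expression)

pca = PCA(n_components=10)
components = pca.fit_transform(scaled)
print("Variance explained by first 2 PCs:", pca.explained_variance_ratio_[:2].round(3))

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(components)
print("Samples per cluster:", np.bincount(clusters))
```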
Posted 3 days ago
2.0 - 3.0 years
0 Lacs
Greater Chennai Area
On-site
Roles & Responsibilities:
Impart training and monitor the student life cycle to ensure standard outcomes.
Conduct live in-person/virtual classes to train learners on Advanced Excel, Power BI, Python, and advanced Python libraries such as NumPy, Matplotlib, Pandas, Seaborn, and SciPy, along with SQL/MySQL, data analysis, and basic statistics.
Facilitate and support learners' progress/journey to deliver a personalized blended learning experience and achieve the desired skill outcomes.
Evaluate and grade learners' project reports, project presentations, and other documents.
Mentor learners during support, project, and assessment sessions.
Develop, validate, and implement learning content, curriculum, and training programs whenever applicable.
Liaise with and support respective teams on schedule planning, learner progress, academic evaluation, learning management, etc.
Desired profile:
2-3 years of technical training experience in a corporate or ed-tech institute (not college lecturer or school teacher profiles).
Must be proficient in Advanced Excel, Power BI, Python, and advanced Python libraries such as NumPy, Matplotlib, Pandas, SciPy, and Seaborn, along with SQL/MySQL, data analysis, and basic statistics.
Experience in training in data analysis; should have worked as a Data Analyst.
Must have good analysis and problem-solving skills.
Must have good communication and delivery skills.
Good knowledge of databases (SQL, MySQL).
Additional Advantage: Knowledge of Flask, Core Java
Posted 4 days ago
0 years
0 Lacs
India
Remote
Data Science Intern Company: INLIGHN TECH Location: Remote (100% Virtual) Duration: 3 Months Stipend for Top Interns: ₹15,000 Certificate Provided | Letter of Recommendation | Full-Time Offer Based on Performance About the Company: INLIGHN TECH empowers students and fresh graduates with real-world experience through hands-on, project-driven internships. The Data Science Internship is designed to equip you with the skills required to extract insights, build predictive models, and solve complex problems using data. Role Overview: As a Data Science Intern, you will work on real-world datasets to develop machine learning models, perform data wrangling, and generate actionable insights. This internship will help you strengthen your technical foundation in data science while working on projects that have a tangible business impact. Key Responsibilities: Collect, clean, and preprocess data from various sources Apply statistical methods and machine learning techniques to extract insights Build and evaluate predictive models for classification, regression, or clustering tasks Visualize data using libraries like Matplotlib, Seaborn, or tools like Power BI Document findings and present results to stakeholders in a clear and concise manner Collaborate with team members on data-driven projects and innovations Qualifications: Pursuing or recently completed a degree in Data Science, Computer Science, Mathematics, or a related field Proficiency in Python and data science libraries (NumPy, Pandas, Scikit-learn, etc.) Understanding of statistical analysis and machine learning algorithms Familiarity with SQL and data visualization tools or libraries Strong analytical, problem-solving, and critical thinking skills Eagerness to learn and apply data science techniques to solve real-world problems Internship Benefits: Hands-on experience with real datasets and end-to-end data science projects Certificate of Internship upon successful completion Letter of Recommendation for top performers Build a strong portfolio of data science projects and models
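For a concrete sense of the "build and evaluate predictive models" responsibility, a minimal scikit-learn classification pipeline is sketched below; it uses a bundled demo dataset so it runs end to end, whereas an intern project would substitute its own data.

```python
# Minimal sketch of training and evaluating a classifier with scikit-learn.
# The bundled breast-cancer dataset is used only so the example is self-contained.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Scale features, then fit a simple linear classifier.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Precision, recall, and F1 for each class on the held-out test set.
print(classification_report(y_test, model.predict(X_test)))
```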
Posted 4 days ago
0.0 - 4.0 years
4 - 9 Lacs
Hyderabad, Telangana
On-site
Job Title: Senior Python Developer – Trading Systems & Market Data Experience: 3–4 Years Location: Hyderabad, Telangana (On-site) Employment Type: Full-Time About the Role: We are seeking a Senior Python Developer with 3–4 years of experience and a strong understanding of stock market dynamics, technical indicators, and trading systems. You’ll take ownership of backtesting frameworks, strategy optimization, and developing high-performance, production-ready trading modules. The ideal candidate is someone who can think critically about trading logic, handle edge cases with precision, and write clean, scalable, and testable code. You should be comfortable working in a fast-paced, data-intensive environment where accuracy and speed are key. Key Responsibilities: Design and maintain robust backtesting and live trading frameworks. Build modules for strategy development, simulation, and optimization. Integrate with real-time and historical market data sources (e.g., APIs, databases). Use libraries like Pandas, NumPy, TA-Lib, Matplotlib, SciPy, etc., for data processing and signal generation. Apply statistical methods to validate strategies (mean, regression, correlation, standard deviation, etc.). Optimize code for low-latency execution and memory efficiency. Collaborate with traders and quants to implement and iterate on ideas. Use Git and manage codebases with best practices (unit testing, modular design, etc.). Required Skills & Qualifications: 3–4 years of Python development experience, especially in data-intensive environments. Strong understanding of algorithms, data structures, and performance optimization. Hands-on with technical indicators, trading strategy design, and data visualization. Proficient with Pandas, NumPy, Matplotlib, SciPy, TA-Lib, etc. Strong SQL skills and experience working with structured and time-series data. Exposure to REST APIs, data ingestion pipelines, and message queues (e.g., Kafka, RabbitMQ) is a plus. Experience in version control systems (Git) and collaborative development workflows. Preferred Experience: Hands-on experience with trading platforms or algorithmic trading systems. Familiarity with order management systems (OMS), execution logic, or market microstructure. Prior work with cloud infrastructure (AWS, GCP) or Docker/Kubernetes. Knowledge of machine learning or reinforcement learning in financial contexts is a bonus. What You’ll Get: Opportunity to work on real-world trading systems with measurable impact. A collaborative and fast-paced environment. A role where your ideas directly translate to production and trading performance. Job Type: Full-time Pay: ₹400,000.00 - ₹900,000.00 per year Location Type: In-person Schedule: Day shift Work Location: In person
Posted 4 days ago
4.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
We are looking for a technically adept and instructionally strong AI Developer with core expertise in Python, Large Language Models (LLMs), prompt engineering, and vector search frameworks such as FAISS, LlamaIndex, or RAG-based architectures. The ideal candidate combines solid foundations in data science, statistics, and machine learning development with a hands-on understanding of ML DevOps, model selection, and deployment pipelines. 3–4 years of experience in applied machine learning or AI development, including at least 1–2 years working with LLMs, prompt engineering, or vector search systems.
Core Skills Required
Python: Advanced-level expertise in scripting, data manipulation, and model development
LLMs (Large Language Models): Practical experience with GPT, LLaMA, Mistral, or open-source transformer models
Prompt Engineering: Ability to design, optimize, and instruct on prompt patterns for various use cases
Vector Search & RAG: Understanding of feature vectors, nearest neighbor search, and retrieval-augmented generation (RAG) using tools like FAISS, Pinecone, Chroma, or Weaviate
LlamaIndex: Experience building AI applications using LlamaIndex, including indexing documents and building query pipelines
Rack Knowledge: Familiarity with rack architecture, model placement, and scaling on distributed hardware
ML / ML DevOps: Knowledge of the full ML lifecycle including feature engineering, model selection, training, and deployment
Data Science & Statistics: Solid grounding in statistical modeling, hypothesis testing, probability, and computing concepts
Responsibilities
Design and develop AI pipelines using LLMs and traditional ML models
Build, fine-tune, and evaluate large language models for various NLP tasks
Design prompts and RAG-based systems to optimize output relevance and factual grounding
Implement and deploy vector search systems integrated with document knowledge bases
Select appropriate models based on data and business requirements
Perform data wrangling, feature extraction, and model training
Develop training material, internal documentation, and course content (especially around Python and AI development using LlamaIndex)
Work with DevOps to deploy AI solutions efficiently using containers, CI/CD, and cloud infrastructure
Collaborate with data scientists and stakeholders to build scalable, interpretable solutions
Maintain awareness of emerging tools and practices in AI and ML ecosystems
Preferred Tools & Stack
Languages: Python, SQL
ML Frameworks: Scikit-learn, PyTorch, TensorFlow, Hugging Face Transformers
Vector DBs: FAISS, Pinecone, Chroma, Weaviate
RAG Tools: LlamaIndex, LangChain
ML Ops: MLflow, DVC, Docker, Kubernetes, GitHub Actions
Data Tools: Pandas, NumPy, Jupyter
Visualization: Matplotlib, Seaborn, Streamlit
Cloud: AWS/GCP/Azure (S3, Lambda, Vertex AI, SageMaker)
Ideal Candidate
Background in Data Science, Statistics, or Computing
Passionate about emerging AI tech, LLMs, and real-world applications
Demonstrates both hands-on coding skills and teaching/instructional abilities
Capable of building reusable, explainable AI solutions
Location: Gurgaon, Sector 49
Posted 4 days ago
5.0 years
0 Lacs
Greater Kolkata Area
On-site
Title: Lead Data Scientist/ML Engineer (5+ years & above)
Required Technical Skillset (GenAI):
Language: Python
Frameworks: Scikit-learn, TensorFlow, Keras, PyTorch
Libraries: NumPy, Pandas (DataFrame), Matplotlib, SciPy, boto3
Database: Relational database (Postgres), NoSQL database (MongoDB)
Cloud: AWS cloud platform
Other Tools: Jenkins, Bitbucket, JIRA, Confluence
A machine learning engineer is responsible for designing, implementing, and maintaining machine learning systems and algorithms that allow computers to learn from and make predictions or decisions based on data. The role typically involves working with data scientists and software engineers to build and deploy machine learning models in a variety of applications such as natural language processing, computer vision, and recommendation systems.
The key responsibilities of a machine learning engineer include:
Collecting and preprocessing large volumes of data, cleaning it up, and transforming it into a format that can be used by machine learning models.
Model building, which includes designing and building machine learning models and algorithms using techniques such as supervised and unsupervised learning, deep learning, and reinforcement learning.
Evaluating the performance of machine learning models using metrics such as accuracy, precision, recall, and F1 score.
Deploying machine learning models in production environments and integrating them into existing systems using CI/CD pipelines and AWS SageMaker.
Monitoring the performance of machine learning models and making adjustments as needed to improve their accuracy and efficiency.
Working closely with software engineers, product managers and other stakeholders to ensure that machine learning models meet business requirements and deliver value to the organization.
Requirements And Skills
Mathematics and Statistics: A strong foundation in mathematics and statistics is essential. They need to be familiar with linear algebra, calculus, probability, and statistics to understand the underlying principles of machine learning algorithms.
Programming Skills: Should be proficient in programming languages such as Python. The candidate should be able to write efficient, scalable, and maintainable code to develop machine learning models and algorithms.
Machine Learning Techniques: Should have a deep understanding of various machine learning techniques, such as supervised learning, unsupervised learning, and reinforcement learning, and should also be familiar with different types of models such as decision trees, random forests, neural networks, and deep learning.
Data Analysis And Visualization: Should be able to analyze and manipulate large data sets. The candidate should be familiar with data cleaning, transformation, and visualization techniques to identify patterns and insights in the data.
Deep Learning Frameworks: Should be familiar with deep learning frameworks such as TensorFlow, PyTorch, and Keras and should be able to build and train deep neural networks for various applications.
Big Data Technologies: A machine learning engineer should have experience working with big data technologies such as Hadoop, Spark, and NoSQL databases. They should be familiar with distributed computing and parallel processing to handle large data sets.
Software Engineering: A machine learning engineer should have a good understanding of software engineering principles such as version control, testing, and debugging.
They should be able to work with software development tools such as Git, Jenkins, and Docker.
Communication And Collaboration: A machine learning engineer should have good communication and collaboration skills to work effectively with cross-functional teams such as data scientists, software developers, and business stakeholders. (ref:hirist.tech)
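As a small illustration of the model-evaluation step listed above (accuracy, precision, recall, F1), the snippet below scores a set of hard-coded predictions with scikit-learn; the labels are invented.

```python
# Minimal sketch of scoring a classifier with the metrics named in the posting.
# Ground truth and predictions are hard-coded purely to illustrate the calls.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

print(f"Accuracy : {accuracy_score(y_true, y_pred):.2f}")
print(f"Precision: {precision_score(y_true, y_pred):.2f}")
print(f"Recall   : {recall_score(y_true, y_pred):.2f}")
print(f"F1 score : {f1_score(y_true, y_pred):.2f}")
```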
Posted 4 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Python Software Development Sr. Analyst
Job Description
In these roles, you will be responsible for:
Design, implement, and test generative AI models using Python and various frameworks such as Pandas, TensorFlow, PyTorch, and OpenAI.
Research and explore new techniques and applications of generative AI, such as text, image, audio, and video synthesis, style transfer, data augmentation, and anomaly detection.
Collaborate with other developers, researchers, and stakeholders to deliver high-quality and innovative solutions.
Document and communicate the results and challenges of generative AI projects.
Required Skills for this role include:
Technical skills:
3+ years of experience developing with Python frameworks such as DL, ML, and Flask.
At least 2 years of experience in developing generative AI models using Python and relevant frameworks.
Good knowledge of RPA.
Strong knowledge of machine learning, deep learning, and generative AI concepts and algorithms.
Proficient in Python and common libraries such as NumPy, Pandas, Matplotlib, and scikit-learn.
Familiar with version control, testing, debugging, and deployment tools.
Excellent communication and problem-solving skills.
Curious and eager to learn new technologies and domains.
Desired Skills:
Knowledge of Django and Web API.
Good exposure to MVC.
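For a concrete, if simplified, instance of the generative AI work described here, the sketch below generates text with a small open model through the Hugging Face pipeline API; the model choice and prompt are illustrative only, and production work would use larger models and proper evaluation.

```python
# Minimal sketch of text generation via the Hugging Face pipeline API.
# "gpt2" is chosen only because it is small and public; the prompt is invented.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Summarise the benefits of automating invoice processing:"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(outputs[0]["generated_text"])
```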
Posted 4 days ago
4.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Greetings from TCS! TCS is hiring for Azure Data Engineer.
Technical Skill Set: PySpark, Azure Data Factory, Azure Databricks.
Desired Experience Range: 4-8 years
Required Competencies:
1) Strong design and data solutioning skills
2) PySpark hands-on experience with complex transformations and large dataset handling experience
3) Good command and hands-on experience in Python. Experience working with the following concepts, packages, and tools:
a. Object-oriented and functional programming
b. NumPy, Pandas, Matplotlib, requests, pytest
c. Jupyter, PyCharm and IDLE
d. Conda and virtual environments
4) Working experience with Hive, HBase or similar is a must
5) Azure skills:
a. Must have working experience in Azure Data Lake, Azure Data Factory, Azure Databricks, Azure SQL Databases
b. Azure DevOps
c. Azure AD integration, Service Principal, pass-thru login, etc.
d. Networking – vnet, private links, service connections, etc.
e. Integrations – Event Grid, Service Bus, etc.
6) Database skills:
a. Oracle, Postgres, SQL Server – experience with any one database
b. Oracle PL/SQL or T-SQL experience
7) Data modelling
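As a rough sketch of the PySpark transformation work this posting asks for, the snippet below derives a column and aggregates a tiny in-memory DataFrame; a real Azure job would read from ADLS/Hive tables via Data Factory or Databricks instead, and the column names and values are invented.

```python
# Minimal PySpark sketch: derive a column, then aggregate by a key.
# The tiny in-memory dataset stands in for data read from ADLS/Hive.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("transform-sketch").getOrCreate()

orders = spark.createDataFrame(
    [("O1", "IN", 120.0), ("O2", "IN", 80.0), ("O3", "US", 200.0)],
    ["order_id", "country", "amount"],
)

# Derived column: apply a flat 18% tax for illustration.
with_tax = orders.withColumn("amount_with_tax", F.col("amount") * 1.18)

# Aggregate: order count and total taxed revenue per country.
summary = (
    with_tax.groupBy("country")
    .agg(F.count("*").alias("orders"), F.sum("amount_with_tax").alias("revenue"))
)
summary.show()
spark.stop()
```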
Posted 4 days ago
6.0 - 9.5 years
0 Lacs
Andhra Pradesh, India
On-site
At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. Those in software engineering at PwC will focus on developing innovative software solutions to drive digital transformation and enhance business performance. In this field, you will use your knowledge to design, code, and test cutting-edge applications that revolutionise industries and deliver exceptional user experiences. Position Title : Full Stack Lead Developer Experience : 6-9.5 Years Job Overview We are seeking a highly skilled and versatile polyglot Full Stack Developer with expertise in modern front-end and back-end technologies, cloud-based solutions, AI/ML and Gen AI. The ideal candidate will have a strong foundation in full-stack development, cloud platforms (preferably Azure), and hands-on experience in Gen AI, AI and machine learning technologies. Key Responsibilities Develop and maintain web applications using Angular/React.js, .NET, and Python. Design, deploy, and optimize Azure native PaaS and SaaS services, including but not limited to Function Apps, Service Bus, Storage Accounts, SQL Databases, Key vaults, ADF, Data Bricks and REST APIs with Open API specifications. Implement security best practices for data in transit and rest. Authentication best practices – SSO, OAuth 2.0 and Auth0. Utilize Python for developing data processing and advanced AI/ML models using libraries like pandas, NumPy, scikit-learn and Langchain, Llamaindex, Azure OpenAI SDK Leverage Agentic frameworks like Crew AI, Autogen etc. Well versed with RAG and Agentic Architecture. Strong in Design patterns – Architectural, Data, Object oriented Leverage azure serverless components to build highly scalable and efficient solutions. Create, integrate, and manage workflows using Power Platform, including Power Automate, Power Pages, and SharePoint. Apply expertise in machine learning, deep learning, and Generative AI to solve complex problems. Primary Skills Proficiency in React.js, .NET, and Python. Strong knowledge of Azure Cloud Services, including serverless architectures and data security. Experience with Python Data Analytics libraries: pandas NumPy scikit-learn Matplotlib Seaborn Experience with Python Generative AI Frameworks: Langchain LlamaIndex Crew AI AutoGen Familiarity with REST API design, Swagger documentation, and authentication best practices. Secondary Skills Experience with Power Platform tools such as Power Automate, Power Pages, and SharePoint integration. Knowledge of Power BI for data visualization (preferred). Preferred Knowledge Areas – Nice To Have In-depth understanding of Machine Learning, deep learning, supervised, un-supervised algorithms. Qualifications Bachelor's or master's degree in computer science, Engineering, or a related field. 6~12 years of hands-on experience in full-stack development and cloud-based solutions. Strong problem-solving skills and ability to design scalable, maintainable solutions. Excellent communication and collaboration skills.
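One small illustration of the "REST APIs with Open API specifications" item above: a minimal Python endpoint using FastAPI, which generates an OpenAPI spec automatically. FastAPI is an assumption here (the stack described mixes .NET and Python), and the request model and scoring logic are placeholders, not anything from the posting.

```python
# Minimal sketch of a REST endpoint with an auto-generated OpenAPI spec.
# FastAPI is one possible Python framework; the schema and logic are invented.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Scoring API", version="0.1.0")

class ScoreRequest(BaseModel):
    customer_id: str
    monthly_spend: float

@app.post("/score")
def score(req: ScoreRequest) -> dict:
    # Placeholder logic; a real service would call a trained model here.
    risk = "high" if req.monthly_spend < 100 else "low"
    return {"customer_id": req.customer_id, "risk": risk}

# Run with: uvicorn app:app --reload   (interactive OpenAPI docs served at /docs)
```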
Posted 4 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
When you join Verizon
You want more out of a career. A place to share your ideas freely — even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.
What You Will Be Doing...
The Commercial Data & Analytics - Impact Analytics team is part of the Verizon Global Services (VGS) organization. The Impact Analytics team addresses high-impact, analytically driven projects focused within three core pillars: Customer Experience, Pricing & Monetization, Network & Sustainability. In this role, you will analyze large data sets to draw insights and solutions to help drive actionable business decisions. You will also apply advanced analytical techniques and algorithms to help us solve some of Verizon's most pressing challenges.
Use your analysis of large structured and unstructured datasets to draw meaningful and actionable insights
Envision and test for corner cases.
Build analytical solutions and models by manipulating large data sets and integrating diverse data sources
Present the results and recommendations of statistical modeling and data analysis to management and other stakeholders
Lead the development and implementation of advanced reports and dashboard solutions to support business objectives.
Identify data sources and apply your knowledge of data structures, organization, transformation, and aggregation techniques to prepare data for in-depth analysis
Deeply understand business requirements and translate them into well-defined analytical problems, identifying the most appropriate statistical techniques to deliver impactful solutions.
Assist in building data views from disparate data sources which power insights and business cases
Apply statistical modeling techniques / ML to data and perform root cause analysis and forecasting
Develop and implement rigorous frameworks for effective base management.
Collaborate with cross-functional teams to discover the most appropriate data sources and fields that cater to the business needs
Design modular, reusable Python scripts to automate data processing
Clearly and effectively communicate complex statistical concepts and model results to both technical and non-technical audiences, translating your findings into actionable insights for stakeholders.
What we're looking for…
You have strong analytical skills, and are eager to work in a collaborative environment with global teams to drive ML applications in business problems, develop end-to-end analytical solutions and communicate insights and findings to leadership. You work independently and are always willing to learn new technologies. You thrive in a dynamic environment and are able to interact with various partners and cross-functional teams to implement data science driven business solutions.
You Will Need To Have
Bachelor's degree or six or more years of work experience
Six or more years of relevant work experience
Experience in managing a team of data scientists that supports a business function.
Proficiency in SQL, including writing queries for reporting, analysis and extraction of data from big data systems (Google Cloud Platform, Teradata, Spark, Splunk, etc.)
Curiosity to dive deep into data inconsistencies and perform root cause analysis
Programming experience in Python (Pandas, NumPy, SciPy and Scikit-Learn)
Experience with visualization tools such as matplotlib, seaborn, Tableau, Grafana, etc.
A deep understanding of various machine learning algorithms and techniques, including supervised and unsupervised learning
Understanding of time series modeling and forecasting techniques
Even better if you have one or more of the following:
Experience with cloud computing platforms (e.g., AWS, Azure, GCP) and deploying machine learning models at scale using platforms like Domino Data Lab or Vertex AI
Experience in applying statistical ideas and methods to data sets to answer business problems.
Ability to collaborate effectively across teams for data discovery and validation
Experience in deep learning, recommendation systems, conversational systems, information retrieval, computer vision
Expertise in advanced statistical modeling techniques, such as Bayesian inference or causal inference.
Excellent interpersonal, verbal and written communication skills.
Where you'll be working
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.
Scheduled Weekly Hours: 40
Equal Employment Opportunity
Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
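As a simplified illustration of the time-series forecasting skills this role lists, the sketch below builds a seasonal-naive baseline on a simulated daily series and scores it with MAPE; the data and the 14-day holdout are invented, and real work would benchmark richer statistical or ML models against such a baseline.

```python
# Minimal sketch of a seasonal-naive baseline forecast on a simulated series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
dates = pd.date_range("2024-01-01", periods=120, freq="D")
weekly_pattern = 100 + 20 * np.sin(2 * np.pi * dates.dayofweek.to_numpy() / 7)
demand = pd.Series(weekly_pattern + rng.normal(0, 5, len(dates)), index=dates)

train, test = demand.iloc[:-14], demand.iloc[-14:]

# Seasonal-naive forecast: each day repeats the value observed 7 days earlier.
last_week = train.iloc[-7:].to_numpy()
forecast = pd.Series(np.tile(last_week, 2), index=test.index)

mape = (abs(test - forecast) / test).mean() * 100
print(f"Seasonal-naive MAPE over the 14-day holdout: {mape:.1f}%")
```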
Posted 4 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
Remote
When you join Verizon
You want more out of a career. A place to share your ideas freely — even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.
What You Will Be Doing...
The Commercial Data & Analytics - Impact Analytics team is part of the Verizon Global Services (VGS) organization. The Impact Analytics team addresses high-impact, analytically driven projects focused within three core pillars: Customer Experience, Pricing & Monetization, Network & Sustainability. In this role, you will analyze large data sets to draw insights and solutions to help drive actionable business decisions. You will also apply advanced analytical techniques and algorithms to help us solve some of Verizon's most pressing challenges.
Use your analysis of large structured and unstructured datasets to draw meaningful and actionable insights
Envision and test for corner cases.
Build analytical solutions and models by manipulating large data sets and integrating diverse data sources
Present the results and recommendations of statistical modeling and data analysis to management and other stakeholders
Lead the development and implementation of advanced reports and dashboard solutions to support business objectives.
Identify data sources and apply your knowledge of data structures, organization, transformation, and aggregation techniques to prepare data for in-depth analysis
Deeply understand business requirements and translate them into well-defined analytical problems, identifying the most appropriate statistical techniques to deliver impactful solutions.
Assist in building data views from disparate data sources which power insights and business cases
Apply statistical modeling techniques / ML to data and perform root cause analysis and forecasting
Develop and implement rigorous frameworks for effective base management.
Collaborate with cross-functional teams to discover the most appropriate data sources and fields that cater to the business needs
Design modular, reusable Python scripts to automate data processing
Clearly and effectively communicate complex statistical concepts and model results to both technical and non-technical audiences, translating your findings into actionable insights for stakeholders.
What we're looking for…
You have strong analytical skills, and are eager to work in a collaborative environment with global teams to drive ML applications in business problems, develop end-to-end analytical solutions and communicate insights and findings to leadership. You work independently and are always willing to learn new technologies. You thrive in a dynamic environment and are able to interact with various partners and cross-functional teams to implement data science driven business solutions.
You Will Need To Have
Bachelor's degree or six or more years of work experience
Six or more years of relevant work experience
Experience in managing a team of data scientists that supports a business function.
Proficiency in SQL, including writing queries for reporting, analysis and extraction of data from big data systems (Google Cloud Platform, Teradata, Spark, Splunk, etc.)
Curiosity to dive deep into data inconsistencies and perform root cause analysis
Programming experience in Python (Pandas, NumPy, SciPy and Scikit-Learn)
Experience with visualization tools such as matplotlib, seaborn, Tableau, Grafana, etc.
A deep understanding of various machine learning algorithms and techniques, including supervised and unsupervised learning
Understanding of time series modeling and forecasting techniques
Even better if you have one or more of the following:
Experience with cloud computing platforms (e.g., AWS, Azure, GCP) and deploying machine learning models at scale using platforms like Domino Data Lab or Vertex AI
Experience in applying statistical ideas and methods to data sets to answer business problems.
Ability to collaborate effectively across teams for data discovery and validation
Experience in deep learning, recommendation systems, conversational systems, information retrieval, computer vision
Expertise in advanced statistical modeling techniques, such as Bayesian inference or causal inference.
Excellent interpersonal, verbal and written communication skills.
Where you'll be working
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.
Scheduled Weekly Hours: 40
Equal Employment Opportunity
Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
Posted 4 days ago
0 years
0 Lacs
India
Remote
Data Science Intern 📍 Location: Remote (100% Virtual) 📅 Duration: 3 Months 💸 Stipend for Top Interns: ₹15,000 🎁 Perks: Certificate | Letter of Recommendation | Full-Time Offer (Based on Performance) About INLIGHN TECH INLIGHN TECH offers hands-on, project-based virtual internships that bridge the gap between academic knowledge and industry skills. Our Data Science Internship is curated to provide aspiring data professionals with real-world exposure in data collection, analysis, modeling, and decision-making using cutting-edge tools. 🚀 Internship Overview As a Data Science Intern , you’ll work on real-time datasets , apply machine learning models , and generate actionable insights . The role is ideal for individuals looking to strengthen their understanding of data pipelines, predictive modeling, and storytelling with data . 🔧 Key Responsibilities Clean, manipulate, and analyze structured and unstructured datasets Build and evaluate machine learning models for prediction/classification Apply statistical techniques to uncover trends and insights Work with tools such as Python, Pandas, NumPy, Scikit-learn, and Jupyter Notebooks Create visualizations using Matplotlib, Seaborn, or Power BI/Tableau Collaborate with mentors and peers to solve data-driven problems Document code, findings, and processes in clear and concise reports ✅ Qualifications Pursuing or recently completed a degree in Data Science, Computer Science, Statistics, Engineering , or related fields Proficient in Python and familiar with Pandas, NumPy, and basic ML libraries Strong foundation in statistics, probability , and data visualization Understanding of supervised and unsupervised learning algorithms Eager to learn, experiment, and solve complex problems using data 🎓 What You’ll Gain Practical experience with real-world datasets and projects Exposure to machine learning workflows and tools Internship Certificate upon successful completion Letter of Recommendation for top performers Opportunity for a Full-Time Offer based on performance A portfolio of data science projects to showcase in interviews
Posted 4 days ago
0 years
0 Lacs
India
Remote
📊 Data Analyst Intern 📍 Location: Remote (100% Virtual) 📅 Duration: 3 Months 💸 Stipend for Top Interns: ₹15,000 🎁 Perks: Certificate | Letter of Recommendation | Full-Time Offer (Based on Performance) About INLIGHN TECH INLIGHN TECH is an edtech startup focused on delivering industry-aligned, project-based virtual internships . Our Data Analyst Internship is designed to equip students and recent graduates with the analytical skills and practical tools needed to work with real-world data and support business decisions. 🚀 Internship Overview As a Data Analyst Intern , you will work on live projects involving data collection, cleaning, analysis, and visualization . You will gain hands-on experience using tools like Excel, SQL, Python , and Power BI/Tableau to extract insights and create impactful reports. 🔧 Key Responsibilities Gather, clean, and organize raw data from multiple sources Perform exploratory data analysis (EDA) to uncover patterns and trends Write efficient SQL queries to retrieve and manipulate data Create interactive dashboards and visual reports using Power BI or Tableau Use Python (Pandas, NumPy, Matplotlib) for data processing and analysis Present findings and recommendations through reports and presentations Collaborate with mentors and cross-functional teams on assigned projects ✅ Qualifications Pursuing or recently completed a degree in Data Science, Computer Science, IT, Statistics, Economics , or a related field Basic knowledge of Excel, SQL , and Python Understanding of data visualization and reporting concepts Strong analytical and problem-solving skills Detail-oriented, with good communication and documentation abilities Eagerness to learn and apply analytical techniques to real business problems 🎓 What You’ll Gain Practical experience in data analysis, reporting, and business intelligence Exposure to industry tools and real-life data scenarios A portfolio of dashboards and reports to showcase in interviews Internship Certificate upon successful completion Letter of Recommendation for top performers Opportunity for a Full-Time Offer based on performance
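For a concrete feel of the EDA loop described above, the sketch below cleans a tiny in-memory sales table with pandas, aggregates it, and saves a bar chart; the records and column names are invented, and a real task would read from CSV or SQL instead.

```python
# Minimal sketch of an exploratory data analysis loop: clean, aggregate, chart.
# The in-memory records stand in for data loaded from CSV or a SQL query.
import pandas as pd
import matplotlib.pyplot as plt

sales = pd.DataFrame(
    {
        "region": ["North", "North", "South", "South", "East", None],
        "month": ["Jan", "Feb", "Jan", "Feb", "Jan", "Feb"],
        "revenue": [120, 135, 90, None, 150, 110],
    }
)

# Basic cleaning: drop rows with missing region, fill missing revenue.
clean = sales.dropna(subset=["region"]).copy()
clean["revenue"] = clean["revenue"].fillna(clean["revenue"].median())

summary = clean.groupby("region")["revenue"].sum().sort_values(ascending=False)
print(summary)

summary.plot(kind="bar", title="Revenue by region")
plt.tight_layout()
plt.savefig("revenue_by_region.png")   # would be plt.show() in a notebook
```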
Posted 4 days ago
2.0 years
0 Lacs
India
Remote
Hiring for Senior Data Scientist
Location: Madhapur (Hybrid)
Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field.
Proven experience as a Data Scientist or Data Science Trainer (2–5+ years preferred).
Proficiency in Python, R, SQL, machine learning frameworks (e.g., Scikit-learn, TensorFlow, PyTorch), and visualization tools (e.g., Matplotlib, Tableau, Power BI).
Strong communication, presentation, and public speaking skills.
Experience with LMS platforms and remote teaching tools (Zoom, Google Meet, etc.).
Job Types: Part-time, Internship
Contract length: 2 months
Expected hours: 2 per week
Schedule: Day shift
Work Location: In person
Posted 4 days ago
1.0 - 3.0 years
0 - 0 Lacs
Hyderābād
On-site
Job Information
Date Opened: 06/18/2025
Job Type: Full time
Industry: Education
Work Experience: 1-3 years
Salary: ₹20,000 - ₹30,000
City: Hyderabad
State/Province: Telangana
Country: India
Zip/Postal Code: 500001
About Us
Fireblaze AI School is a part of Fireblaze Technologies, which was started in April 2018 with a vision to up-skill and train in emerging technologies.
Mission Statement: "To Provide Measurable & Transformational Value To Learners' Career"
Vision Statement: "To Be The Most Successful & Respected Job-Oriented Training Provider Globally."
We focus widely on creating a huge digital impact; hence a strong presence on digital platforms is a must for us.
Job Description
Deliver engaging classroom and/or online training sessions on topics including:
Python for Data Science
Data Analytics using Excel and SQL
Statistics and Probability
Machine Learning and Deep Learning
Data Visualization using Power BI / Tableau
Create and update course materials, projects, assignments, and quizzes.
Provide hands-on training and real-world project guidance.
Evaluate student performance, provide constructive feedback, and track progress.
Stay updated with the latest trends, tools, and technologies in Data Science.
Mentor students during capstone projects and industry case studies.
Coordinate with the academic and operations team for batch planning and feedback.
Assist with the development of new courses and curriculum as needed.
Requirements
Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field.
Proficiency in Python, SQL, and data handling libraries (Pandas, NumPy, etc.).
Hands-on knowledge of machine learning algorithms and frameworks like Scikit-learn, TensorFlow, or Keras.
Experience with visualization tools like Power BI, Tableau, or Matplotlib/Seaborn.
Strong communication, presentation, and mentoring skills.
Prior teaching/training experience is a strong advantage.
Certification in Data Science or Machine Learning (preferred but not mandatory).
Posted 4 days ago
0 years
0 - 0 Lacs
India
On-site
Blitz Academy is seeking a skilled and passionate Data Science Trainer/Faculty to join our academic team. The ideal candidate will deliver high-quality training sessions in Data Science, Machine Learning, Python programming, and R, and actively contribute to academic and real-world development projects. This role is perfect for professionals who enjoy mentoring while keeping their technical skills sharp.
Key Responsibilities:
Training & Mentorship:
* Design and conduct interactive sessions on Python, Data Science, Machine Learning, Deep Learning, and related technologies.
* Develop curriculum, tutorials, assignments, and evaluation tools tailored for students at different learning levels.
* Offer individual mentorship and support to students to help them build strong foundational and practical knowledge.
* Stay abreast of the latest industry trends, tools, and techniques and integrate them into teaching.
Project Support:
* Contribute to internal and external data science or analytics projects.
* Guide students on project development, capstone projects, and real-world problem-solving.
Required Skills & Competencies:
* Proficient in Python programming and R.
* Hands-on experience with data science libraries: Pandas, NumPy, Scikit-learn, Matplotlib, Seaborn, etc.
* Basic exposure to TensorFlow or PyTorch.
* Strong grasp of SQL databases and data querying.
* Solid understanding of statistics, data wrangling, and data visualization techniques.
* Good knowledge of Machine Learning and Deep Learning models.
* Understanding of NLP and working with textual data.
* Proficiency in tools like Excel and Microsoft Power BI for data analysis.
* Strong communication and presentation skills.
* Ability to break down complex concepts into simple, engaging explanations.
Good to Have:
* Familiarity with cloud platforms like AWS, Azure, or GCP.
* Experience with deployment tools and model deployment techniques.
Job Types: Full-time, Permanent
Pay: ₹25,000.00 - ₹35,000.00 per month
Schedule: Day shift
Work Location: In person
Application Deadline: 22/06/2025
Expected Start Date: 25/06/2025
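As an example of the kind of hands-on classroom exercise this trainer role implies (pandas summaries plus a labelled chart), a short sketch follows; the choice of seaborn's bundled "tips" dataset is arbitrary and purely illustrative.

```python
# Minimal sketch of a classroom-style visualization exercise with pandas/seaborn.
# seaborn's bundled "tips" dataset is used only so the example is self-contained.
import seaborn as sns
import matplotlib.pyplot as plt

tips = sns.load_dataset("tips")                     # small demo dataset
print(tips.groupby("day")["total_bill"].mean().round(2))

sns.boxplot(data=tips, x="day", y="total_bill")     # distribution per weekday
plt.title("Total bill by day of week")
plt.tight_layout()
plt.savefig("total_bill_by_day.png")                # plt.show() in an interactive session
```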
Posted 4 days ago
0 years
0 - 0 Lacs
Cochin
On-site
Blitz Academy is seeking a skilled and passionate Data Science Trainer/Faculty to join our academic team. The ideal candidate will deliver high-quality training sessions in Data Science, Machine Learning, Python programming, and R, and actively contribute to academic and real-world development projects. This role is perfect for professionals who enjoy mentoring while keeping their technical skills sharp.
Key Responsibilities:
Training & Mentorship:
Design and conduct interactive sessions on Python, Data Science, Machine Learning, Deep Learning, and related technologies.
Develop curriculum, tutorials, assignments, and evaluation tools tailored for students at different learning levels.
Offer individual mentorship and support to students to help them build strong foundational and practical knowledge.
Stay abreast of the latest industry trends, tools, and techniques and integrate them into teaching.
Project Support:
Contribute to internal and external data science or analytics projects.
Guide students on project development, capstone projects, and real-world problem-solving.
Required Skills & Competencies:
Proficient in Python programming and R.
Hands-on experience with data science libraries: Pandas, NumPy, Scikit-learn, Matplotlib, Seaborn, etc.
Basic exposure to TensorFlow or PyTorch.
Strong grasp of SQL databases and data querying.
Solid understanding of statistics, data wrangling, and data visualization techniques.
Good knowledge of Machine Learning and Deep Learning models.
Understanding of NLP and working with textual data.
Proficiency in tools like Excel and Microsoft Power BI for data analysis.
Strong communication and presentation skills.
Ability to break down complex concepts into simple, engaging explanations.
Job Types: Full-time, Permanent
Pay: ₹20,000.00 - ₹30,000.00 per month
Benefits: Health insurance, Provident Fund
Schedule: Day shift
Work Location: In person
Expected Start Date: 01/07/2025
Posted 4 days ago
0 years
0 Lacs
Delhi
On-site
Job Description: Data Science Trainer (Contractual Role)
Job Title: Data Science Trainer (Contractual)
Location: Delhi
Duration: 2 months (project-based)
Working Type: Contractual
About the Role: We are seeking a contract-based Data Science Trainer to deliver engaging, hands-on training sessions to students and professionals. The ideal candidate will have practical experience in Data Science tools, techniques, and real-world applications, and be passionate about teaching and mentoring learners.
Key Responsibilities:
· Conduct instructor-led training (ILT) sessions in Data Science on campus.
· Develop and customize training content, modules, assignments, and assessments.
· Guide students through hands-on projects in Machine Learning, Data Science, Python, Data Analytics, etc.
· Mentor students on capstone projects, code reviews, and career readiness.
· Stay up to date with industry trends and integrate them into the training curriculum.
· Evaluate learner performance and provide regular feedback.
· Collaborate with the academic/curriculum team to enhance training delivery quality.
Required Skills & Qualifications:
· Bachelor's/Master's in Computer Science, Data Science, Statistics, or related fields.
· Proven industry or teaching experience in:
  - Python / R
  - Pandas, NumPy, Scikit-learn
  - Machine Learning algorithms
  - Data Visualization (Matplotlib, Seaborn, Tableau/Power BI)
  - SQL
  - Basics of Deep Learning (optional)
· Strong communication and presentation skills.
· Experience delivering training to college students or working professionals preferred.
Preferred Qualifications (Not Mandatory):
· Experience with online learning platforms (Zoom, Google Meet, LMS tools).
· Certification in Data Science/ML (e.g., IBM, Microsoft, Coursera).
· Prior experience in an EdTech or corporate training environment.
Contract Details:
Contract Type: Monthly
Payment: ₹1,000 - ₹1,500 per lecture/day
Expected Hours: 6 hours daily (6 days/week)
Start Date: Immediate
How to Apply: Please share your updated resume, a brief cover letter, and any relevant project or demo links to aayush@winnovation.org
Job Type: Contractual / Temporary
Contract length: 2 months
Pay: ₹1,000.00 - ₹1,500.00 per day
Schedule: Day shift
Work Location: In person
Posted 4 days ago
7.0 - 12.0 years
18 - 25 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Role & Responsibilities
Required Skills:
Strong Python programming experience, especially with pandas, numpy, matplotlib, and seaborn.
Experience in building monitoring dashboards or visualizations (e.g., Plotly, Dash, Streamlit).
Understanding of ML model evaluation metrics (e.g., precision, recall, drift, AUC); see the illustrative sketch after this posting.
Familiarity with model risk management concepts or frameworks.
Ability to write clean, well-documented code for reproducibility and audit-readiness.
Comfortable interpreting and working with structured model output and log files.
Excellent attention to detail and communication skills.
Experience in the banking domain is required.
Preferred candidate profile:
Notice Period: Immediate to 30 days
Posted 5 days ago
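The role above centres on monitoring deployed models: evaluation metrics (precision, recall, AUC) and drift detection. The snippet below is a minimal, illustrative sketch of how such checks are often written with scikit-learn and SciPy; the toy data, the 0.05 alert threshold, and the choice of a two-sample Kolmogorov-Smirnov test are assumptions made for illustration, not requirements stated in the posting.

import numpy as np
from scipy.stats import ks_2samp
from sklearn.metrics import precision_score, recall_score, roc_auc_score

# Toy monitoring batch: true labels, hard predictions, and model scores.
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])
y_pred = np.array([0, 1, 0, 0, 1, 0, 1, 1])
y_score = np.array([0.20, 0.90, 0.40, 0.10, 0.80, 0.30, 0.70, 0.95])

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("AUC:      ", roc_auc_score(y_true, y_score))

# Simple feature-drift check: compare a feature's training-time distribution
# with its live (scoring-time) distribution using a two-sample KS test.
rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=1000)
live_feature = rng.normal(loc=0.3, scale=1.0, size=1000)
result = ks_2samp(train_feature, live_feature)
print(f"KS statistic={result.statistic:.3f}, p-value={result.pvalue:.4f}")
if result.pvalue < 0.05:  # alert threshold chosen purely for illustration
    print("Possible drift detected in this feature.")

In a real monitoring pipeline these figures would typically be computed per scoring batch and surfaced in a dashboard (for example with Streamlit or Dash) rather than printed.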
0.0 - 2.0 years
0 Lacs
Yelahanka Satellite Town, Bengaluru, Karnataka
On-site
Job Title: Data Scientist – Real Estate Analytics
Location: Yelahanka, Bangalore | Full-Time | On-Site
About Us: Dharmic Homez is a fast-growing luxury real estate startup building ultra-premium villas in Bangalore. We’re seeking a Data Scientist to bring efficiency, insight, and intelligence across our operations, sales, and market strategy.
Key Responsibilities:
Build predictive models for lead conversion, pricing, ROI, and demand forecasting (see the illustrative sketch after this posting).
Analyze sales, CRM, marketing, and customer behavior data.
Develop dashboards using Power BI, Tableau, or Google Data Studio.
Perform geospatial analysis using GIS tools and the Google Maps API.
Clean, transform, and manage large datasets using SQL and Python (Pandas, NumPy).
Deploy machine learning algorithms (regression, classification, clustering).
Conduct competitive benchmarking via web scraping or APIs.
Collaborate cross-functionally with marketing, sales, and leadership teams.
Requirements:
3+ years of experience in data science or analytics.
Proficient in Python, SQL, scikit-learn, pandas, matplotlib, and seaborn.
Strong in statistics, data wrangling, data visualization, and ML models.
Experience with real estate data or customer analytics preferred.
Knowledge of APIs, ETL, and cloud tools (AWS/GCP) is a plus.
Apply at careers@dharmichomez.in with subject line: Data Scientist – Real Estate
Job Type: Full-time
Pay: ₹500,000.00 - ₹850,000.00 per year
Schedule: Day shift
Ability to commute/relocate: Yelahanka Satellite Town, Bengaluru, Karnataka: Reliably commute or planning to relocate before starting work (Preferred)
Application Question(s): Do you have forecasting experience in real estate?
Experience: Data science: 2 years (Preferred)
Location: Yelahanka Satellite Town, Bengaluru, Karnataka (Preferred)
Work Location: In person
Posted 5 days ago
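Since the posting above highlights price-prediction models built with pandas and scikit-learn, here is a minimal sketch of what such a model might look like. The villa dataset, the column names, and the choice of a random-forest regressor are invented purely for illustration and are not details from the role.

import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical villa listings; every column name here is made up.
df = pd.DataFrame({
    "built_up_sqft": [3200, 4100, 2800, 5000, 3600, 4500],
    "plot_sqft":     [2400, 3000, 2000, 4000, 2600, 3500],
    "bedrooms":      [3, 4, 3, 5, 4, 4],
    "price_lakhs":   [280, 360, 240, 520, 330, 410],
})

X = df.drop(columns="price_lakhs")
y = df["price_lakhs"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=42
)

model = RandomForestRegressor(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("MAE (lakhs):", mean_absolute_error(y_test, model.predict(X_test)))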
Matplotlib is a popular data visualization library in Python that is widely used in various industries. Job opportunities for matplotlib professionals in India are on the rise due to the increasing demand for data visualization skills. In this article, we will explore the job market for matplotlib in India and provide insights for job seekers looking to build a career in this field.
Here are five major cities in India actively hiring for matplotlib roles:
1. Bangalore
2. Delhi
3. Mumbai
4. Hyderabad
5. Pune
Salaries for matplotlib professionals in India vary by experience level. Entry-level professionals can expect to earn around INR 3-5 lakhs per annum, while experienced professionals can earn upwards of INR 10 lakhs per annum.
Career progression for matplotlib professionals typically follows a path from Junior Developer to Senior Developer to Tech Lead. As they gain experience and expertise in data visualization with matplotlib, they can take on more challenging roles and responsibilities.
In addition to proficiency in matplotlib, professionals in this field are often expected to have knowledge and experience in the following areas (a short illustrative sketch combining several of them follows this list):
- Python programming
- Data analysis
- Data manipulation
- Statistics
- Machine learning
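As a quick illustration of how these skills fit together, here is a minimal sketch, assuming a made-up monthly sales dataset: load it with pandas, derive a rolling statistic, and plot both series with matplotlib.

import pandas as pd
import matplotlib.pyplot as plt

# Made-up monthly sales figures, used only to illustrate the workflow.
df = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar", "Apr", "May", "Jun"],
    "sales": [120, 135, 150, 110, 170, 165],
})

print(df["sales"].describe())                          # basic statistics
df["sales_3mo_avg"] = df["sales"].rolling(3).mean()    # data manipulation

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(df["month"], df["sales"], marker="o", label="monthly sales")
ax.plot(df["month"], df["sales_3mo_avg"], linestyle="--", label="3-month average")
ax.set_xlabel("Month")
ax.set_ylabel("Sales")
ax.legend()
plt.show()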
Here are 25 interview questions for matplotlib roles; a short code sketch illustrating several of them follows the list:
- What is matplotlib and how is it used in data visualization? (basic)
- What are the different types of plots that can be created using matplotlib? (basic)
- How would you customize the appearance of a plot in matplotlib? (medium)
- Explain the difference between plt.show() and plt.savefig() in matplotlib. (medium)
- How do you handle missing data in a dataset before visualizing it using matplotlib? (medium)
- What is the purpose of the matplotlib.pyplot.subplots() function? (advanced)
- How would you create a subplot with multiple plots in matplotlib? (medium)
- Explain the use of the matplotlib.pyplot.bar() and matplotlib.pyplot.hist() functions. (medium)
- How can you annotate a plot in matplotlib? (basic)
- Describe the process of creating a 3D plot in matplotlib. (advanced)
- How do you set the figure size in matplotlib? (basic)
- What is the purpose of the matplotlib.pyplot.scatter() function? (medium)
- How would you create a line plot with multiple lines using matplotlib? (medium)
- Explain the difference between plt.plot() and plt.scatter() in matplotlib. (medium)
- How do you add a legend to a plot in matplotlib? (basic)
- Describe the use of color maps in matplotlib. (medium)
- How can you save a plot as an image file in matplotlib? (basic)
- What is the purpose of the matplotlib.pyplot.subplots_adjust() function? (medium)
- How do you create a box plot in matplotlib? (medium)
- Explain the use of the matplotlib.pyplot.pie() function for creating pie charts. (medium)
- How would you create a heatmap in matplotlib? (advanced)
- What are the different types of coordinate systems in matplotlib? (advanced)
- How do you handle axis limits and ticks in matplotlib plots? (medium)
- Explain the role of the matplotlib.pyplot.imshow() function. (medium)
- How would you create a bar plot with error bars in matplotlib? (advanced)
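To make several of the questions above concrete, here is a small, self-contained sketch covering plt.subplots(), plot customization, legends, annotations, subplots_adjust(), and the difference between savefig() (writes the figure to disk) and show() (opens an interactive window). The data is generated randomly just for the example, and the output filename is arbitrary.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)

# plt.subplots() returns a figure and a grid of axes in one call.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Line plots with customized colors, line styles, labels, and an annotation.
ax1.plot(x, np.sin(x), color="tab:blue", linestyle="-", label="sin(x)")
ax1.plot(x, np.cos(x), color="tab:orange", linestyle="--", label="cos(x)")
ax1.annotate("peak", xy=(np.pi / 2, 1.0), xytext=(4, 0.6),
             arrowprops=dict(arrowstyle="->"))
ax1.set_title("Line plots")
ax1.set_xlabel("x")
ax1.legend()

# Histogram of normally distributed random values.
ax2.hist(rng.normal(size=500), bins=30, color="tab:green", edgecolor="black")
ax2.set_title("Histogram")

fig.subplots_adjust(wspace=0.3)              # tune spacing between the subplots

fig.savefig("matplotlib_demo.png", dpi=150)  # save the figure to an image file
plt.show()                                   # display it interactively

Questions on 3D plots, heatmaps, and imshow() build on the same Axes interface, for example via fig.add_subplot(projection="3d") or ax.imshow() on a 2D array.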
As the demand for data visualization skills continues to grow, mastering matplotlib can open up exciting job opportunities in India. By preparing thoroughly and showcasing your expertise in matplotlib, you can confidently apply for roles and advance your career in this dynamic field. Good luck with your job search!