1598 Matplotlib Jobs - Page 38

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

0.0 - 1.0 years

1 - 3 Lacs

Hyderabad

Work from Office

Python Full Stack Developer Intern (Onsite, Gachibowli, Hyderabad). Location: Onsite, Vasavi Skycity, Gachibowli, Hyderabad. Timings: US Shift (Night Shift). Stipend: 10,000 to 15,000 per month. Cab Facility: Not provided. Internship Type: Full-time, onsite only. Eligibility: Graduated in 2024 or earlier (2025 graduates will not be considered). Duration: 6 months.

About the Role: We are looking for a highly skilled and self-driven Python Full Stack Developer Intern with strong hands-on coding abilities and in-depth technical understanding. This is not a training role; we need contributors who can work independently on real-time development projects.

Must-Have Requirements: Graduation year 2024 or earlier only. Must have independently developed and deployed at least one complete dynamic web application or website (not an academic project). Deep technical knowledge in both frontend and backend development. Excellent coding and debugging skills; must be comfortable writing production-grade code. Able to work independently without constant guidance. Willing to work onsite and during US hours.

Technical Skills Required: Backend: Python (Django / Flask / FastAPI). Frontend: HTML, CSS, JavaScript, React or Angular. Database: PostgreSQL / MySQL / MongoDB. Version control: Git & GitHub. Understanding of RESTful APIs, authentication, and security practices. Experience with deployment (Heroku, AWS, etc.) is a plus.

Nice-to-Have Skills: Knowledge of Docker and CI/CD pipelines. Familiarity with cloud services (AWS/GCP). Exposure to Agile/Scrum methodology.

What You'll Do: Work on real-time development projects from scratch. Write clean, maintainable, and scalable code. Collaborate with remote teams during US timings. Independently handle assigned modules/features. Continuously learn and adapt to new technologies.

Note: Academic/college projects will NOT be considered. Candidates must be able to show at least one independently built dynamic web app or website (with codebase and/or live demo).

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About the AI Pod: Join our innovative AI Pod focused on revolutionizing performance marketing creatives. Our mission is to build cutting-edge internal tools powered by Artificial Intelligence to automate the creation of high-impact marketing videos. By cutting production time, enabling creatives at scale, and boosting ROI, this pod will directly empower our marketing teams to launch faster, test more rigorously, and drive significant performance improvements across platforms like Meta and Google. Role Summary: We are looking for two skilled and passionate AI Engineers to be foundational members of our AI Pod. You will be responsible for designing, developing, and implementing the core AI and Machine Learning models and systems that power our automated video creation platform. You will work closely with other engineers and a marketing stakeholder to translate creative requirements into intelligent, scalable solutions that deliver against our ambitious goals. Key Responsibilities: Develop and implement AI/ML models and algorithms for various stages of the automated video creation pipeline, including (but not limited to) text-to-video generation, voiceover synthesis, animation sequencing, and content summarization/copy generation. Integrate with and leverage external generative AI APIs and platforms (e.g., Text-to-Video tools like Runway, Pika Labs, Kaiber; Voice Generation tools like ElevenLabs, Play.ht; Language Models like OpenAI, Claude). Design and build data pipelines to process input assets (images, text, product info) and prepare them for AI model consumption. Contribute to the development of the "Smart Creative Engine" (Phase 2), focusing on generating creative variations based on audience, placement, and messaging through intelligent automation. Optimize models and pipelines for performance, scalability, and efficiency to ensure rapid video turnaround times. Collaborate with the fullstack engineer to build robust APIs and backend services that expose AI functionalities to the frontend and potential ad platform integrations. Work closely with the Marketing Stakeholder to understand creative needs, gather feedback, and iterate on AI-generated outputs. Stay up-to-date with the latest advancements in generative AI, computer vision, natural language processing, and automated content creation. Implement best practices for model training, evaluation, deployment, and monitoring. Contribute to code reviews and maintain high code quality standards. Required Skills and Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, Artificial Intelligence, Machine Learning, or a related field, or equivalent practical experience. Proven experience developing and deploying AI/ML models in a production environment. Strong programming skills in Python. Proficiency with major AI/ML frameworks such as TensorFlow and/or PyTorch. Experience with libraries like Scikit-Learn, Keras, Matplotlib, SciPy, and potentially OpenCV or Tesseract. Familiarity with generative AI concepts and models (e.g., GANs, Transformers, Diffusion Models). Experience working with and integrating external APIs, particularly those related to AI/ML or content generation. Understanding of data processing, feature engineering, and model evaluation techniques. Ability to work effectively in an Agile, cross-functional team environment. Strong problem-solving skills and a creative approach to technical challenges. Bonus Points (Nice to Have): Experience with Text-to-Video generation techniques or platforms. 
Experience with Voice Generation (TTS) technologies or platforms. Familiarity with cloud platforms (AWS, GCP, Azure) for model deployment and scaling. Experience with containerization (Docker) and orchestration (Kubernetes). Prior experience in the AdTech or Marketing Technology domain. Experience with workflow automation tools like Zapier or Make. Understanding of video processing techniques. What We Offer: The opportunity to be part of a foundational AI Pod with a clear mission and direct impact on business growth. Work on exciting, cutting-edge applications of generative AI in a real-world marketing context. A fast-paced, collaborative, and innovative work environment. The chance to significantly influence the technical direction and success of the AI Pod.

Posted 1 month ago

Apply

5.0 years

0 - 5 Lacs

Noida

On-site

SynapseIndia is one of the top IT outsourcing companies, based out of Noida with clients across the globe. We are a 23+ year-old organization, and with more than two decades of expertise we have worked with both large brands and startups.

Why work with us? We are a Microsoft Gold partner, Google partner, and Shopify partner company with certified professionals. SynapseIndia is an MNC with clients and employees all over the world. SynapseIndia has 500 IT professionals and plans to hire more than 500 additional developers in the next 5 months. We have a structured environment with industry-leading CMMI Level 5 compliant processes. Several IT professionals who joined as developers/programmers are now leading teams. Salaries are always paid on time; from the time the company started until today, there has never been a delay. Developers get exposure to international clients (mostly US-based) very early in their careers. Despite market conditions, we have not laid off people. We never work on holidays, so you can maintain a balance between your personal and professional life. Office timings are 9:30 AM to 6:30 PM, and the 2nd and last Saturday of every month is off for everyone. Annual salary reviews are based on company and individual performance. This is a permanent work-from-office job opportunity.

Who are we looking for? Designation: Sr. Python Developer (AI/ML). Experience range: 5+ years.

What skills and experience are we looking for? Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related field. 5+ years of professional experience in Python programming with a focus on AI/ML. Strong experience with Python ML libraries such as scikit-learn, TensorFlow, Keras, PyTorch, XGBoost, etc. Solid understanding of machine learning algorithms, neural networks, and deep learning. Experience with data manipulation libraries (Pandas, NumPy) and data visualization tools (Matplotlib, Seaborn). Experience with cloud platforms (AWS, GCP, Azure) and deploying ML models using Docker and Kubernetes. Familiarity with NLP, computer vision, or other AI domains is a plus. Strong problem-solving skills and the ability to work independently and collaboratively. Excellent communication skills. Experience with distributed computing and big data tools like Spark or Hadoop. Knowledge of MLOps best practices and tools. Familiarity with REST APIs and microservices architecture. Experience with version control systems like Git. Understanding of the software development lifecycle (SDLC) and Agile methodologies.

What is the work? Design, develop, and deploy machine learning models and AI algorithms using Python and relevant libraries. Collaborate with cross-functional teams to gather requirements and translate business problems into AI/ML solutions. Optimize and scale machine learning pipelines and systems for production. Perform data preprocessing, feature engineering, and exploratory data analysis. Implement and fine-tune deep learning models using frameworks like TensorFlow, PyTorch, or similar. Conduct experiments and evaluate model performance using statistical methods. Write clean, maintainable, and well-documented code. Mentor junior developers and participate in code reviews. Stay up-to-date with the latest AI/ML research and technologies. Ensure model deployment is seamless and models are integrated with existing infrastructure.
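As a purely illustrative aside (not part of the listing above), the kind of scikit-learn train-and-evaluate workflow such a role describes might look like the minimal sketch below; the dataset and model choice are arbitrary assumptions, not the employer's stack:

```python
# Minimal, illustrative scikit-learn train/evaluate sketch (not from the posting).
# The dataset and model are arbitrary stand-ins for a real business problem.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```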

Posted 1 month ago

Apply

6.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

About the Role: We are seeking a passionate Data Scientist II to join our team. You’ll work on data gathering, exploratory data analysis (EDA), feature engineering, and developing machine learning models. The role involves writing production-level code, designing and deploying algorithms, and maintaining models in live environments. You’ll also manage multiple problems, mentor junior team members, and contribute to patent or publication writing.

This is the job you are searching for if you love:
● Data gathering, EDA, feature engineering, and building models
● Writing idiomatic, production-level code and deploying it in live systems
● Designing algorithms using feature engineering, statistical modeling, and ML techniques
● Maintaining and optimizing models in production
● Working on multiple problem statements simultaneously
● Managing stakeholders—both internal and external
● Writing technical documents for patents and publications
● Mentoring junior data scientists

You could be the next game changer if you:
● Can define problems based on product or client requirements
● Have strong coding skills—Python preferred
● Are highly proficient in libraries like pandas, numpy, matplotlib, sklearn, seaborn, nltk, scipy, etc.
● Can process data at scale and understand distributed systems
● Have 3–6 years of experience in analytics or data science roles
● Possess strong critical and creative thinking skills
● Have hands-on experience with PyTorch, Keras, and/or TensorFlow
● Bonus: Published patents or papers in national/international peer-reviewed journals or conferences
● Bonus: Experience with Apache Spark, Hadoop, or related tools
● Bonus: Background in consumer app or product domains

Posted 1 month ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Type: Full Time. Experience: 0 months to 2 years. Type: Virtual Hiring. Last Date: 25-June-2025. Posted on: 26-May-2025. Education: BE/B.Tech, BSc. Job Id: Aeries/092/25-26. Designation: Trainee - Data Science. Location: Hyderabad. Experience Range: 0 - 2 Years. Qualification: Graduate in Data Science, Computer Science, Statistics, or related fields. Shifts (if any): 11 am onwards (candidates should be flexible to work as per business requirements).

Role & Responsibilities: Assist in collecting, cleaning, and analyzing large data sets. Support the development and testing of ML models and data-driven algorithms. Generate insights through data visualization and basic statistical analysis. Write clean, maintainable Python code for data-related tasks. Collaborate with cross-functional teams to understand requirements and deliver results. Document findings, code, and processes clearly for internal use.

Key Skills & Tools: Proficiency in Python and key libraries (NumPy, Pandas, Scikit-learn, Matplotlib, etc.). Understanding of basic statistical concepts and machine learning algorithms. Familiarity with SQL and data manipulation. Exposure to BI tools (like Power BI or Tableau) is a plus. Good problem-solving ability and eagerness to learn. Strong communication and collaboration skills.

Good To Have: Certification or coursework in data science and machine learning. Previous project work or internship experience in data science (preferred but not mandatory).

Posted 1 month ago

Apply

2.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

About The Company: Axis My India is India’s foremost Consumer Data Intelligence Company, which, in partnership with Google, is building a single-stop People Empowerment Platform, the ‘a’ app, that aims to change people’s awareness, accessibility, and utilization of a slew of services. At Axis, we are dedicated to making a tangible impact on the lives of millions. If you're passionate about creating meaningful changes and aren't afraid to get your hands dirty, we want you on our team! For more insights into the company, kindly visit our website: https://www.axismyindia.org

Role Overview: Axis My India is seeking a skilled Data Analyst & Data Visualizer with at least 2 years of experience to join our team supporting the Axis My India “a” app and custom projects. In this role, you will be responsible for analyzing and interpreting data as well as designing impactful visualizations and dashboards. Your work will help drive data-driven decision-making, enhance user engagement, and support the company’s mission to connect and resolve problems for 250 million Indian households through innovative digital solutions.

Key Responsibilities: Collect, clean, and process data from multiple sources, including databases, APIs, and third-party platforms, to ensure data accuracy and reliability for the app data and, similarly, for other custom projects. Analyze data to identify trends, patterns, and actionable insights that inform product development, market research, and social impact initiatives. Design and develop interactive dashboards and visualizations using tools such as Power BI or similar platforms to communicate findings clearly. Present complex data insights using fusion charts, concise reports, and visual formats to technical and non-technical stakeholders. Collaborate closely with cross-functional teams, including Product, Technology, Operations, and Research, to define analytics and visualization requirements. Monitor and report on key performance indicators (KPIs) relevant to the app’s usage, impact, and outreach. Ensure data integrity and governance throughout the data lifecycle. Stay updated on the latest analytics and visualization tools, techniques, and best practices to continuously improve the data experience for Axis My India projects and app users. Create presentations, documents, and reports based on requirements.

Required Skills & Qualifications: Bachelor’s degree in Statistics, Mathematics, Computer Science, Data Science, Design, or a related field. Minimum 2 years of professional experience in data analysis and data visualization roles. Proficiency in Python and Power BI. Proficient with basic statistics. Experience with data visualization tools like Power BI, PowerPoint, and Excel. Strong analytical, problem-solving, and data storytelling skills. Knowledge of data blending, dashboard optimization, and UI/UX principles. Excellent communication and collaboration skills, with the ability to translate complex data into actionable insights. Ability to manage multiple projects and deliver results in a fast-paced environment.

Preferred Experience: Experience working with app-based data and multi-project analytics environments. Familiarity with analyzing and visualizing multiple survey datasets. Has worked with basic statistics for data analysis.

Requirements (Technical Skills): Python: data manipulation, analysis, and visualization with libraries like Pandas, NumPy, Matplotlib, and Seaborn. Power BI: industry-standard tool for creating interactive dashboards and visual reports, enabling clear communication of insights to stakeholders. Matplotlib/Seaborn: Python libraries for custom visualizations and advanced charting. Excel: useful for basic data analysis, quick visualizations, and reporting. Proficiency in cleaning, preprocessing, and transforming raw data: handling missing values, outliers, and duplicates, and standardizing formats to ensure data accuracy and reliability. Must be strong in applying statistical concepts for analysis and visual design principles.

Analytical & Business Skills: Critical thinking & problem solving: a strong analytical mindset to interpret complex data, identify trends, and provide actionable insights. Data transformation: the ability to turn complex data into compelling narratives that highlight key findings and support data-driven decision-making.

Communication & Collaboration: Effective communication: the ability to present complex data insights clearly and succinctly to internal teams and stakeholders, including senior executives, through reports, dashboards, and presentations. Working with cross-functional teams: experience collaborating with cross-functional teams, gathering requirements, and incorporating feedback to refine dashboards and reports.

Benefits: Competitive salary and benefits package. Opportunity to make significant contributions to a dynamic company. Evening snacks are provided by the company to keep you refreshed towards the end of the day. Walking distance from Chakala metro station, making commuting easy and convenient.

At Axis My India, we value discipline and focus. Our team members wear uniforms, adhere to a no-mobile policy during work hours, and work from our office with alternate Saturdays off. If you thrive in a structured environment and are committed to excellence, we encourage you to apply.

Posted 1 month ago

Apply

3.0 - 6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

We are looking for a Business Intelligence Analyst who can organize and analyze data from various sources and create reports and metrics. The candidate should be able to both deeply analyze data and succinctly explain metrics and insights from it to different stakeholders.

Responsibilities: As a BI Analyst, you will be responsible for managing various data and metrics crucial to our business operations, including company sales and revenues, customer-level metrics, and product usage metrics. Additionally, you will collaborate with finance teams, assisting in tasks such as invoicing and cash flow management. This role also involves close collaboration with product and operations teams to analyze product metrics and enhance the overall product: Manage and analyze company sales, revenues, profits, and other KPIs. Analyse photographer-level metrics, e.g. sales, average basket, conversion rate, and growth. Analyse product usage data and collaborate with the product team to improve product adoption and performance. Perform ad-hoc analysis to support various business initiatives. Assist the finance team in invoicing, cash flow management, and revenue forecasting.

Requirements: Expert-level proficiency in Excel. Familiarity with one or more data visualization and exploration tools, e.g. Tableau, Kibana, Grafana, etc. Strong analytical and problem-solving skills. Comfortable with analyzing large amounts of data and finding patterns and anomalies. Excellent communication and collaboration abilities. Experience in SQL for data querying and manipulation. Experience in Python for data analysis, e.g. pandas, numpy, scipy, matplotlib, etc. Familiarity with statistical analysis techniques. Experience with basic accounting, e.g. balance sheets, P&L statements, double-entry accounting, etc. Experience Required: 3–6 years. (ref:hirist.tech)
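To make the Python requirement above concrete, here is a minimal, hypothetical sketch of the photographer-level analysis the listing mentions (revenue, average basket, conversion rate) using pandas; the column names and figures are invented for illustration only:

```python
# Hypothetical sketch of photographer-level KPIs with pandas (illustrative only).
import pandas as pd

sales = pd.DataFrame({
    "photographer": ["A", "A", "B", "B", "C"],
    "order_value":  [120.0, 80.0, 200.0, 150.0, 90.0],
    "converted":    [1, 0, 1, 1, 0],
})

kpis = sales.groupby("photographer").agg(
    revenue=("order_value", "sum"),
    average_basket=("order_value", "mean"),
    conversion_rate=("converted", "mean"),
)
print(kpis)
```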

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

What You'll Do: SQL Development & Optimization: Write complex and optimized SQL queries, including advanced joins, subqueries, analytical functions, and stored procedures, to extract, manipulate, and analyze large datasets. Data Pipeline Management: Design, build, and support robust data pipelines to ensure timely and accurate data flow from various sources into our analytical platforms. Statistical Data Analysis: Apply a strong foundation in statistical data analysis to uncover trends, patterns, and insights from data, contributing to data-driven decision-making. Data Visualization: Work with various visualization tools (e.g., Google PLX, Tableau, Data Studio, Qlik Sense, Grafana, Splunk) to create compelling dashboards and reports that clearly communicate insights. Web Development Contribution: Leverage your experience in web development (HTML, CSS, jQuery, Bootstrap) to support data presentation layers or internal tools. Machine Learning Collaboration: Utilize your familiarity with ML tools and libraries (Scikit-learn, Pandas, NumPy, Matplotlib, NLTK) to assist in data preparation and validation for machine learning initiatives. Agile Collaboration: Work effectively within an Agile development environment, contributing to sprints and adapting to evolving requirements. Troubleshooting & Problem-Solving: Apply strong analytical and troubleshooting skills to identify and resolve data-related issues.

Skills Required: Expert in SQL (joins, subqueries, analytics functions, stored procedures). Experience building & supporting data pipelines. Strong foundation in statistical data analysis. Knowledge of visualization tools: Google PLX, Tableau, Data Studio, Qlik Sense, Grafana, Splunk, etc. Experience in web dev: HTML, CSS, jQuery, Bootstrap. Familiarity with ML tools: Scikit-learn, Pandas, NumPy, Matplotlib, NLTK, and more. Hands-on with Agile environments. Strong analytical & troubleshooting skills. Bachelor's in CS, Math, Stats, or equivalent. (ref:hirist.tech)

Posted 1 month ago

Apply

3.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Title: Python Developer. Experience: 1 – 3 Years. Location: Mumbai, Maharashtra (Work From Office). Shift Timings: Regular Shift, 10:00 AM – 6:00 PM.

We are hiring a Python Developer whose role involves developing and maintaining risk analytics tools and automating reporting processes to support commodity risk management.

Key Responsibilities: Develop, test, and maintain Python scripts for data analysis and reporting. Write scalable, clean code using Pandas, NumPy, Matplotlib, and OOP principles. Collaborate with risk analysts to implement process improvements. Document workflows and maintain SOPs in Confluence. Optimize code performance and adapt to evolving business needs.

Requirements: Strong hands-on experience with Python, Pandas, NumPy, Matplotlib, and OOP. Good understanding of data structures and algorithms. Experience with Excel and VBA is an added advantage. Exposure to financial/market risk environments is preferred. Excellent problem-solving, communication, and documentation skills.
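As an illustration of the Pandas/NumPy/OOP combination this posting asks for, a small risk-reporting helper might be sketched as below; the class, column names, and the historical-simulation VaR choice are assumptions made for illustration, not the employer's actual tooling:

```python
# Illustrative sketch: a small class-based risk report built with pandas/NumPy.
# Column names and the VaR method are hypothetical choices for this example.
import numpy as np
import pandas as pd

class PositionReport:
    def __init__(self, positions: pd.DataFrame):
        self.positions = positions

    def exposure_by_commodity(self) -> pd.Series:
        # Total notional exposure per commodity.
        return self.positions.groupby("commodity")["notional"].sum()

    def daily_var(self, returns: pd.Series, confidence: float = 0.95) -> float:
        # Historical-simulation VaR: loss at the chosen percentile of past returns.
        return float(-np.percentile(returns, (1 - confidence) * 100))

positions = pd.DataFrame({"commodity": ["gold", "oil", "oil"], "notional": [1e6, 5e5, 2.5e5]})
report = PositionReport(positions)
print(report.exposure_by_commodity())
print(report.daily_var(pd.Series(np.random.default_rng(0).normal(0, 0.01, 250))))
```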

Posted 1 month ago

Apply

1.0 - 7.0 years

0 Lacs

India

On-site

Data Scientist. Experience: 1-7 years. Location: Pune (Work From Office).

Job Description: Strong background in machine learning (unsupervised and supervised techniques) with significant experience in text analytics/NLP. Excellent understanding of machine learning techniques and algorithms, such as k-NN, Naive Bayes, SVM, Decision Forests, logistic regression, MLPs, RNNs, etc. Strong programming ability in Python with experience in the Python data science ecosystem: Pandas, NumPy, SciPy, scikit-learn, NLTK, etc. Good knowledge of database query languages like SQL and experience with databases (PostgreSQL / MySQL / Oracle / MongoDB). Excellent verbal and written communication skills. Excellent analytical and problem-solving skills. A degree in Computer Science, Engineering, or a relevant field is preferred. Proven experience as a Data Analyst or Data Scientist.

Good To Have: Familiarity with Hive, Pig, and Scala. Experience in embeddings, Retrieval Augmented Generation (RAG), and Gen AI. Experience with data visualization tools like matplotlib, plotly, seaborn, ggplot, etc. Experience using cloud technologies on AWS / Microsoft Azure.

Job Type: Full-time. Benefits: Provident Fund. Work Location: In person.

Posted 1 month ago

Apply

1.0 - 3.0 years

3 Lacs

Nagercoil

On-site

Job Title: Data Scientist. Location: Nagercoil. Job Type: Full-time. Experience: 1-3 years (freshers with a good academic background can also apply). Salary: 1.5 - 2.5 LPA.

About the Role: We are seeking a talented and analytical Data Scientist to join our team. You will be responsible for turning raw data into actionable insights that guide strategic decisions and product improvements. Ideal candidates should have strong problem-solving skills, proficiency in data science tools, and a passion for working with data.

Key Responsibilities: Analyze large volumes of structured and unstructured data to find patterns and trends. Build predictive models and machine learning algorithms. Work closely with business teams to identify opportunities for leveraging data. Communicate findings effectively through reports, dashboards, and visualizations. Perform data cleaning, validation, and preprocessing. Develop A/B testing frameworks and analyze test results. Collaborate with software engineers and analysts to implement data-driven solutions.

Required Skills: Strong knowledge of Python or R for data analysis and machine learning. Experience with libraries such as Pandas, NumPy, Scikit-learn, TensorFlow, or PyTorch. Proficiency in SQL for data querying. Hands-on experience with data visualization tools like Power BI, Tableau, or Matplotlib/Seaborn. Understanding of statistics, probability, and algorithms. Excellent analytical and problem-solving skills.

Preferred Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, Mathematics, or a related field. Experience with big data tools like Hadoop or Spark is a plus. Knowledge of cloud platforms such as AWS, Azure, or Google Cloud.

Job Types: Full-time, Permanent. Pay: Up to ₹25,000.00 per month. Schedule: Day shift. Work Location: In person.
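One of the responsibilities above is developing A/B testing frameworks and analysing results; a minimal sketch of such an analysis, assuming a simple two-proportion z-test from statsmodels and made-up counts, could look like this:

```python
# Illustrative A/B test analysis with a two-proportion z-test (counts are made up).
from statsmodels.stats.proportion import proportions_ztest

conversions = [230, 270]   # conversions in variant A and variant B
visitors = [5000, 5000]    # visitors exposed to each variant

stat, p_value = proportions_ztest(conversions, visitors)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference in conversion rate is significant at the 5% level.")
```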

Posted 1 month ago

Apply

0 years

0 Lacs

India

Remote

Job Title: Data Analyst Trainee Location: Remote Job Type: Internship (Full-Time) Duration: 1–3 Months Stipend: ₹25,000/month Department: Data & Analytics Job Summary: We are seeking a motivated and analytical Data Analyst Trainee to join our remote analytics team. This internship is perfect for individuals eager to apply their data skills in real-world projects, generate insights, and support business decision-making through analysis, reporting, and visualization. Key Responsibilities: Collect, clean, and analyze large datasets from various sources Perform exploratory data analysis (EDA) and generate actionable insights Build interactive dashboards and reports using Excel, Power BI, or Tableau Write and optimize SQL queries for data extraction and manipulation Collaborate with cross-functional teams to understand data needs Document analytical methodologies, insights, and recommendations Qualifications: Bachelor’s degree (or final-year student) in Data Science, Statistics, Computer Science, Mathematics, or a related field Proficiency in Excel and SQL Working knowledge of Python (Pandas, NumPy, Matplotlib) or R Understanding of basic statistics and analytical methods Strong attention to detail and problem-solving ability Ability to work independently and communicate effectively in a remote setting Preferred Skills (Nice to Have): Experience with BI tools like Power BI, Tableau, or Google Data Studio Familiarity with cloud data platforms (e.g., BigQuery, AWS Redshift) Knowledge of data storytelling and KPI measurement Previous academic or personal projects in analytics What We Offer: Monthly stipend of ₹25,000 Fully remote internship Mentorship from experienced data analysts and domain experts Hands-on experience with real business data and live projects Certificate of Completion Opportunity for a full-time role based on performance

Posted 1 month ago

Apply

0 years

0 Lacs

India

Remote

Position: Data Analyst Intern (Full-Time) Company: Lead India Location: Remote Stipend: ₹25,000/month Duration: 1–3 months (Full-Time Internship) About Lead India: Lead India is a forward-thinking technology company that helps businesses make smarter decisions through data. We provide meaningful internship opportunities for emerging professionals to gain real-world experience in data analysis, reporting, and decision-making. Role Overview: We are seeking a Data Analyst Intern to support our data and product teams in gathering, analyzing, and visualizing business data. This internship is ideal for individuals who enjoy working with numbers, identifying trends, and turning data into actionable insights. Key Responsibilities: Analyze large datasets to uncover patterns, trends, and insights Create dashboards and reports using tools like Excel, Power BI, or Tableau Write and optimize SQL queries for data extraction and analysis Assist in data cleaning, preprocessing, and validation Collaborate with cross-functional teams to support data-driven decisions Document findings and present insights to stakeholders Skills We're Looking For: Strong analytical and problem-solving skills Basic knowledge of SQL and data visualization tools (Power BI, Tableau, or Excel) Familiarity with Python for data analysis (pandas, matplotlib) is a plus Good communication and presentation skills Detail-oriented with a willingness to learn and grow What You’ll Gain: ₹25,000/month stipend Real-world experience in data analysis and reporting Mentorship from experienced analysts and developers Remote-first, collaborative work environment Potential for a Pre-Placement Offer (PPO) based on performance

Posted 1 month ago

Apply

2.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: Data Science Trainer. Location: B-11, Sector 2, Noida (Work from Office Only). Job Type: Full-Time. Experience: 0–2 Years. Salary: 20k – 30k. Immediate joiners preferred.

Job Summary: We are looking for a passionate and knowledgeable Data Science Trainer to join our team. This is an in-office role, ideal for individuals who have strong foundational knowledge in data analytics and data science and are eager to train and mentor aspiring professionals. Freshers with solid subject knowledge and a passion for teaching are welcome to apply. ❗ Note: This is a 100% on-site position. Remote work is not available. If you are only looking for remote opportunities, please do not apply.

Key Responsibilities: Deliver classroom training sessions on Data Science, Data Analytics, and related tools and technologies. Develop and update training materials, assignments, and projects. Provide hands-on demonstrations and practical exposure using real-time datasets. Clarify students’ doubts, conduct assessments, and provide constructive feedback. Stay updated with the latest trends in data science and analytics. Monitor learner progress and suggest improvements.

Required Skills: Strong knowledge in Python, Data Analysis, Statistics, Machine Learning, Pandas, NumPy, and Matplotlib/Seaborn. Understanding of tools such as Jupyter Notebook, Excel, and SQL is a plus. Good communication and presentation skills. Passion for teaching and mentoring students.

Eligibility: Graduates in Computer Science, Statistics, Mathematics, or related fields. Freshers with excellent knowledge in Data Science are encouraged to apply. Candidates must be willing to work on-site at our office location.

How to Apply: Interested candidates should apply immediately with their updated resume. Shortlisted candidates will be contacted for a personal interview.

Posted 1 month ago

Apply

2.0 - 3.0 years

25 - 30 Lacs

Bengaluru

Work from Office

Job Title: Data Scientist – OpenCV. Experience: 2–3 Years. Location: Bangalore. Notice Period: Immediate Joiners Only.

Job Overview: We are looking for a passionate and driven Data Scientist with a strong foundation in computer vision, image processing, and OpenCV. This role is ideal for professionals with 2–3 years of experience who are excited about working on real-world visual data problems and eager to contribute to impactful projects in a collaborative environment.

Key Responsibilities: Develop and implement computer vision solutions using OpenCV and Python. Work on tasks including object detection, recognition, tracking, and image/video enhancement. Clean, preprocess, and analyze large image and video datasets to extract actionable insights. Collaborate with senior data scientists and engineers to deploy models into production pipelines. Contribute to research and proof-of-concept projects in the field of computer vision and machine learning. Prepare clear documentation for models, experiments, and technical processes.

Required Skills: Proficient in OpenCV and image/video processing techniques. Strong coding skills in Python, with familiarity in libraries such as NumPy, Pandas, and Matplotlib. Solid understanding of basic machine learning and deep learning concepts. Hands-on experience with Jupyter Notebooks; exposure to TensorFlow or PyTorch is a plus. Excellent analytical, problem-solving, and debugging skills. Effective communication and collaboration abilities.

Preferred Qualifications: Bachelor’s degree in Computer Science, Data Science, Electrical Engineering, or a related field. Practical exposure through internships or academic projects in computer vision or image analysis. Familiarity with cloud platforms (AWS, GCP, Azure) is an added advantage.

What We Offer: A dynamic and innovation-driven work culture. Guidance and mentorship from experienced data science professionals. The chance to work on impactful, cutting-edge projects in computer vision. Competitive compensation and employee benefits.
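For context on the OpenCV work described above, a basic preprocessing and contour-detection pass might look like the sketch below; the image path is hypothetical and the parameters are arbitrary defaults, not the team's actual pipeline:

```python
# Illustrative OpenCV preprocessing + contour detection (hypothetical image path).
import cv2

image = cv2.imread("sample_frame.jpg")           # hypothetical input frame
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # grayscale conversion
blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # noise reduction before edge detection
edges = cv2.Canny(blurred, 50, 150)              # Canny edge map

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(f"Detected {len(contours)} contours")
```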

Posted 1 month ago

Apply

5.0 - 9.0 years

9 - 13 Lacs

Hyderabad

Work from Office

Job Summary: ServCrust is a rapidly growing technology startup with the vision to revolutionize India's infrastructure by integrating digitization and technology throughout the lifecycle of infrastructure projects.

About The Role: As a Data Science Engineer, you will lead data-driven decision-making across the organization. Your responsibilities will include designing and implementing advanced machine learning models, analyzing complex datasets, and delivering actionable insights to various stakeholders. You will work closely with cross-functional teams to tackle challenging business problems and drive innovation using advanced analytics techniques.

Responsibilities: Collaborate with strategy, data engineering, and marketing teams to understand and address business requirements through advanced machine learning and statistical models. Analyze large spatiotemporal datasets to identify patterns and trends, providing insights for business decision-making. Design and implement algorithms for predictive and causal modeling. Evaluate and fine-tune model performance. Communicate recommendations based on insights to both technical and non-technical stakeholders.

Requirements: A Ph.D. in computer science, statistics, or a related field. 5+ years of experience in data science. Experience in geospatial data science is an added advantage. Proficiency in Python (Pandas, NumPy, scikit-learn, PyTorch, StatsModels, Matplotlib, and Seaborn); experience with GeoPandas and Shapely is an added advantage. Strong communication and presentation skills.

Posted 1 month ago

Apply

5.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About the Company: Brakes India is at the forefront of leveraging AI-driven analytics in manufacturing. We are setting up an Advanced Analytics team to drive data-driven decision-making across the organization.

About the Role: We are looking for an experienced Senior Data Scientist who will be part of this new Center of Excellence (CoE) and help build a high-performing analytics team. In this role, you will leverage your expertise in data analysis, machine learning, and artificial intelligence to drive insights and develop innovative solutions. You will collaborate with business SMEs and other IT divisions to identify business challenges and implement data-driven strategies that enhance our products and processes and help grow our business.

Responsibilities: The Senior Data Scientist will: Collect, clean, and analyze large datasets to extract meaningful insights. Utilize statistical methods to interpret data and identify trends. Design, develop, and implement machine learning models and algorithms tailored to specific business needs. Optimize models for performance and accuracy. Work closely with stakeholders to define project goals and deliver actionable insights. Stay up-to-date with the latest AI/ML trends, tools, and technologies. Experiment with new approaches to enhance our data science capabilities. Present findings and recommendations to both technical and non-technical audiences. Prepare clear documentation for methodologies and results. Develop metrics to assess the effectiveness of models and solutions. Continuously monitor and improve model performance. Lead the CoE to meet its stated objectives for formulating policies, standards, ethics, tools, technology stack, and procedures around the use of AI and ML in the organization, among others. Mentor data scientists and support organizational skilling in AI.

Qualifications: Master’s or Bachelor’s degree in Computer Science, Data Science, Statistics, or a related field.

Required Skills: 5-7 years' experience in data science, machine learning, or artificial intelligence, with a strong portfolio of projects. Proficiency in programming languages such as Python, R, or Java, and SQL. Experience with ML platforms like Azure Machine Learning, Databricks, and Azure Data Factory, and with ML libraries (e.g., TensorFlow, PyTorch, scikit-learn). Expertise in working with big data technologies (e.g., Hadoop, Spark) and databases like Azure Data Lake, Delta Lake, and Snowflake. Strong in MLOps and deployment: model training, versioning, monitoring, and deployment. Strong statistical analysis skills and experience with data visualization tools (e.g., Power BI, Matplotlib, Seaborn). Experience with deep learning frameworks and natural language processing (NLP). Understanding of business processes and the ability to translate business needs into technical requirements. Experience in AI/ML deployment in the manufacturing industry is highly desired. Excellent problem-solving and critical-thinking skills. Strong communication and interpersonal skills. Experience in setting up CoEs or analytics practices is a plus.

Why Join Us? Opportunity to be part of an Advanced Analytics CoE setup. Work on cutting-edge AI & ML projects in manufacturing. Direct impact on business-critical decisions and process optimization. Collaborative work culture with a focus on innovation and growth.

Posted 1 month ago

Apply

2.0 - 7.0 years

8 - 18 Lacs

Pune, Sonipat

Work from Office

About the Role Overview: Newton School of Technology is on a mission to transform technology education and bridge the employability gap. As India's first impact university, we are committed to revolutionizing learning, empowering students, and shaping the future of the tech industry. Backed by renowned professionals and industry leaders, we aim to solve the employability challenge and create a lasting impact on society. We are currently looking for a Data Engineer + Associate Instructor (Data Mining) to join our Computer Science Department. This is a full-time academic role focused on data mining, analytics, and teaching/mentoring students in core data science and engineering topics.

Key Responsibilities: Develop and deliver comprehensive and engaging lectures for the undergraduate "Data Mining", “Big Data”, and “Data Analytics” courses, covering the full syllabus from foundational concepts to advanced techniques. Instruct students on the complete data lifecycle, including data preprocessing, cleaning, transformation, and feature engineering. Teach the theory, implementation, and evaluation of a wide range of algorithms for Classification, Association Rule Mining, Clustering, and Anomaly Detection. Design and facilitate practical lab sessions and assignments that provide students with hands-on experience using modern data tools and software. Develop and grade assessments, including assignments, projects, and examinations, that effectively measure the Course Learning Objectives (CLOs). Mentor and guide students on projects, encouraging them to work with real-world or benchmark datasets (e.g., from Kaggle). Stay current with the latest advancements, research, and industry trends in data engineering and machine learning to ensure the curriculum remains relevant and cutting-edge. Contribute to the academic and research environment of the department and the university.

Required Qualifications: A Ph.D. (or a Master's degree with significant, relevant industry experience) in Computer Science, Data Science, Artificial Intelligence, or a closely related field. Demonstrable expertise in the core concepts of data engineering and machine learning as outlined in the syllabus. Strong practical proficiency in Python and its data science ecosystem, specifically Scikit-learn, Pandas, NumPy, and visualization libraries (e.g., Matplotlib, Seaborn). Proven experience in teaching, preferably at the undergraduate level, with an ability to make complex topics accessible and engaging. Excellent communication and interpersonal skills.

Preferred Qualifications: A strong record of academic publications in reputable data mining, machine learning, or AI conferences/journals. Prior industry experience as a Data Scientist, Big Data Engineer, Machine Learning Engineer, or in a similar role. Experience with big data technologies (e.g., Spark, Hadoop) and/or deep learning frameworks (e.g., TensorFlow, PyTorch). Experience in mentoring student teams for data science competitions or hackathons.

Perks & Benefits: Competitive salary packages aligned with industry standards. Access to state-of-the-art labs and classroom facilities. To know more about us, feel free to explore our website: Newton School of Technology. We look forward to the possibility of having you join our academic team and help shape the future of tech education!

Posted 1 month ago

Apply

3.0 - 4.0 years

4 - 9 Lacs

Hyderābād

On-site

Job Title: Senior Python Developer – Trading Systems & Market Data Experience: 3–4 Years Location: Hyderabad, Telangana (On-site) Employment Type: Full-Time About the Role: We are seeking a Senior Python Developer with 3–4 years of experience and a strong understanding of stock market dynamics, technical indicators, and trading systems. You’ll take ownership of backtesting frameworks, strategy optimization, and developing high-performance, production-ready trading modules. The ideal candidate is someone who can think critically about trading logic, handle edge cases with precision, and write clean, scalable, and testable code. You should be comfortable working in a fast-paced, data-intensive environment where accuracy and speed are key. Key Responsibilities: Design and maintain robust backtesting and live trading frameworks. Build modules for strategy development, simulation, and optimization. Integrate with real-time and historical market data sources (e.g., APIs, databases). Use libraries like Pandas, NumPy, TA-Lib, Matplotlib, SciPy, etc., for data processing and signal generation. Apply statistical methods to validate strategies (mean, regression, correlation, standard deviation, etc.). Optimize code for low-latency execution and memory efficiency. Collaborate with traders and quants to implement and iterate on ideas. Use Git and manage codebases with best practices (unit testing, modular design, etc.). Required Skills & Qualifications: 3–4 years of Python development experience, especially in data-intensive environments. Strong understanding of algorithms, data structures, and performance optimization. Hands-on with technical indicators, trading strategy design, and data visualization. Proficient with Pandas, NumPy, Matplotlib, SciPy, TA-Lib, etc. Strong SQL skills and experience working with structured and time-series data. Exposure to REST APIs, data ingestion pipelines, and message queues (e.g., Kafka, RabbitMQ) is a plus. Experience in version control systems (Git) and collaborative development workflows. Preferred Experience: Hands-on experience with trading platforms or algorithmic trading systems. Familiarity with order management systems (OMS), execution logic, or market microstructure. Prior work with cloud infrastructure (AWS, GCP) or Docker/Kubernetes. Knowledge of machine learning or reinforcement learning in financial contexts is a bonus. What You’ll Get: Opportunity to work on real-world trading systems with measurable impact. A collaborative and fast-paced environment. A role where your ideas directly translate to production and trading performance. Job Type: Full-time Pay: ₹400,000.00 - ₹900,000.00 per year Location Type: In-person Schedule: Day shift Work Location: In person
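As a rough illustration of the backtesting work this role describes, the sketch below builds a simple moving-average crossover signal over synthetic prices and checks it with basic statistics (cumulative return, correlation); it is an assumption-laden toy example, not the firm's actual framework:

```python
# Illustrative moving-average crossover signal and quick vectorised backtest.
# Prices are synthetic; thresholds and window lengths are arbitrary assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))))

fast = prices.rolling(10).mean()
slow = prices.rolling(50).mean()
signal = (fast > slow).astype(int).shift(1).fillna(0)   # trade on the next bar

returns = prices.pct_change().fillna(0)
strategy_returns = signal * returns
print("Strategy cumulative return:", (1 + strategy_returns).prod() - 1)
print("Correlation with buy-and-hold:", strategy_returns.corr(returns))
```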

Posted 1 month ago

Apply

3.0 years

5 - 6 Lacs

Gurgaon

On-site

Gurgaon 4 3+ Years Full Time

We are looking for a technically adept and instructionally strong AI Developer with core expertise in Python, Large Language Models (LLMs), prompt engineering, and vector search frameworks such as FAISS, LlamaIndex, or RAG-based architectures. The ideal candidate combines solid foundations in data science, statistics, and machine learning development with a hands-on understanding of ML DevOps, model selection, and deployment pipelines. 3–4 years of experience in applied machine learning or AI development, including at least 1–2 years working with LLMs, prompt engineering, or vector search systems.

Core Skills Required: Python: Advanced-level expertise in scripting, data manipulation, and model development. LLMs (Large Language Models): Practical experience with GPT, LLaMA, Mistral, or open-source transformer models. Prompt Engineering: Ability to design, optimize, and instruct on prompt patterns for various use cases. Vector Search & RAG: Understanding of feature vectors, nearest neighbor search, and retrieval-augmented generation (RAG) using tools like FAISS, Pinecone, Chroma, or Weaviate. LlamaIndex: Experience building AI applications using LlamaIndex, including indexing documents and building query pipelines. Rack Knowledge: Familiarity with RACK architecture, model placement, and scaling on distributed hardware. ML / ML DevOps: Knowledge of the full ML lifecycle, including feature engineering, model selection, training, and deployment. Data Science & Statistics: Solid grounding in statistical modeling, hypothesis testing, probability, and computing concepts.

Responsibilities: Design and develop AI pipelines using LLMs and traditional ML models. Build, fine-tune, and evaluate large language models for various NLP tasks. Design prompts and RAG-based systems to optimize output relevance and factual grounding. Implement and deploy vector search systems integrated with document knowledge bases. Select appropriate models based on data and business requirements. Perform data wrangling, feature extraction, and model training. Develop training material, internal documentation, and course content (especially around Python and AI development using LlamaIndex). Work with DevOps to deploy AI solutions efficiently using containers, CI/CD, and cloud infrastructure. Collaborate with data scientists and stakeholders to build scalable, interpretable solutions. Maintain awareness of emerging tools and practices in AI and ML ecosystems.

Preferred Tools & Stack: Languages: Python, SQL. ML Frameworks: Scikit-learn, PyTorch, TensorFlow, Hugging Face Transformers. Vector DBs: FAISS, Pinecone, Chroma, Weaviate. RAG Tools: LlamaIndex, LangChain. ML Ops: MLflow, DVC, Docker, Kubernetes, GitHub Actions. Data Tools: Pandas, NumPy, Jupyter. Visualization: Matplotlib, Seaborn, Streamlit. Cloud: AWS/GCP/Azure (S3, Lambda, Vertex AI, SageMaker).

Ideal Candidate: Background in Data Science, Statistics, or Computing. Passionate about emerging AI tech, LLMs, and real-world applications. Demonstrates both hands-on coding skills and teaching/instructional abilities. Capable of building reusable, explainable AI solutions.

Location: Gurgaon, Sector 49.
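To ground the vector-search requirement above, a minimal FAISS nearest-neighbour lookup (the retrieval step underneath a RAG pipeline) could be sketched as follows; the embeddings here are random stand-ins for real document and query vectors, and the dimension is an assumption:

```python
# Illustrative FAISS retrieval sketch: random vectors stand in for real embeddings.
import faiss
import numpy as np

dim = 384  # assumed embedding size
doc_vectors = np.random.default_rng(0).random((1000, dim)).astype("float32")
query = np.random.default_rng(1).random((1, dim)).astype("float32")

index = faiss.IndexFlatL2(dim)   # exact L2 search; swap for IVF/HNSW at scale
index.add(doc_vectors)
distances, ids = index.search(query, 5)   # top-5 nearest documents
print("Top-5 document ids:", ids[0])
```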

Posted 1 month ago

Apply

0 years

10 - 30 Lacs

Sonipat

Remote

Newton School of Technology is on a mission to transform technology education and bridge the employability gap. As India’s first impact university, we are committed to revolutionizing learning, empowering students, and shaping the future of the tech industry. Backed by renowned professionals and industry leaders, we aim to solve the employability challenge and create a lasting impact on society. We are currently looking for a Data Mining Engineer to join our Computer Science Department. This is a full-time academic role focused on data mining, analytics, and teaching/mentoring students in core data science and engineering topics.

Key Responsibilities:
● Develop and deliver comprehensive and engaging lectures for the undergraduate "Data Mining", “Big Data”, and “Data Analytics” courses, covering the full syllabus from foundational concepts to advanced techniques.
● Instruct students on the complete data lifecycle, including data preprocessing, cleaning, transformation, and feature engineering.
● Teach the theory, implementation, and evaluation of a wide range of algorithms for Classification, Association Rule Mining, Clustering, and Anomaly Detection.
● Design and facilitate practical lab sessions and assignments that provide students with hands-on experience using modern data tools and software.
● Develop and grade assessments, including assignments, projects, and examinations, that effectively measure the Course Learning Objectives (CLOs).
● Mentor and guide students on projects, encouraging them to work with real-world or benchmark datasets (e.g., from Kaggle).
● Stay current with the latest advancements, research, and industry trends in data engineering and machine learning to ensure the curriculum remains relevant and cutting-edge.
● Contribute to the academic and research environment of the department and the university.

Required Qualifications:
● A Ph.D. (or a Master's degree with significant, relevant industry experience) in Computer Science, Data Science, Artificial Intelligence, or a closely related field.
● Demonstrable expertise in the core concepts of data engineering and machine learning as outlined in the syllabus.
● Strong practical proficiency in Python and its data science ecosystem, specifically Scikit-learn, Pandas, NumPy, and visualization libraries (e.g., Matplotlib, Seaborn).
● Proven experience in teaching, preferably at the undergraduate level, with an ability to make complex topics accessible and engaging.
● Excellent communication and interpersonal skills.

Preferred Qualifications:
● A strong record of academic publications in reputable data mining, machine learning, or AI conferences/journals.
● Prior industry experience as a Data Scientist, Big Data Engineer, Machine Learning Engineer, or in a similar role.
● Experience with big data technologies (e.g., Spark, Hadoop) and/or deep learning frameworks (e.g., TensorFlow, PyTorch).
● Experience in mentoring student teams for data science competitions or hackathons.

Perks & Benefits:
● Competitive salary packages aligned with industry standards.
● Access to state-of-the-art labs and classroom facilities.
● To know more about us, feel free to explore our website: Newton School of Technology.

We look forward to the possibility of having you join our academic team and help shape the future of tech education!
Job Type: Full-time. Pay: ₹1,000,000.00 - ₹3,000,000.00 per year. Benefits: Food provided, health insurance, leave encashment, paid sick time, paid time off, Provident Fund, work from home. Schedule: Day shift, Monday to Friday. Supplemental Pay: Performance bonus, quarterly bonus, yearly bonus. Application Question(s): Are you interested in a full-time onsite Instructor role? Are you ready to relocate to Sonipat - NCR Delhi? Are you ready to relocate to Pune? Work Location: In person. Expected Start Date: 15/07/2025

Posted 1 month ago

Apply

1.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Title: Bioinformatician. Date: 20 Jun 2025. Job Location: Bangalore.

About Syngene: Syngene (www.syngeneintl.com) is an innovation-led contract research, development and manufacturing organization offering integrated scientific services from early discovery to commercial supply. At Syngene, safety is at the heart of everything we do, personally and professionally. Syngene has placed safety at par with business performance, with shared responsibility and accountability. This includes following safety guidelines, procedures, and SOPs in letter and spirit; overall adherence to safe practices and procedures by oneself and the aligned teams; contributing to the development of procedures, practices, and systems that ensure safe operations and compliance with the company's integrity and quality standards; driving a corporate culture that promotes an environment, health, and safety (EHS) mindset and operational discipline at the workplace at all times; ensuring the safety of self, teams, and lab/plant by adhering to safety protocols and following EHS requirements at all times in the workplace; ensuring all assigned mandatory trainings related to data integrity, health, and safety measures are completed on time by all members of the team, including self; compliance with Syngene's quality standards at all times; holding self and their teams accountable for the achievement of safety goals; and governing and reviewing safety metrics from time to time.

We are seeking a highly skilled and experienced computational biologist to join our team. The ideal candidate will have a proven track record in multi-omics data analysis. They will be responsible for integrative analyses and contributing to the development of novel computational approaches to uncover biological insights. Experience: 1-4 years.

Core Purpose of the Role: To support data-driven biological research by performing computational analysis of omics data and generating translational insights through bioinformatics tools and pipelines.

Position Responsibilities: Conduct comprehensive analyses of multi-omics datasets, including genomics, transcriptomics, proteomics, metabolomics, and epigenomics. Develop computational workflows to integrate various omics data to generate inferences and hypotheses for testing. Conduct differential expression and functional enrichment analyses. Implement and execute data processing workflows and automate the pipelines with best practices for version control, modularization, and documentation. Apply advanced multivariate data analysis techniques, including regression, clustering, and dimensionality reduction, to uncover patterns and relationships in large datasets. Collaborate with researchers, scientists, and other team members to translate computational findings into actionable biological insights.

Educational Qualifications: Master's degree in bioinformatics.

Mandatory Technical Skills: Programming: Proficiency in Python for data analysis, visualization, and pipeline development. Multi-omics analysis: Proven experience in analyzing and integrating multi-omics datasets. Statistics: Knowledge of probability distributions, correlation analysis, and hypothesis testing. Data visualization: Strong understanding of data visualization techniques and tools (e.g., ggplot2, matplotlib, seaborn).

Preferred: Machine learning: Familiarity with AI/ML concepts.

Behavioral Skills: Excellent communication skills, objective thinking, problem solving, proactivity.

Syngene Values: All employees will consistently demonstrate alignment with our core values: Excellence, Integrity, Professionalism.

Equal Opportunity Employer: It is the policy of Syngene to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by applicable legislation or local law. In addition, Syngene will provide reasonable accommodations for qualified individuals with disabilities.
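As an illustration of the multivariate analysis listed above (dimensionality reduction plus clustering), the sketch below runs PCA and k-means on a random stand-in for a samples-by-features omics matrix and plots the result with matplotlib; it is a generic example under stated assumptions, not Syngene's pipeline:

```python
# Illustrative PCA + k-means on a random omics-style matrix (not a real dataset).
import matplotlib.pyplot as plt
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

expression = np.random.default_rng(0).normal(size=(60, 500))  # 60 samples x 500 features

scaled = StandardScaler().fit_transform(expression)
components = PCA(n_components=2).fit_transform(scaled)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(components)

plt.scatter(components[:, 0], components[:, 1], c=labels)
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.title("Samples in PCA space, coloured by cluster")
plt.show()
```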

Posted 1 month ago

Apply

2.0 - 3.0 years

0 Lacs

Greater Chennai Area

On-site

Roles & Responsibilities:
Impart training and monitor the student life cycle to ensure standard outcomes.
Conduct live in-person/virtual classes to train learners on Advanced Excel, Power BI, Python and advanced Python libraries such as NumPy, Matplotlib, Pandas, Seaborn, and SciPy, along with SQL/MySQL, data analysis, and basic statistics.
Facilitate and support learners' progress to deliver a personalized blended learning experience and achieve the desired skill outcomes.
Evaluate and grade learners' project reports, project presentations, and other documents.
Mentor learners during support, project, and assessment sessions.
Develop, validate, and implement learning content, curricula, and training programs wherever applicable.
Liaise with and support the respective teams on schedule planning, learner progress, academic evaluation, learning management, etc.

Desired Profile:
2-3 years of technical training experience in a corporate or ed-tech institute (college lecturer and school teacher profiles will not be considered).
Must be proficient in Advanced Excel, Power BI, Python and advanced Python libraries such as NumPy, Matplotlib, Pandas, SciPy, and Seaborn, along with SQL/MySQL, data analysis, and basic statistics.
Experience in training in data analysis; should have worked as a Data Analyst.
Must have good analytical and problem-solving skills.
Must have good communication and delivery skills.
Good knowledge of databases (SQL, MySQL).
Additional Advantage: Knowledge of Flask, Core Java.
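As an illustration of the kind of hands-on demo this training role involves, here is a minimal Pandas/Seaborn example of the sort an instructor might walk learners through; the sales figures are made up for the example and the output filename is arbitrary.

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Tiny, made-up dataset for a classroom demo on groupby aggregation and plotting.
df = pd.DataFrame({
    "region": ["North", "South", "North", "East", "South", "East"],
    "sales": [120, 95, 140, 80, 110, 100],
})

summary = df.groupby("region", as_index=False)["sales"].mean()  # average sales per region
print(summary)

sns.barplot(data=summary, x="region", y="sales")
plt.title("Average sales by region (demo data)")
plt.tight_layout()
plt.savefig("demo_plot.png")  # save to file so the script also runs without a display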

Posted 1 month ago

Apply

0 years

0 Lacs

India

Remote

Data Science Intern
Company: INLIGHN TECH
Location: Remote (100% Virtual)
Duration: 3 Months
Stipend for Top Interns: ₹15,000
Certificate Provided | Letter of Recommendation | Full-Time Offer Based on Performance

About the Company: INLIGHN TECH empowers students and fresh graduates with real-world experience through hands-on, project-driven internships. The Data Science Internship is designed to equip you with the skills required to extract insights, build predictive models, and solve complex problems using data.

Role Overview: As a Data Science Intern, you will work on real-world datasets to develop machine learning models, perform data wrangling, and generate actionable insights. This internship will help you strengthen your technical foundation in data science while working on projects that have a tangible business impact.

Key Responsibilities:
Collect, clean, and preprocess data from various sources
Apply statistical methods and machine learning techniques to extract insights
Build and evaluate predictive models for classification, regression, or clustering tasks
Visualize data using libraries like Matplotlib and Seaborn, or tools like Power BI
Document findings and present results to stakeholders in a clear and concise manner
Collaborate with team members on data-driven projects and innovations

Qualifications:
Pursuing or recently completed a degree in Data Science, Computer Science, Mathematics, or a related field
Proficiency in Python and data science libraries (NumPy, Pandas, Scikit-learn, etc.)
Understanding of statistical analysis and machine learning algorithms
Familiarity with SQL and data visualization tools or libraries
Strong analytical, problem-solving, and critical thinking skills
Eagerness to learn and apply data science techniques to solve real-world problems

Internship Benefits:
Hands-on experience with real datasets and end-to-end data science projects
Certificate of Internship upon successful completion
Letter of Recommendation for top performers
Build a strong portfolio of data science projects and models
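To give a flavor of the "build and evaluate predictive models" responsibility listed above, here is a minimal, generic scikit-learn sketch; the bundled iris dataset and the random-forest model are stand-ins for illustration, not requirements of the internship.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, classification_report

# Load a small bundled dataset as a stand-in for a real-world one.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

preds = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, preds))
print(classification_report(y_test, preds))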

Posted 1 month ago

Apply

0.0 - 4.0 years

4 - 9 Lacs

Hyderabad, Telangana

On-site

Job Title: Senior Python Developer – Trading Systems & Market Data
Experience: 3–4 Years
Location: Hyderabad, Telangana (On-site)
Employment Type: Full-Time

About the Role: We are seeking a Senior Python Developer with 3–4 years of experience and a strong understanding of stock market dynamics, technical indicators, and trading systems. You'll take ownership of backtesting frameworks, strategy optimization, and developing high-performance, production-ready trading modules. The ideal candidate is someone who can think critically about trading logic, handle edge cases with precision, and write clean, scalable, and testable code. You should be comfortable working in a fast-paced, data-intensive environment where accuracy and speed are key.

Key Responsibilities:
Design and maintain robust backtesting and live trading frameworks.
Build modules for strategy development, simulation, and optimization.
Integrate with real-time and historical market data sources (e.g., APIs, databases).
Use libraries like Pandas, NumPy, TA-Lib, Matplotlib, SciPy, etc., for data processing and signal generation.
Apply statistical methods to validate strategies (mean, regression, correlation, standard deviation, etc.).
Optimize code for low-latency execution and memory efficiency.
Collaborate with traders and quants to implement and iterate on ideas.
Use Git and manage codebases with best practices (unit testing, modular design, etc.).

Required Skills & Qualifications:
3–4 years of Python development experience, especially in data-intensive environments.
Strong understanding of algorithms, data structures, and performance optimization.
Hands-on with technical indicators, trading strategy design, and data visualization.
Proficient with Pandas, NumPy, Matplotlib, SciPy, TA-Lib, etc.
Strong SQL skills and experience working with structured and time-series data.
Exposure to REST APIs, data ingestion pipelines, and message queues (e.g., Kafka, RabbitMQ) is a plus.
Experience with version control systems (Git) and collaborative development workflows.

Preferred Experience:
Hands-on experience with trading platforms or algorithmic trading systems.
Familiarity with order management systems (OMS), execution logic, or market microstructure.
Prior work with cloud infrastructure (AWS, GCP) or Docker/Kubernetes.
Knowledge of machine learning or reinforcement learning in financial contexts is a bonus.

What You'll Get:
Opportunity to work on real-world trading systems with measurable impact.
A collaborative and fast-paced environment.
A role where your ideas directly translate to production and trading performance.

Job Type: Full-time
Pay: ₹400,000.00 - ₹900,000.00 per year
Location Type: In-person
Schedule: Day shift
Work Location: In person
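To make the backtesting and signal-generation responsibilities above concrete, here is a minimal, hypothetical sketch of a moving-average crossover backtest in Pandas/NumPy with simple summary statistics; the synthetic price series and the 10/50-day windows are illustrative assumptions, not the company's actual framework or strategy.

import numpy as np
import pandas as pd

# Synthetic daily close prices (geometric random walk) standing in for market data.
rng = np.random.default_rng(1)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))), name="close")

fast = prices.rolling(10).mean()
slow = prices.rolling(50).mean()
signal = (fast > slow).astype(float).shift(1).fillna(0.0)  # act on the next bar to avoid lookahead

daily_ret = prices.pct_change().fillna(0.0)
strat_ret = signal * daily_ret

ann = np.sqrt(252)  # rough annualization factor for daily data
print("mean daily return:", strat_ret.mean())
print("annualized volatility:", strat_ret.std() * ann)
print("sharpe-like ratio:", strat_ret.mean() / strat_ret.std() * ann)
print("signal vs. next-bar return correlation:", signal.corr(daily_ret))

A production framework would add transaction costs, position sizing, and rigorous data handling, but the shape of the computation is similar.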

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies