Jobs
Interviews

1558 Matplotlib Jobs - Page 30

Set up a Job Alert
JobPe aggregates results for easy access, but you apply directly on the job portal.

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Senior ML Engineer
Experience: minimum 4 to 8+ years in ML development at a product-based company
Location: Bangalore (On-site)

Why should you choose us? Rakuten Symphony is a Rakuten Group company that provides global B2B services for the mobile telco industry and enables next-generation, cloud-based, international mobile services. Building on the technology Rakuten used to launch Japan’s newest mobile network, we are taking our mobile offering global. To support our ambition to provide an innovative cloud-native telco platform for our customers, Rakuten Symphony is looking to recruit and develop top talent from around the globe. We are looking for individuals to join our team across all functional areas of our business, from sales to engineering, support functions to product development. Let’s build the future of mobile telecommunications together!

Required Skills and Expertise:
- Must have experience working at a product-based company.
- Build, train, and optimize deep learning models with TensorFlow, Keras, PyTorch, and Transformers.
- Manipulate and analyze large-scale datasets using Python, Pandas, NumPy, and Dask.
- Apply advanced fine-tuning techniques (full fine-tuning, PEFT) and strategies to large language and vision models.
- Implement and evaluate classical machine learning algorithms using scikit-learn, statsmodels, XGBoost, etc.
- Develop and deploy scalable APIs for ML models using FastAPI.
- Perform data visualization and exploratory data analysis with Matplotlib, Seaborn, Plotly, and Bokeh.
- Collaborate with cross-functional teams to deliver end-to-end ML solutions.
- Deploy machine learning models for diverse business applications, both cloud-native and on-premise.
- Hands-on experience with Docker for containerization and Kubernetes for orchestration and scalable deployment of ML models.
- Familiarity with CI/CD pipelines and best practices for deploying and monitoring ML models in production.
Stay current with the latest advancements in machine learning, deep learning, and AI.

Our commitment to you: Rakuten Group’s mission is to contribute to society by creating value through innovation and entrepreneurship. By providing high-quality services that help our users and partners grow, we aim to advance and enrich society. To fulfill our role as a Global Innovation Company, we are committed to maximizing both corporate and shareholder value.

RAKUTEN SHUGI PRINCIPLES: Our worldwide practices describe specific behaviours that make Rakuten unique and united across the world. We expect Rakuten employees to model these five Shugi Principles of Success:
- Always improve, always advance: only be satisfied with complete success (Kaizen).
- Be passionately professional: take an uncompromising approach to your work and be determined to be the best.
- Hypothesize, Practice, Validate (Shikumika): use the Rakuten Cycle to succeed in unknown territory.
- Maximize customer satisfaction: the greatest satisfaction for workers in a service industry is to see their customers smile.
- Speed!! Speed!! Speed!!: always be conscious of time. Take charge, set clear goals, and engage your team.
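The "build, train, and evaluate classical ML models" workflow this posting describes can be sketched in miniature. This is a pure-Python nearest-centroid classifier for illustration only; the role itself calls for scikit-learn or TensorFlow, and the toy data and label names below are invented.

```python
# Minimal sketch of a classical train/predict loop: compute per-class feature
# centroids at training time, then predict the class of the nearest centroid.

def train(samples):
    """samples: list of (features, label). Returns per-class centroids."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc] for label, acc in sums.items()}

def predict(centroids, features):
    def dist2(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(centroids, key=lambda label: dist2(centroids[label]))

train_data = [([0.0, 0.1], "benign"), ([0.2, 0.0], "benign"),
              ([1.0, 0.9], "spam"), ([0.8, 1.1], "spam")]
model = train(train_data)
print(predict(model, [0.1, 0.0]))  # nearest to the "benign" centroid
print(predict(model, [0.9, 1.0]))  # nearest to the "spam" centroid
```

In production the same shape survives: a fit step that persists model state, and a predict step small enough to sit behind a FastAPI endpoint.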

Posted 1 month ago

Apply

0.0 - 3.0 years

0 Lacs

BTM Layout, Bengaluru, Karnataka

On-site

Job Title: Python Developer – Machine Learning & AI (2–3 Years Experience)

Job Summary: We are seeking a skilled and motivated Python Developer with 2 to 3 years of experience in Machine Learning and Artificial Intelligence. The ideal candidate will have hands-on experience in developing, training, and deploying machine learning models, and should be proficient in Python and associated data science libraries. You will work with our data science and engineering teams to build intelligent solutions that solve real-world problems.

Key Responsibilities:
- Develop and maintain machine learning models using Python.
- Work on AI-driven applications, including predictive modeling, natural language processing, and computer vision (based on project requirements).
- Collaborate with cross-functional teams to understand business requirements and translate them into ML solutions.
- Preprocess, clean, and transform data for training and evaluation.
- Perform model training, tuning, evaluation, and deployment using tools like scikit-learn, TensorFlow, or PyTorch.
- Write modular, efficient, and testable code.
- Document processes, models, and experiments clearly for team use and future reference.
- Stay updated with the latest trends and advancements in AI and machine learning.

Required Skills:
- 2–3 years of hands-on experience with Python programming.
- Solid understanding of machine learning algorithms (supervised, unsupervised, and reinforcement learning).
- Experience with libraries such as scikit-learn, pandas, NumPy, Matplotlib, and Seaborn.
- Exposure to deep learning frameworks like TensorFlow, Keras, or PyTorch.
- Good understanding of data structures and algorithms.
- Experience with model evaluation techniques and performance metrics.
- Familiarity with Jupyter Notebooks, version control (Git), and cloud platforms (AWS, GCP, or Azure) is a plus.
- Strong analytical and problem-solving skills.
Preferred Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Science, Mathematics, or a related field.
- Experience with deploying ML models using Flask, FastAPI, or Docker.
- Knowledge of MLOps and model lifecycle management is an advantage.
- Understanding of NLP or Computer Vision is a plus.

Job Type: Full-time
Pay: Up to ₹700,000.00 per year
Benefits: Health insurance
Schedule: Day shift, Monday to Friday
Ability to commute/relocate: BTM Layout, Bengaluru, Karnataka: reliably commute or plan to relocate before starting work (Required)

Application Question(s):
- Solid understanding of machine learning algorithms (supervised, unsupervised, and reinforcement learning)?
- Experience with libraries such as scikit-learn, pandas, NumPy, Matplotlib, and Seaborn?
- Exposure to deep learning frameworks like TensorFlow, Keras, or PyTorch?
- Familiarity with Jupyter Notebooks, version control (Git), and cloud platforms (AWS, GCP, or Azure)?
- Experience with deploying ML models using Flask, FastAPI, or Docker?
- What is your current CTC (in LPA)?
- What is your expected CTC (in LPA)?
- What is your notice period?

Location: BTM Layout, Bengaluru, Karnataka (Required)
Work Location: In person
Application Deadline: 06/07/2025
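The "model evaluation techniques and performance metrics" requirement above amounts to knowing how precision, recall, and F1 fall out of the confusion matrix. A hand-rolled sketch (the labels are invented; in practice scikit-learn's `metrics` module does this):

```python
# Precision, recall, and F1 for a binary classifier, computed from
# true-positive / false-positive / false-negative counts.

def binary_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = [1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
p, r, f = binary_metrics(y_true, y_pred)
print(round(p, 2), round(r, 2), round(f, 2))
```

The guards against zero denominators matter in interviews and in production alike: a model that never predicts the positive class would otherwise divide by zero.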

Posted 1 month ago

Apply

5.0 years

0 Lacs

India

Remote

About Company Papigen is a fast-growing global technology services company, delivering innovative digital solutions through deep industry experience and cutting-edge expertise. We specialize in technology transformation, enterprise modernization, and dynamic areas like Cloud, Big Data, Java, React, DevOps, and more. Our client-centric approach combines consulting, engineering, and data science to help businesses evolve and scale efficiently. About The Role We are looking for a Python Full Stack Developer with strong Azure DevOps and AI integration expertise to support the automation of Kanban workflows and real-time analytics in a scaled agile environment. You will design end-to-end automation for case management, build performance dashboards, and integrate AI-powered solutions using Azure OpenAI, Dataverse, and Power BI. The role requires a deep understanding of Python development, experience with Azure services, and the ability to collaborate with cross-functional teams to deliver high-quality solutions. 
Key Responsibilities:
- Develop Python applications to automate Kanban case management integrated with Azure DevOps (ADO)
- Build and maintain REST APIs with access control for project and workload metrics
- Integrate Azure OpenAI services to automate delay analysis and generate custom summaries
- Design interactive dashboards using Python libraries (Pandas, Plotly, Dash) and Power BI
- Store, manage, and query data using Dataverse for workflow reporting and updates
- Leverage Microsoft Graph API and Azure SDKs for system integration and access control
- Collaborate with IT security, PMs, and engineering teams to gather requirements and deliver automation solutions
- Continuously improve security workflows, report generation, and system insights using AI and data modeling

Required Skills & Experience:
- 5+ years of Python development experience with FastAPI or Flask
- Hands-on experience with Azure DevOps, including its REST APIs
- Proficiency in Azure OpenAI, Azure SDKs, and Microsoft Graph API
- Strong understanding of RBAC (Role-Based Access Control) and permissions management
- Experience with Power BI, Dataverse, and Python data visualization libraries (Matplotlib, Plotly, Dash)
- Prior experience in Agile teams and familiarity with Scrum/Kanban workflows
- Excellent communication and documentation skills; able to explain technical concepts to stakeholders

Benefits and Perks:
- Opportunity to work with leading global clients
- Flexible work arrangements with remote options
- Exposure to modern technology stacks and tools
- Supportive and collaborative team environment
- Continuous learning and career development opportunities
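The RBAC requirement above reduces to a mapping from roles to permission sets consulted before each operation. A minimal sketch, assuming invented role and permission names; a real deployment would source roles from Azure AD and enforce the check in a FastAPI dependency:

```python
# Toy role-based access control: a user is allowed an action if any of
# their roles grants the required permission.

ROLE_PERMISSIONS = {
    "manager":  {"view_reports", "view_cases"},
    "engineer": {"view_cases", "update_cases"},
    "admin":    {"view_reports", "view_cases", "update_cases", "manage_users"},
}

def is_allowed(roles, permission):
    """True if any of the user's roles grants the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in roles)

print(is_allowed(["manager"], "view_reports"))   # True
print(is_allowed(["engineer"], "manage_users"))  # False
```

Keeping permissions in data rather than scattered `if role == ...` checks is what makes the model auditable, which is the point of RBAC in a security-reporting system.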

Posted 1 month ago

Apply

5.0 years

0 Lacs

India

Remote

For an international project in Chennai, we are urgently looking for a full-remote Python Full Stack Developer. We are looking for a motivated contractor. Candidates need to be fluent in English.

Tasks and responsibilities:
- Write Python programs for an automated ADO Kanban case management solution, dashboards, and reports;
- Develop and integrate REST APIs with access control to provide case status and reports specific to LOBs, managers, etc.;
- Utilize Azure OpenAI frameworks to enhance delay analysis, the vulnerability dashboard, and reporting;
- Build dashboards using Python libraries (e.g., Pandas, Matplotlib, Plotly) to track case status from Kanban boards, delay per project/LOB, etc., and use Dataverse and Power BI for data modelling and reporting;
- Collaboration and support: work closely with project managers, IT security staff, and system administrators to gather requirements, understand business needs, and develop solutions that improve security processes;
- Continuously evaluate and improve the Kanban case management solution, leveraging new technologies and techniques, particularly AI and automation, to improve efficiency and effectiveness.

Profile:
- Bachelor's or Master's degree;
- 5+ years of hands-on experience with Python, particularly with frameworks like FastAPI and Flask, and experience using Azure OpenAI frameworks;
- Strong understanding of access control models such as Role-Based Access Control (RBAC);
- Expertise working in Azure DevOps and its REST APIs for customizing it;
- Proficiency with Azure cloud services and the Microsoft Graph API, and experience integrating Python applications;
- Experience with Dataverse, Power BI, and reporting libraries in Python (Pandas, Matplotlib, Plotly, Dash) to build dashboards and reports;
- Ability to collaborate with various stakeholders, explain complex technical solutions, and deliver high-quality solutions on time;
- Experience working in Agile environments and familiarity with Scrum and Kanban methodologies;
- Fluent in English.
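The "track case status per project/LOB" dashboard task above starts with an aggregation step before any charting happens. A stdlib-only sketch (the case records are invented; a real pipeline would pull work items from the Azure DevOps REST API and render with Plotly or Power BI):

```python
# Aggregate Kanban cases per line of business: count blocked cases and
# total delay days, the raw numbers a delay dashboard would plot.

from collections import defaultdict

cases = [
    {"lob": "Payments", "status": "Blocked", "delay_days": 4},
    {"lob": "Payments", "status": "Done",    "delay_days": 0},
    {"lob": "Lending",  "status": "Blocked", "delay_days": 7},
    {"lob": "Lending",  "status": "Blocked", "delay_days": 3},
]

summary = defaultdict(lambda: {"blocked": 0, "total_delay": 0})
for case in cases:
    row = summary[case["lob"]]
    if case["status"] == "Blocked":
        row["blocked"] += 1
    row["total_delay"] += case["delay_days"]

for lob, row in sorted(summary.items()):
    print(lob, row["blocked"], row["total_delay"])
```

Once the per-LOB summary exists as plain dicts, feeding it to a Pandas DataFrame or a Plotly bar chart is a one-liner, which is why the aggregation and presentation layers are worth keeping separate.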

Posted 1 month ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Security represents the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end to end, simplified solutions. The Microsoft Security organization accelerates Microsoft’s mission and bold ambitions to ensure that our company and industry is securing digital technology platforms, devices, and clouds in our customers’ heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world. The Defender Experts (DEX) Research team is at the forefront of Microsoft’s threat protection strategy, combining world-class hunting expertise with AI-driven analytics to protect customers from advanced cyberattacks. Our mission is to move protection left—disrupting threats early, before damage occurs—by transforming raw signals into intelligence that powers detection, disruption, and customer trust. We’re looking for a passionate and curious Data Scientist to join this high-impact team. In this role, you'll partner with researchers, hunters, and detection engineers to explore attacker behavior, operationalize entity graphs, and develop statistical and ML-driven models that enhance DEX’s detection efficacy. Your work will directly feed into real-time protections used by thousands of enterprises and shape the future of Microsoft Security. This is an opportunity to work on problems that matter—with cutting-edge data, a highly collaborative team, and the scale of Microsoft behind you. 
Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. Responsibilities Understand complex cybersecurity and business problems, translate them into well-defined data science problems, and build scalable solutions. Design and build robust, large-scale graph structures to model security entities, behaviors, and relationships. Develop and deploy scalable, production-grade AI/ML systems and intelligent agents for real-time threat detection, classification, and response. Collaborate closely with Security Research teams to integrate domain knowledge into data science workflows and enrich model development. Drive end-to-end ML lifecycle: from data ingestion and feature engineering to model development, evaluation, and deployment. Work with large-scale graph data: create, query, and process it efficiently to extract insights and power models. Lead initiatives involving Graph ML, Generative AI, and agent-based systems, driving innovation across threat detection, risk propagation, and incident response. Collaborate closely with engineering and product teams to integrate solutions into production platforms. Mentor junior team members and contribute to strategic decisions around model architecture, evaluation, and deployment. 
Qualifications Bachelor’s or Master’s degree in Computer Science, Statistics, Applied Mathematics, Data Science, or a related quantitative field 6+ years of experience applying data science or machine learning in a real-world setting, preferably in security, fraud, risk, or anomaly detection Proficiency in Python and/or R, with hands-on experience in data manipulation (e.g., Pandas, NumPy), modeling (e.g., scikit-learn, XGBoost), and visualization (e.g., matplotlib, seaborn) Strong foundation in statistics, probability, and applied machine learning techniques Experience working with large-scale datasets, telemetry, or graph-structured data Ability to clearly communicate technical insights and influence cross-disciplinary teams Demonstrated ability to work independently, take ownership of problems, and drive solutions end-to-end Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
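The "operationalize entity graphs" and "graph-structured data" work described above often begins with reachability questions: given a compromised entity, what else can be touched? A toy sketch with invented entities and edges; production graphs at this scale live in dedicated graph stores, not Python dicts:

```python
# A security entity graph as an adjacency dict, with breadth-first search
# to find every entity reachable from a compromised account.

from collections import deque

edges = {
    "user:alice": ["device:laptop-1", "mailbox:alice"],
    "device:laptop-1": ["process:payload.exe"],
    "process:payload.exe": ["ip:203.0.113.7"],
    "mailbox:alice": [],
    "ip:203.0.113.7": [],
}

def reachable(graph, start):
    """Return the set of nodes reachable from start, including start."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

print(sorted(reachable(edges, "user:alice")))
```

The same traversal, weighted by edge risk scores, is the skeleton of the risk-propagation models the role mentions.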

Posted 1 month ago

Apply

1.0 - 3.0 years

10 - 20 Lacs

Chandigarh

Work from Office

About the Role: We are seeking a highly motivated AI/ML Engineer with 1–3 years of experience to join a fast-growing product-based team in Chandigarh. You will be part of a talented group working on building AI-powered solutions, predictive models, and intelligent automation tools for real-world business applications.

Key Responsibilities:
- Design and implement machine learning models and algorithms
- Work with data teams to preprocess and clean datasets
- Train, evaluate, and fine-tune models for classification, regression, and clustering problems
- Collaborate with product and engineering teams to integrate ML models into production
- Conduct research on state-of-the-art AI/ML trends and apply best practices
- Document model performance, experiments, and key metrics

Required Skills:
- 1–3 years of hands-on experience in AI/ML model development
- Proficient in Python and ML libraries (e.g., scikit-learn, TensorFlow, Keras, PyTorch)
- Solid understanding of data structures, algorithms, and ML concepts
- Experience with Pandas, NumPy, Matplotlib, etc.
- Good understanding of model evaluation, overfitting/underfitting, and cross-validation
- Strong problem-solving and communication skills

Good to Have:
- Experience with NLP, Computer Vision, or Deep Learning
- Exposure to MLOps or cloud platforms (AWS, GCP, Azure)
- Familiarity with Flask or FastAPI for deploying ML models
- Version control tools like Git

Why Join Us?
- Work on real-world AI applications in a fast-paced environment
- Collaborate with a tech-driven and passionate team
- Excellent growth opportunities and career progression
- Competitive compensation and flexible work culture
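The cross-validation item in the skill list above is mechanical once you see the fold bookkeeping: every sample lands in exactly one validation fold and in the training set of all the others. A pure-Python sketch of the index splitting (scikit-learn's `KFold` is the practical tool; the sample count here is illustrative):

```python
# Generate k-fold cross-validation index splits: each of the k folds
# serves once as the validation set while the rest form the training set.

def k_fold_indices(n_samples, k):
    """Yield (train_indices, val_indices) pairs for k folds."""
    indices = list(range(n_samples))
    fold_size, remainder = divmod(n_samples, k)
    start = 0
    for fold in range(k):
        size = fold_size + (1 if fold < remainder else 0)
        val = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, val
        start += size

for train, val in k_fold_indices(10, 3):
    print(len(train), val)
```

Averaging a metric over the k validation folds is what exposes overfitting: a model that memorizes its training fold scores well in-sample but poorly on the held-out fold.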

Posted 1 month ago

Apply

0 years

0 Lacs

India

Remote

Location: Remote (Work from Home)
Duration: 3–6 Months
Stipend: Performance-based, up to ₹15,000
Company: Zeno Talent
Department: Artificial Intelligence & Data Science
PPO Opportunity: Yes – high-performing interns will be offered a Pre-Placement Offer (PPO) for a full-time role

About Zeno Talent: Zeno Talent is a dynamic IT services and consulting company that delivers advanced technology solutions across domains like Data Science, Artificial Intelligence, ERP, and IT Consulting. Our mission is to connect talent with opportunity while solving real-world business problems using cutting-edge technologies. We value innovation, learning, and professional growth.

Job Description: We are seeking a passionate and motivated AI Intern (Remote) to join our Artificial Intelligence & Data Science team. You will work on real-time AI/ML projects, gaining hands-on experience and professional mentorship. This internship is ideal for someone looking to launch their career in AI and grow within a supportive, fast-paced environment. Outstanding interns will receive a Pre-Placement Offer (PPO) for a full-time role at Zeno Talent.
Key Responsibilities: Assist in building, training, and fine-tuning machine learning models Clean, preprocess, and analyze datasets from real-world applications Support development of AI solutions using Python and relevant libraries Collaborate with mentors and team members to contribute to live projects Document technical work and report progress regularly Research and stay updated on new AI trends and tools Eligibility & Skills: Currently pursuing or recently completed a degree in Computer Science, Data Science, AI, or related field Solid foundation in Python and libraries like NumPy, Pandas, Scikit-learn Basic understanding of machine learning algorithms Familiarity with data visualization tools (e.g., Matplotlib, Seaborn) Strong problem-solving and analytical skills Willingness to learn, adapt, and take initiative in a remote team environment Bonus (Good to Have): Experience with Git and GitHub Exposure to NLP, deep learning, or computer vision Participation in AI projects, competitions, or hackathons What You’ll Gain: Real-world experience working on live AI projects One-on-one mentorship from experienced professionals Letter of Recommendation & Internship Certificate PPO (Pre-Placement Offer) opportunity for top performers Career guidance and resume/project review sessions
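The "clean, preprocess, and analyze datasets" responsibility above usually means mean-imputing missing values and rescaling features before any model sees them. A pure-Python sketch for one column (the numbers are invented; Pandas and scikit-learn preprocessing do this at scale):

```python
# Impute missing values (None) with the column mean, then min-max
# scale the column to the [0, 1] range.

def clean_column(values):
    """Replace None with the mean of present values, then scale to [0, 1]."""
    present = [v for v in values if v is not None]
    mean = sum(present) / len(present)
    filled = [mean if v is None else v for v in values]
    lo, hi = min(filled), max(filled)
    if hi == lo:
        return [0.0] * len(filled)  # constant column carries no signal
    return [(v - lo) / (hi - lo) for v in filled]

raw = [10.0, None, 30.0, 20.0]
print(clean_column(raw))  # the None becomes the mean 20.0, then all scale to [0, 1]
```

One caveat worth learning early: in a real pipeline the mean and min/max must be computed on the training split only and reused on the validation split, or the evaluation leaks information.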

Posted 1 month ago

Apply

10.0 - 15.0 years

30 - 35 Lacs

Hyderabad

Work from Office

Define, Design, and Build an optimal data pipeline architecture to collect data from a variety of sources, cleanse, and organize data in SQL & NoSQL destinations (ELT & ETL Processes). Define and Build business use case-specific data models that can be consumed by Data Scientists and Data Analysts to conduct discovery and drive business insights and patterns. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies. Build and deploy analytical models and tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics. Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs. Define, Design, and Build Executive dashboards and reports catalogs to serve decision-making and insight generation needs. Provide inputs to help keep data separated and secure across data centers on-prem and private and public cloud environments. Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader. Work with data and analytics experts to strive for greater functionality in our data systems. Implement scheduled data load process and maintain and manage the data pipelines. Troubleshoot, investigate, and fix failed data pipelines and prepare RCA. 
Experience with a mix of the following Data Engineering technologies:
- Python, Spark, Snowflake, Databricks, Hadoop (CDH), Hive, Sqoop, Oozie
- SQL: Postgres, MySQL, MS SQL Server
- Azure: ADF, Synapse Analytics, SQL Server, ADLS Gen2
- AWS: Redshift, EMR cluster, S3

Experience with a mix of the following Data Analytics and Visualization toolsets:
- SQL, Power BI, Tableau, Looker, Python, R
- Python libraries: Pandas, scikit-learn, Seaborn, Matplotlib, TensorFlow, statsmodels, PySpark, Spark SQL
- R, SAS, Julia, SPSS
- Azure: Synapse Analytics, Azure ML Studio, Azure AutoML
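The extract-cleanse-load cycle this posting describes can be shown end to end in miniature. The sketch below uses the stdlib `sqlite3` module as a stand-in for the real SQL destinations (Snowflake, Postgres, etc.); the source rows and table schema are invented:

```python
# Tiny ETL sketch: extract raw rows, transform (trim, type-coerce,
# drop invalid rows), load into a SQL destination.

import sqlite3

# Extract: raw rows from an upstream source (hard-coded here).
raw_rows = [("  Alice ", "42"), ("Bob", "17"), ("  Carol", "nope")]

# Transform: trim names, coerce ages to int, drop rows that fail validation.
clean_rows = []
for name, age in raw_rows:
    try:
        clean_rows.append((name.strip(), int(age)))
    except ValueError:
        continue  # a real pipeline would route this to a reject table for RCA

# Load into the destination table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)", clean_rows)
count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)
```

The reject-row branch is where the posting's "troubleshoot failed pipelines and prepare RCA" duty lives: keeping the rows that failed validation, rather than silently dropping them, is what makes root-cause analysis possible.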

Posted 1 month ago

Apply

1.0 - 2.0 years

3 - 6 Lacs

Hyderabad

Work from Office

Define, Design, and Build an optimal data pipeline architecture to collect data from a variety of sources, cleanse, and organize data in SQL & NoSQL destinations (ELT & ETL Processes). Define and Build business use case-specific data models that can be consumed by Data Scientists and Data Analysts to conduct discovery and drive business insights and patterns. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies. Build and deploy analytical models and tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics. Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs. Define, Design, and Build Executive dashboards and reports catalogs to serve decision-making and insight generation needs. Provide inputs to help keep data separated and secure across data centers - on-prem and private and public cloud environments. Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader. Work with data and analytics experts to strive for greater functionality in our data systems. Implement scheduled data load process and maintain and manage the data pipelines. Troubleshoot, investigate, and fix failed data pipelines and prepare RCA. 
Experience with a mix of the following Data Engineering technologies:
- Python, Spark, Snowflake, Databricks, Hadoop (CDH), Hive, Sqoop, Oozie
- SQL: Postgres, MySQL, MS SQL Server
- Azure: ADF, Synapse Analytics, SQL Server, ADLS Gen2
- AWS: Redshift, EMR cluster, S3

Experience with a mix of the following Data Analytics and Visualization toolsets:
- SQL, Power BI, Tableau, Looker, Python, R
- Python libraries: Pandas, scikit-learn, Seaborn, Matplotlib, TensorFlow, statsmodels, PySpark, Spark SQL
- R, SAS, Julia, SPSS
- Azure: Synapse Analytics, Azure ML Studio, Azure AutoML

Posted 1 month ago

Apply

15.0 - 20.0 years

9 - 14 Lacs

Bengaluru

Work from Office

Project Role: AI/ML Engineer
Project Role Description: Develops applications and systems that utilize AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. Able to apply GenAI models as part of the solution; may also include, but is not limited to, deep learning, neural networks, chatbots, and image processing.
Must-have skills: Large Language Models
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As an AI/ML Engineer, you will engage in the development of applications and systems that leverage artificial intelligence tools and cloud AI services. Your typical day will involve designing and implementing production-ready solutions, ensuring that they meet quality standards. You will work with various AI models, including generative AI, deep learning, and neural networks, while also exploring innovative applications such as chatbots and image processing. Collaboration with cross-functional teams will be essential to integrate these advanced technologies into effective solutions that address real-world challenges.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve existing processes and systems to optimize performance.

Professional & Technical Skills:
- Must-have skills: proficiency in Large Language Models.
- Good-to-have skills: experience with cloud-based AI services.
- Strong understanding of deep learning frameworks such as TensorFlow or PyTorch.
- Familiarity with natural language processing techniques and tools.
- Experience in developing and deploying chatbots and conversational agents.
Additional Information:
- The candidate should have a minimum of 5 years of experience in Large Language Models.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities: As an Associate Data Scientist at IBM, you will work to solve business problems using leading-edge and open-source tools such as Python, R, and TensorFlow, combined with IBM tools and our AI application suites. You will prepare, analyze, and understand data to deliver insight, predict emerging trends, and provide recommendations to stakeholders.

In your role, you may be responsible for:
- Implementing and validating predictive and prescriptive models, and creating and maintaining statistical models with a focus on big data, incorporating machine learning techniques in your projects
- Writing programs to cleanse and integrate data in an efficient and reusable manner
- Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors
- Communicating with internal and external clients to understand and define business needs and appropriate modelling techniques to provide analytical solutions
- Evaluating modelling results and communicating the results to technical and non-technical audiences

Preferred Education: Master's Degree

Required Technical and Professional Expertise: Proof of Concept (POC) development: develop POCs to validate and showcase the feasibility and effectiveness of the proposed AI solutions. Collaborate with development teams to implement and iterate on POCs, ensuring alignment with customer requirements and expectations.
Help in showcasing the ability of a GenAI code assistant to refactor/rewrite and document code from one language to another, particularly COBOL to Java, through rapid prototypes/POCs. Document solution architectures, design decisions, implementation details, and lessons learned. Create technical documentation, white papers, and best-practice guides.

Preferred Technical and Professional Experience:
- Strong programming skills, with proficiency in Python and experience with AI frameworks such as TensorFlow, PyTorch, Keras, or Hugging Face
- Understanding of the usage of libraries such as scikit-learn, Pandas, Matplotlib, etc.
- Familiarity with cloud platforms
- Experience and working knowledge of COBOL and Java would be preferred

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities: Being an AI Data Scientist at IBM is not just a job title, it's a mindset. You'll leverage the watsonx, AWS SageMaker, and Azure OpenAI platforms to co-create AI value with clients, focusing on technology patterns to enhance repeatability and delight clients. We are seeking an experienced and innovative AI Data Scientist specialized in foundation models and large language models. In this role, you will be responsible for architecting and delivering AI solutions using cutting-edge technologies, with a strong focus on foundation models and large language models. You will work closely with customers, product managers, and development teams to understand business requirements and design custom AI solutions that address complex challenges. Experience with tools like GitHub Copilot, Amazon CodeWhisperer, etc. is desirable. Success is our passion, and your accomplishments will reflect this, driving your career forward, propelling your team to success, and helping our clients to thrive.

Day-to-Day Duties:
- Proof of Concept (POC) development: develop POCs to validate and showcase the feasibility and effectiveness of the proposed AI solutions. Collaborate with development teams to implement and iterate on POCs, ensuring alignment with customer requirements and expectations. Help in showcasing the ability of a GenAI code assistant to refactor/rewrite and document code from one language to another, particularly COBOL to Java, through rapid prototypes/POCs.
- Documentation and knowledge sharing: document solution architectures, design decisions, implementation details, and lessons learned.
Create technical documentation, white papers, and best practice guides. Contribute to internal knowledge sharing initiatives and mentor new team members. Industry Trends and Innovation: Stay up to date with the latest trends and advancements in AI, foundation models, and large language models. Evaluate emerging technologies, tools, and frameworks to assess their potential impact on solution design and implementation. Preferred Education Master's Degree Required Technical And Professional Expertise Strong programming skills, with proficiency in Python and experience with AI frameworks such as TensorFlow, PyTorch, Keras, or Hugging Face. Working knowledge of libraries such as scikit-learn, Pandas, Matplotlib, etc. Familiarity with cloud platforms (e.g., AWS, Azure, GCP), container orchestration (Kubernetes), and related services is a plus. Experience and working knowledge in COBOL and Java would be preferred. Experience in code generation, code matching, and code translation leveraging LLM capabilities would be a big plus (e.g., Amazon CodeWhisperer, GitHub Copilot). Soft Skills: Excellent interpersonal and communication skills. Engage with stakeholders for analysis and implementation. Commitment to continuous learning and staying updated with advancements in the field of AI. Growth mindset: Demonstrate a growth mindset to understand clients' business processes and challenges. Experience in Python and PySpark is an added advantage. Preferred Technical And Professional Experience Experience: Proven experience in designing and delivering AI solutions, with a focus on foundation models, large language models, exposure to open source, or similar technologies. Experience in natural language processing (NLP) and text analytics is highly desirable. Understanding of machine learning and deep learning algorithms. 
Strong track record in scientific publications or open-source communities. Experience in the full AI project lifecycle, from research and prototyping to deployment in production environments.
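As a small illustration of the LLM-assisted COBOL-to-Java code translation work this posting describes, here is a standard-library sketch of assembling a few-shot prompt; the example snippets and prompt wording are hypothetical, and the provider-specific model call (watsonx, Azure OpenAI, etc.) is deliberately omitted:

```python
# Hypothetical few-shot prompt builder for COBOL-to-Java translation.
# The model call itself (watsonx, Azure OpenAI, ...) is provider-specific
# and left out of this sketch.

FEW_SHOT = [
    ("DISPLAY 'HELLO'.", 'System.out.println("HELLO");'),
]

def build_translation_prompt(cobol_snippet, examples=FEW_SHOT):
    """Assemble a prompt asking an LLM to translate COBOL to Java."""
    parts = ["Translate the following COBOL code to idiomatic Java."]
    for src, dst in examples:
        parts.append(f"COBOL:\n{src}\nJava:\n{dst}")
    parts.append(f"COBOL:\n{cobol_snippet}\nJava:")
    return "\n\n".join(parts)

prompt = build_translation_prompt("ADD 1 TO WS-COUNTER.")
print(prompt)
```

A real implementation would send the prompt to the chosen model API and post-process the returned Java before review.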

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Introduction In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your Role And Responsibilities In your role, you may be responsible for: Implementing and validating predictive and prescriptive models, and creating and maintaining statistical models with a focus on big data, incorporating machine learning techniques in your projects Writing programs to cleanse and integrate data in an efficient and reusable manner Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours Communicating with internal and external clients to understand and define business needs and appropriate modelling techniques to provide analytical solutions. Evaluating modelling results and communicating the results to technical and non-technical audiences Preferred Education Master's Degree Required Technical And Professional Expertise Proof of Concept (POC) Development: Develop POCs to validate and showcase the feasibility and effectiveness of the proposed AI solutions. Collaborate with development teams to implement and iterate on POCs, ensuring alignment with customer requirements and expectations. Help showcase the ability of a Gen AI code assistant to refactor/rewrite and document code from one language to another, particularly COBOL to Java, through rapid prototypes/POCs. Document solution architectures, design decisions, implementation details, and lessons learned. 
Create technical documentation, white papers, and best practice guides. Preferred Technical And Professional Experience Strong programming skills, with proficiency in Python and experience with AI frameworks such as TensorFlow, PyTorch, Keras, or Hugging Face. Working knowledge of libraries such as scikit-learn, Pandas, Matplotlib, etc. Familiarity with cloud platforms. Experience and working knowledge in COBOL and Java would be preferred. Experience in Python and PySpark is an added advantage.

Posted 1 month ago

Apply

35.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About Us One team. Global challenges. Infinite opportunities. At Viasat, we’re on a mission to deliver connections with the capacity to change the world. For more than 35 years, Viasat has helped shape how consumers, businesses, governments and militaries around the globe communicate. We’re looking for people who think big, act fearlessly, and create an inclusive environment that drives positive impact to join our team. What You'll Do Parse and manipulate raw data leveraging tools including R, Python, Tableau, with a strong preference for Python Ingest, understand, and fully synthesize large amounts of data from multiple sources to build a full comprehension of the story Analyze large data sets, while finding the truth in data, and develop efficient processes for data analysis and simple, elegant visualization Develop and automate daily, monthly, quarterly reporting for multiple business areas within Viasat Identify data gaps, research methods to fill them, and provide recommendations Gather and analyze facts and devise solutions to administrative problems Monitor big data with Business Intelligence tools, simulation, modeling, and statistics Experience building intuitive and actionable dashboards and data visualizations that drive business decisions (Tableau/Power BI/Grafana) The day-to-day Develop and automate daily, monthly, quarterly reporting for multiple business areas within Viasat Identify data gaps, research methods to fill them, and provide recommendations Gather and analyze facts and devise solutions to administrative problems Monitor big data with Business Intelligence tools, simulation, modeling, and statistics What You'll Need 3-4 years of SQL experience 3-4 years of data analysis experience with an emphasis on reporting 3-4 years of Python experience in data cleansing, statistics, and data visualization packages (e.g., pandas, scikit-learn, matplotlib, seaborn, plotly) 6-8 years of dashboarding experience. 
Tableau/Power BI/Grafana experience or equivalent with data visualization tools Excellent judgment, critical thinking, and decision-making skills; can balance attention to detail with swift execution Able to identify stakeholders, build relationships, and influence others to drive progress Excellent analytical and problem-solving skills Strong oral and written communication skills Strong statistical background What Will Help You On The Job Strong preference for personal projects and work in Python Data Visualization experience Data Science experience EEO Statement Viasat is proud to be an equal opportunity employer, seeking to create a welcoming and diverse environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, ancestry, physical or mental disability, medical condition, marital status, genetics, age, or veteran status or any other applicable legally protected status or characteristic. If you would like to request an accommodation on the basis of disability for completing this online application, please click here.
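A minimal, standard-library sketch of the automated monthly reporting work this posting describes; in practice Pandas or a BI tool would do the aggregation, and the column names and data here are made up:

```python
import csv
import io
from collections import defaultdict

# Hypothetical raw extract; real data would come from SQL or an API.
RAW = """date,region,revenue
2024-01-15,APAC,100
2024-01-20,APAC,50
2024-02-02,EMEA,75
"""

def monthly_totals(csv_text):
    """Sum revenue per (YYYY-MM, region) pair."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[(row["date"][:7], row["region"])] += float(row["revenue"])
    return dict(totals)

print(monthly_totals(RAW))
# {('2024-01', 'APAC'): 150.0, ('2024-02', 'EMEA'): 75.0}
```

The same aggregation, scheduled and pointed at a live source, is the core of a recurring report.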

Posted 1 month ago

Apply

35.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Us One team. Global challenges. Infinite opportunities. At Viasat, we’re on a mission to deliver connections with the capacity to change the world. For more than 35 years, Viasat has helped shape how consumers, businesses, governments and militaries around the globe communicate. We’re looking for people who think big, act fearlessly, and create an inclusive environment that drives positive impact to join our team. What You'll Do Parse and manipulate raw data leveraging tools including R, Python, Tableau, with a strong preference for Python Ingest, understand, and fully synthesize large amounts of data from multiple sources to build a full comprehension of the story Analyze large data sets, while finding the truth in data, and develop efficient processes for data analysis and simple, elegant visualization Develop and automate daily, monthly, quarterly reporting for multiple business areas within Viasat Identify data gaps, research methods to fill them, and provide recommendations Gather and analyze facts and devise solutions to administrative problems Monitor big data with Business Intelligence tools, simulation, modeling, and statistics Experience building intuitive and actionable dashboards and data visualizations that drive business decisions (Tableau/Power BI/Grafana) The day-to-day Develop and automate daily, monthly, quarterly reporting for multiple business areas within Viasat Identify data gaps, research methods to fill them, and provide recommendations Gather and analyze facts and devise solutions to administrative problems Monitor big data with Business Intelligence tools, simulation, modeling, and statistics What You'll Need 3-4 years of SQL experience 3-4 years of data analysis experience with an emphasis on reporting 3-4 years of Python experience in data cleansing, statistics, and data visualization packages (e.g., pandas, scikit-learn, matplotlib, seaborn, plotly) 6-8 years of dashboarding experience. 
Tableau/Power BI/Grafana experience or equivalent with data visualization tools Excellent judgment, critical thinking, and decision-making skills; can balance attention to detail with swift execution Able to identify stakeholders, build relationships, and influence others to drive progress Excellent analytical and problem-solving skills Strong oral and written communication skills Strong statistical background What Will Help You On The Job Strong preference for personal projects and work in Python Data Visualization experience Data Science experience EEO Statement Viasat is proud to be an equal opportunity employer, seeking to create a welcoming and diverse environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, ancestry, physical or mental disability, medical condition, marital status, genetics, age, or veteran status or any other applicable legally protected status or characteristic. If you would like to request an accommodation on the basis of disability for completing this online application, please click here.

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderābād

On-site

Job description Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Marketing Title. In this role, you will: Develop automation utilities and scripts using Python to streamline workflows and processes. Perform data analysis to extract meaningful insights from structured and unstructured datasets. Create data views and dashboards based on analysis results to support decision-making. Design and implement visualizations using libraries like Matplotlib, Seaborn, or Plotly. Collaborate with cross-functional teams to gather requirements and deliver tailored solutions. Experience with frameworks like Flask or Django for building web-based utilities. Ensure code quality through unit testing, integration testing, and adherence to best practices. Document technical designs, processes, and solutions for future reference. Requirements To be successful in this role, you should meet the following requirements: Proficiency in Python programming with experience in developing scalable utilities and automation scripts. Strong knowledge of data analysis techniques and tools (e.g., Pandas, NumPy). Experience with data visualization libraries (e.g., Matplotlib, Seaborn, Plotly). Knowledge of REST APIs and integration with external systems. Understanding of software development lifecycle (SDLC) and Agile methodologies. 
Strong verbal and written communication skills. You’ll achieve more when you join HSBC. www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
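A small sketch of the kind of reusable analysis utility this role involves; it uses only the standard library as a stand-in for the Pandas-style summaries mentioned above, and the function name and data are illustrative:

```python
import statistics

def summarize(values):
    """Descriptive statistics for a numeric column (a pure-Python
    stand-in for pandas.Series.describe())."""
    return {
        "count": len(values),
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "stdev": statistics.stdev(values) if len(values) > 1 else 0.0,
    }

print(summarize([12, 7, 9, 12]))
```

In a real utility, the summary would feed a dashboard view or a Matplotlib/Seaborn chart rather than a print statement.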

Posted 1 month ago

Apply

3.0 years

5 - 7 Lacs

Gurgaon

Remote

Job Summary: We are seeking a dynamic RPA & Data Automation Developer with 3+ years of hands-on experience in building automated workflows, data pipelines, and API-based integrations. The role demands strong analytical skills, advanced scripting capabilities in Python, experience with RPA tools like Power Automate, and solid SQL knowledge for backend automation. Key Responsibilities: Design, develop, and maintain RPA solutions using Python, Selenium, and Power Automate. Automate business processes using scripts and bots that interact with Excel, browsers, databases, and APIs. Work extensively with Python libraries including Pandas, NumPy, Matplotlib, re (regex), smtplib, and FastAPI. Create and consume RESTful APIs for data services and automation endpoints. Perform complex data analysis and transformation using Pandas and SQL queries. Write and maintain SQL components such as stored procedures, views, functions, and perform schema design and query optimization. Automate data flows across platforms including Excel, emails, and databases using VBA macros and Power Automate flows. Implement exception handling, logging, and monitoring mechanisms for all automation processes. Collaborate with business teams to understand workflows, bottlenecks, and identify automation opportunities. 
Required Technical Skills: Python & Automation: Pandas, NumPy, Matplotlib Selenium (for browser automation) FastAPI (for microservices/API creation) Regular Expressions (regex) SMTP libraries (email automation) JSON/CSV/XML parsing RPA Tools: Power Automate (desktop flows and cloud flows) VBA (for Excel-based automation) Database: MS SQL Server: T-SQL, stored procedures (sp), functions, views Schema design and performance optimization Preferred Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field Experience working with multiple data sources and formats Familiarity with ClickUp or other versioning and task tracking tools Exposure to cloud platforms (Azure, AWS) is a plus What We Offer: Exposure to advanced RPA & data engineering projects Collaborative, tech-first environment Career growth opportunities across automation and analytics domains Competitive compensation Job Types: Full-time, Permanent Pay: ₹500,000.00 - ₹700,000.00 per year Benefits: Flexible schedule Paid sick time Paid time off Provident Fund Work from home Schedule: Day shift Work Location: In person
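As a small illustration of the regex-driven automation listed above, a sketch that extracts de-duplicated invoice IDs from a process log (the ID pattern and log text are hypothetical):

```python
import re

# Hypothetical process log and invoice-ID pattern.
LOG = "Processed INV-2024-001 ok; INV-2024-002 failed; retrying INV-2024-002"

def extract_invoice_ids(text):
    """Return unique invoice IDs in first-seen order."""
    seen = []
    for match in re.finditer(r"INV-\d{4}-\d{3}", text):
        if match.group() not in seen:
            seen.append(match.group())
    return seen

print(extract_invoice_ids(LOG))  # ['INV-2024-001', 'INV-2024-002']
```

In a bot, the extracted IDs would typically drive follow-up steps such as a database update or an smtplib notification email.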

Posted 1 month ago

Apply

3.0 - 5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Summary Synechron is seeking a detail-oriented Data Analyst to leverage advanced data analysis, visualization, and insights to support our business objectives. The ideal candidate will have a strong background in creating interactive dashboards, performing complex data manipulations using SQL and Python, and automating workflows to drive efficiency. Familiarity with cloud platforms such as AWS is a plus, enabling optimization of data storage and processing solutions. This role will enable data-driven decision-making across teams, contributing to strategic growth and operational excellence. Software Requirements Required: PowerBI (or equivalent visualization tools like Streamlit, Dash) SQL (for data extraction, manipulation, and querying) Python (for scripting, automation, and advanced analysis) Data management tools compatible with cloud platforms (e.g., AWS S3, Redshift, or similar) Preferred: Cloud platform familiarity, especially AWS services related to data storage and processing Knowledge of other visualization platforms (Tableau, Looker) Familiarity with source control systems (e.g., Git) Overall Responsibilities Develop, redesign, and maintain interactive dashboards and visualization tools to provide actionable insights. Perform complex data analysis, transformations, and validation using SQL and Python. Automate data workflows, reporting, and visualizations to streamline processes. Collaborate with business teams to understand data needs and translate them into effective visual and analytical solutions. Support data extraction, cleaning, and validation from various sources, ensuring data accuracy. Maintain and enhance understanding of cloud environments, especially AWS, to optimize data storage, processing pipelines, and scalability. Document technical procedures and contribute to best practices for data management and reporting. Performance Outcomes: Timely, accurate, and insightful dashboards and reports. Increased automation reducing manual effort. 
Clear communication of insights and data-driven recommendations to stakeholders. Technical Skills (By Category) Programming Languages: Essential: SQL, Python Preferred: R, additional scripting languages Databases/Data Management: Essential: Relational databases (SQL Server, MySQL, Oracle) Preferred: NoSQL databases like MongoDB, cloud data warehouses (AWS Redshift, Snowflake) Cloud Technologies: Essential: Basic understanding of AWS cloud services (S3, EC2, RDS) Preferred: Experience with cloud-native data solutions and deployment Frameworks and Libraries: Python: Pandas, NumPy, Matplotlib, Seaborn, Plotly, Streamlit, Dash Visualization: PowerBI, Tableau (preferred) Development Tools and Methodologies: Version control: Git Automation tools for workflows and reporting Familiarity with Agile methodologies Security Protocols: Awareness of data security best practices and compliance standards in cloud environments Experience Requirements 3-5 years of experience in data analysis, visualization, or related data roles. Proven ability to deliver insightful dashboards, reports, and analysis. Experience working across teams and communicating complex insights clearly. Knowledge of cloud environments like AWS or other cloud providers is desirable. Experience in a business environment, not necessarily as a full-time developer, but as an analytical influencer. Day-to-Day Activities Collaborate with stakeholders to gather requirements and define data visualization strategies. Design and maintain dashboards using PowerBI, Streamlit, Dash, or similar tools. Extract, transform, and analyze data using SQL and Python scripts. Automate recurring workflows and report generation to improve operational efficiencies. Troubleshoot data issues and derive insights to support decision-making. Monitor and optimize cloud data storage and processing pipelines. Present findings to business units, translating technical outputs into actionable recommendations. 
Qualifications Bachelor’s degree in Computer Science, Data Science, Statistics, or related field. Master’s degree is a plus. Relevant certifications (e.g., PowerBI, AWS Data Analytics) are advantageous. Demonstrated experience with data visualization and scripting tools. Continuous learning mindset to stay updated on new data analysis trends and cloud innovations. Professional Competencies Strong analytical and problem-solving skills. Effective communication, with the ability to explain complex insights clearly. Collaborative team player with stakeholder management skills. Adaptability to rapidly changing data or project environments. Innovative mindset to suggest and implement data-driven solutions. Organized, self-motivated, and capable of managing multiple priorities efficiently. SYNECHRON’S DIVERSITY & INCLUSION STATEMENT Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and is an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference’ is committed to fostering an inclusive culture – promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant’s gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law. Candidate Application Notice
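A minimal sketch of the SQL-plus-Python extraction and transformation work this posting describes, using the standard library's sqlite3 as a stand-in for a production database; the table and data are hypothetical:

```python
import sqlite3

# In-memory stand-in for a real warehouse table; schema and rows are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("APAC", 100.0), ("APAC", 50.0), ("EMEA", 75.0)],
)

# Aggregate in SQL, then hand the result to Python for further analysis.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('APAC', 150.0), ('EMEA', 75.0)]
```

The same pattern scales up by swapping the connection for SQL Server, Redshift, or another warehouse and feeding the rows into a PowerBI, Streamlit, or Dash view.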

Posted 1 month ago

Apply

2.0 years

3 - 9 Lacs

Chennai

On-site

We are looking for qualified people who can develop scalable solutions to complex real-world problems using AI/ML, Big Data, Statistics, Econometrics, and Optimization. Potential candidates should have excellent depth and breadth of knowledge in machine learning, data mining, and statistical modeling. They should possess the ability to translate a business problem into an analytical problem, identify the relevant data sets needed for addressing the analytical problem, recommend, implement, and validate the best suited analytical algorithm(s), and generate/deliver insights to stakeholders. Candidates are expected to regularly refer to research papers and be at the cutting-edge with respect to algorithms, tools, and techniques. The role is that of an individual contributor; however, the candidate is expected to work in project teams of 2 to 3 people and interact with business partners on a regular basis. Minimum Qualifications Bachelor’s degree in Analytics, Computer Science, Operational Research, Statistics, Applied Mathematics, or any other engineering discipline. 2+ years of hands-on experience in Python programming for data analysis and machine learning, with libraries such as NumPy, Pandas, Matplotlib, Scikit-learn, TensorFlow, PyTorch, NLTK, spaCy, and Gensim. 2+ years of experience with both supervised and unsupervised machine learning techniques. 2+ years of experience with data analysis and visualization using Python packages such as Pandas, NumPy, Matplotlib, Seaborn, or data visualization tools like Dash or PowerBI. 1+ years of experience with SQL and relational databases. Preferred Qualifications An MS/PhD in Analytics, Computer Science, Operational Research, Statistics, Applied Mathematics or any other engineering discipline. PhD strongly preferred. Experience working with Google Cloud Platform (GCP) services, leveraging its capabilities for ML model development and deployment. 
Experience with Git and GitHub for version control and collaboration. Besides Python, familiarity with one additional programming language (e.g., C/C++/Java). Strong background in mathematical concepts relating to probabilistic models, conditional probability, numerical methods, linear algebra, and the under-the-hood details of neural networks. Experience working with large language models such as GPT-4, Gemini, PaLM, Llama 2, etc. Excellent problem-solving, communication, and data presentation skills. Responsibilities: Understand business requirements and analyze datasets to determine suitable approaches to meet analytic business needs and support data-driven decision-making Design and implement data analysis and AI/ML models, hypotheses, algorithms and experiments to support data-driven decision-making Apply various analytics techniques like data mining, predictive modeling, prescriptive modeling, math, statistics, advanced analytics, machine learning models and algorithms, etc., to analyze data and uncover meaningful patterns, relationships, and trends Design efficient data loading, data augmentation and data analysis techniques to enhance the accuracy and robustness of data science and machine learning models, including scalable models suitable for automation Research, study and stay updated in the domain of data science, machine learning, analytics tools and techniques, and continuously identify avenues for enhancing analysis efficiency, accuracy and robustness
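As a tiny worked example of the conditional-probability fundamentals this posting calls out, Bayes' rule applied to a made-up diagnostic-test scenario:

```python
def bayes_posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' rule.

    prior               = P(condition)
    sensitivity         = P(positive | condition)
    false_positive_rate = P(positive | no condition)
    """
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# Illustrative numbers only: a 1% base rate with a 95%-sensitive,
# 5%-false-positive test still yields a modest posterior.
print(round(bayes_posterior(0.01, 0.95, 0.05), 3))  # 0.161
```

The counterintuitively low posterior is exactly the kind of base-rate reasoning that underpins probabilistic modeling work.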

Posted 1 month ago

Apply

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description We are looking for qualified people who can develop scalable solutions to complex real-world problems using AI/ML, Big Data, Statistics, Econometrics, and Optimization. Potential candidates should have excellent depth and breadth of knowledge in machine learning, data mining, and statistical modeling. They should possess the ability to translate a business problem into an analytical problem, identify the relevant data sets needed for addressing the analytical problem, recommend, implement, and validate the best suited analytical algorithm(s), and generate/deliver insights to stakeholders. Candidates are expected to regularly refer to research papers and be at the cutting-edge with respect to algorithms, tools, and techniques. The role is that of an individual contributor; however, the candidate is expected to work in project teams of 2 to 3 people and interact with business partners on a regular basis. Responsibilities Understand business requirements and analyze datasets to determine suitable approaches to meet analytic business needs and support data-driven decision-making Design and implement data analysis and AI/ML models, hypotheses, algorithms and experiments to support data-driven decision-making Apply various analytics techniques like data mining, predictive modeling, prescriptive modeling, math, statistics, advanced analytics, machine learning models and algorithms, etc., to analyze data and uncover meaningful patterns, relationships, and trends Design efficient data loading, data augmentation and data analysis techniques to enhance the accuracy and robustness of data science and machine learning models, including scalable models suitable for automation Research, study and stay updated in the domain of data science, machine learning, analytics tools and techniques, and continuously identify avenues for enhancing analysis efficiency, accuracy and robustness Qualifications Minimum Qualifications Bachelor’s degree in Analytics, Computer Science, Operational Research, Statistics, Applied Mathematics, or any other engineering discipline. 2+ years of hands-on experience in Python programming for data analysis and machine learning, with libraries such as NumPy, Pandas, Matplotlib, Scikit-learn, TensorFlow, PyTorch, NLTK, spaCy, and Gensim. 2+ years of experience with both supervised and unsupervised machine learning techniques. 2+ years of experience with data analysis and visualization using Python packages such as Pandas, NumPy, Matplotlib, Seaborn, or data visualization tools like Dash or PowerBI. 1+ years of experience with SQL and relational databases. Preferred Qualifications An MS/PhD in Analytics, Computer Science, Operational Research, Statistics, Applied Mathematics or any other engineering discipline. PhD strongly preferred. Experience working with Google Cloud Platform (GCP) services, leveraging its capabilities for ML model development and deployment. Experience with Git and GitHub for version control and collaboration. Besides Python, familiarity with one additional programming language (e.g., C/C++/Java). Strong background in mathematical concepts relating to probabilistic models, conditional probability, numerical methods, linear algebra, and the under-the-hood details of neural networks. Experience working with large language models such as GPT-4, Gemini, PaLM, Llama 2, etc. Excellent problem-solving, communication, and data presentation skills

Posted 1 month ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

What We Offer 6-month internship followed by a job opportunity Hands-on training and mentorship from industry experts. Opportunity to work on live projects and client interactions. A vibrant and learning-driven work culture. 5 days a week & flexible work timings Job Summary We are seeking motivated and enthusiastic AI & ML Interns or Freshers to support our AI and machine learning initiatives. As an intern or fresher, you will have the opportunity to work alongside our experienced data scientists and engineers, gaining hands-on experience in developing, implementing, and optimizing AI and ML models. Key Responsibilities Assist in the development and testing of machine learning models and algorithms. Conduct data preprocessing and data cleaning tasks to prepare datasets for analysis. Implement, evaluate, and optimize AI models using state-of-the-art techniques. Collaborate with team members to integrate machine learning models into applications and systems. Participate in the research and development of new AI and ML technologies. Perform data analysis and visualization to support decision-making processes. Document processes, methodologies, and findings in a clear and concise manner. Stay updated with the latest trends and advancements in AI and machine learning. Requirements Recent BE/B.Tech graduates from the 2024/25 academic year. Basic understanding of machine learning concepts and algorithms. Understanding of programming languages such as Python, R, or similar. Familiarity with machine learning frameworks and libraries such as TensorFlow, PyTorch, scikit-learn, etc. Familiarity with data analysis and visualization tools like Pandas, NumPy, Matplotlib, or similar. Strong analytical and problem-solving skills. Excellent communication and teamwork abilities. Self-motivated and eager to learn new technologies and techniques.
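A minimal sketch of the data-cleaning step mentioned above (handling missing values); real pipelines would typically use Pandas, and the mean-imputation strategy and data here are illustrative only:

```python
import statistics

def impute_missing(values):
    """Replace None entries with the mean of the observed values --
    a common baseline cleaning step (real pipelines would use Pandas)."""
    observed = [v for v in values if v is not None]
    fill = statistics.mean(observed)
    return [fill if v is None else v for v in values]

print(impute_missing([1.0, None, 3.0]))  # [1.0, 2.0, 3.0]
```

Median imputation or dropping rows are equally common choices; the right one depends on how the data went missing.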

Posted 1 month ago

Apply

1.0 years

1 Lacs

India

On-site

Our Culture & Values: We’d describe our culture as human, friendly, engaging, supportive, agile, and super collaborative. At Kainskep Solutions, our five values underpin everything we do, from how we work to how we delight and deliver to our customers. Our values are #TeamMember, #Ownership, #Innovation, #Challenge, and #Collaboration. What makes a great team? A Diverse Team! Don’t be put off if you don’t tick all the boxes; we know from research that candidates may not apply if they don’t feel they are 100% there yet; the essential experience we need is the ability to engage clients and build strong, effective relationships. If you don’t tick the rest, we would still love to talk. We’re committed to creating a diverse and inclusive environment. What you’ll bring: Use programming languages like Python, R, and SQL for data manipulation, statistical analysis, and machine learning tasks. Apply fundamental statistical concepts such as mean, median, variance, probability distributions, and hypothesis testing to analyze data. Develop supervised and unsupervised machine learning models, including classification, regression, clustering, and dimensionality reduction techniques. Evaluate model performance using metrics such as accuracy, precision, recall, and F1-score, implementing cross-validation techniques to ensure reliability. Conduct data manipulation and visualization using libraries such as Pandas, Matplotlib, Seaborn, and ggplot2, implementing data cleaning techniques to handle missing values and outliers. Perform exploratory data analysis, feature engineering, and data mining tasks including text mining, natural language processing (NLP), and web scraping. Familiarize yourself with big data technologies such as Apache Spark and Hadoop, understanding distributed computing concepts to handle large-scale datasets effectively. Manage relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra) for data storage and retrieval. 
Use version control systems like Git and GitHub/GitLab for collaborative development, understanding branching, merging, and versioning workflows. Demonstrate basic knowledge of the software development lifecycle, Agile methodologies, algorithms, and data structures. Requirements: Bachelor’s degree or higher in Computer Science, Statistics, Mathematics, or a related field. Proficiency in programming languages such as Python, R, and SQL. Strong analytical skills and a passion for working with data. Ability to learn quickly and adapt to new technologies and methodologies. Prior experience with data analysis, machine learning, or related fields is a plus. Good To Have: Experience in Computer Vision, including Image Processing and Video Processing. Familiarity with Generative AI techniques, such as Generative Adversarial Networks (GANs), and their applications in image, text, and other data generation tasks. Knowledge of Large Language Models (LLMs) is a plus. Experience with Microsoft AI technologies, including Azure AI Studio and Azure Copilot Studio. Job Type: Fresher Pay: ₹10,000.00 per month Benefits: Flexible schedule Schedule: Monday to Friday Ability to commute/relocate: Vaishali Nagar, Jaipur, Rajasthan: Reliably commute or planning to relocate before starting work (Preferred) Experience: Data science: 1 year (Required) Work Location: In person Expected Start Date: 14/07/2025
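The evaluation metrics named in the listing (accuracy, precision, recall, F1) can be sketched with a stdlib-only toy example; in practice a role like this would use scikit-learn's `sklearn.metrics` and cross-validation helpers. The labels below are made up purely for illustration.

```python
def classification_metrics(y_true, y_pred):
    """Return (accuracy, precision, recall, F1) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

# Hypothetical ground-truth labels and model predictions.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
acc, prec, rec, f1 = classification_metrics(y_true, y_pred)
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f} f1={f1:.2f}")
# → accuracy=0.75 precision=0.75 recall=0.75 f1=0.75
```

Cross-validation then repeats this computation over several train/test folds and averages the scores, which is what makes the reported metric reliable.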

Posted 1 month ago

Apply

3.0 years

3 - 5 Lacs

Chhattisgarh, India

On-site

Python Instructor

About The Opportunity
We are a fast-growing IT training and upskilling provider operating in the Professional Education & Technology Services sector. Specializing in enterprise software development curricula, we equip fresh graduates and working engineers with production-ready skills demanded by global clients. Our classrooms, labs, and project-based programs are designed to bridge the gap between academic knowledge and real-world software engineering standards.

Role & Responsibilities
- Deliver immersive, hands-on Python sessions to entry-level and mid-career engineers in an on-site classroom setting.
- Design, update, and version-control courseware covering core Python, OOP, data structures, web frameworks, and testing.
- Lead live coding demos, code reviews, and hackathon-style projects that mimic industry SDLC workflows.
- Evaluate learner progress through quizzes, pair-programming assessments, and capstone project grading.
- Collaborate with placement and industry liaison teams to align content with current hiring requirements.
- Mentor trainees on best practices in Git, Agile, and problem-solving to raise job-readiness metrics.

Skills & Qualifications
Must-Have:
- 3+ years of professional Python development in production environments.
- Solid understanding of OOP, data structures, algorithms, and design patterns.
- Experience with at least one Python web framework (Django or Flask).
- Prior classroom, bootcamp, or corporate training facilitation experience.
- Excellent verbal communication and live-coding fluency.
- Proficiency with Git, unit testing, and virtual environments.

Preferred:
- Exposure to data science libraries (Pandas, NumPy, Matplotlib).
- Knowledge of cloud deployment on AWS or Azure.
- Hands-on experience with CI/CD tools and Docker.
- Certification in training or instructional design.
- Experience tailoring content for BFSI or Telecom domains.
- Contributions to open-source Python projects.

Benefits & Culture Highlights
- Cutting-edge lab infrastructure and dedicated TA support.
- Clearly defined trainer career ladder with sponsored certifications.
- Collaborative, learner-centric ethos that rewards innovation in pedagogy.

Location: On-site, India. Full-time.

Skills: data structures, Azure, NumPy, OOP, CI/CD, public speaking, curriculum design, Django, unit testing, Git, AWS, Docker, Python, Matplotlib, assessment design, Flask, virtual environments, Pandas, design patterns, algorithms
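The courseware topics above (core Python, OOP, data structures, testing) often meet in a single classroom exercise; a minimal sketch, with names invented for illustration, might look like this:

```python
import unittest

class Stack:
    """A minimal LIFO stack built on a Python list."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def __len__(self):
        return len(self._items)


class StackTests(unittest.TestCase):
    def test_push_pop_order(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)  # LIFO: last pushed comes out first
        self.assertEqual(s.pop(), 1)

    def test_empty_pop_raises(self):
        with self.assertRaises(IndexError):
            Stack().pop()


# Run the tests in-process (exit=False keeps the interpreter alive).
unittest.main(argv=["stack-tests"], exit=False, verbosity=0)
```

Pairing each data structure with its `unittest` suite, as here, mirrors the test-driven workflow the role is expected to teach.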

Posted 1 month ago

Apply

3.0 years

3 - 5 Lacs

Raipur, Chhattisgarh, India

On-site

Python Instructor

About The Opportunity
We are a fast-growing IT training and upskilling provider operating in the Professional Education & Technology Services sector. Specializing in enterprise software development curricula, we equip fresh graduates and working engineers with production-ready skills demanded by global clients. Our classrooms, labs, and project-based programs are designed to bridge the gap between academic knowledge and real-world software engineering standards.

Role & Responsibilities
- Deliver immersive, hands-on Python sessions to entry-level and mid-career engineers in an on-site classroom setting.
- Design, update, and version-control courseware covering core Python, OOP, data structures, web frameworks, and testing.
- Lead live coding demos, code reviews, and hackathon-style projects that mimic industry SDLC workflows.
- Evaluate learner progress through quizzes, pair-programming assessments, and capstone project grading.
- Collaborate with placement and industry liaison teams to align content with current hiring requirements.
- Mentor trainees on best practices in Git, Agile, and problem-solving to raise job-readiness metrics.

Skills & Qualifications
Must-Have:
- 3+ years of professional Python development in production environments.
- Solid understanding of OOP, data structures, algorithms, and design patterns.
- Experience with at least one Python web framework (Django or Flask).
- Prior classroom, bootcamp, or corporate training facilitation experience.
- Excellent verbal communication and live-coding fluency.
- Proficiency with Git, unit testing, and virtual environments.

Preferred:
- Exposure to data science libraries (Pandas, NumPy, Matplotlib).
- Knowledge of cloud deployment on AWS or Azure.
- Hands-on experience with CI/CD tools and Docker.
- Certification in training or instructional design.
- Experience tailoring content for BFSI or Telecom domains.
- Contributions to open-source Python projects.

Benefits & Culture Highlights
- Cutting-edge lab infrastructure and dedicated TA support.
- Clearly defined trainer career ladder with sponsored certifications.
- Collaborative, learner-centric ethos that rewards innovation in pedagogy.

Location: On-site, India. Full-time.

Skills: data structures, Azure, NumPy, OOP, CI/CD, public speaking, curriculum design, Django, unit testing, Git, AWS, Docker, Python, Matplotlib, assessment design, Flask, virtual environments, Pandas, design patterns, algorithms
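The web-framework requirement above (Django or Flask) ultimately rests on WSGI, Python's standard web gateway interface; both frameworks wrap a callable like the stdlib-only sketch below. The route and response text are invented for illustration.

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    """A minimal WSGI application: one route, plain-text response."""
    if environ.get("PATH_INFO") == "/hello":
        status, body = "200 OK", b"hello from bare WSGI"
    else:
        status, body = "404 Not Found", b"not found"
    start_response(status, [("Content-Type", "text/plain"),
                            ("Content-Length", str(len(body)))])
    return [body]

# Exercise the app in-process, the same way framework test clients do.
environ = {}
setup_testing_defaults(environ)
environ["PATH_INFO"] = "/hello"
captured = {}

def start_response(status, headers):
    captured["status"] = status

result = b"".join(app(environ, start_response))
print(captured["status"], result.decode())
# → 200 OK hello from bare WSGI
```

Showing trainees this bare callable first makes it clear what Flask's `@app.route` or Django's URLconf are adding on top.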

Posted 1 month ago

Apply

0 years

12 - 18 Lacs

Pune, Maharashtra, India

On-site

Role Definition: Data Scientists focus on researching and developing AI algorithms and models. They analyse data, build predictive models, and apply machine learning techniques to solve complex problems.

Skills — Proficient:
- Languages/Frameworks: FastAPI, Azure UI Search API (React)
- Databases and ETL: Cosmos DB (API for MongoDB), Data Factory, Databricks
- Proficiency in Python and R
- Cloud: Azure Cloud basics (Azure DevOps)
- GitLab: GitLab pipelines
- Ansible and REX: REX deployment
- Data science: prompt engineering and modern testing; data mining and cleaning; ML (supervised/unsupervised learning); NLP techniques; knowledge of deep learning techniques including RNNs and transformers
- End-to-end AI solution delivery; AI integration and deployment
- AI frameworks (PyTorch); MLOps frameworks; model deployment processes; data pipeline monitoring

Skills — Expert (in addition to proficient skills):
- Languages/Frameworks: Azure OpenAI
- Data science: OpenAI GPT family of models (4o/4/3), embeddings and vector search
- Databases and ETL: Azure Storage Account
- Expertise in machine learning algorithms (supervised, unsupervised, reinforcement learning)
- Proficiency in deep learning frameworks (TensorFlow, PyTorch)
- Strong mathematical foundation (linear algebra, calculus, probability, statistics)
- Research methodology and experimental design
- Proficiency in data analysis tools (Pandas, NumPy, SQL)
- Strong statistical and probabilistic modelling skills
- Data visualization skills (Matplotlib, Seaborn, Tableau)
- Knowledge of big data technologies (Spark, Hive)
- Experience with AI-driven analytics and decision-making systems

Skills: statistical modelling, data scientist, FastAPI, NLP techniques, Azure Cloud basics, Cosmos DB (API for MongoDB), Azure, Ansible, SQL, unsupervised learning, GitLab pipelines, Azure UI Search API (React), reinforcement learning, modern testing, Python, REX deployment, end-to-end AI solution delivery, AI-driven analytics, Azure Storage Account, Azure DevOps, OpenAI GPT family of models, transformers, Data Factory, Databricks, NumPy, Azure OpenAI, prompt engineering, ETL, data pipeline monitoring, model deployment processes, TensorFlow, data visualization (Matplotlib, Seaborn, Tableau), Pandas, AI integration and deployment, big data technologies (Spark, Hive), supervised learning, R, RNN, MLOps frameworks, data mining and cleaning, PyTorch, deep learning techniques
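The "embeddings and vector search" expert skill above boils down to ranking stored vectors by similarity to a query vector. A stdlib-only sketch, using tiny 3-d toy vectors as stand-ins for real model embeddings (document IDs and values are invented):

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def search(query_vec, index, top_k=2):
    """Return the top_k (doc_id, score) pairs, best match first."""
    scored = [(doc_id, cosine(query_vec, vec)) for doc_id, vec in index.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

# Toy "vector store": in production this would be a real vector database
# holding high-dimensional embeddings from a model such as OpenAI's.
index = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.9, 0.1, 0.0],
    "doc_c": [0.0, 0.0, 1.0],
}
for doc_id, score in search([1.0, 0.05, 0.0], index):
    print(doc_id, round(score, 3))
```

Production vector stores replace the linear scan with approximate-nearest-neighbour indexes, but the ranking principle is the same.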

Posted 1 month ago

Apply

Featured Companies