
1611 Matplotlib Jobs - Page 48

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0.0 - 6.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Role: Senior Analyst - Data Engineering Experience: 3 to 6 years Location: Bengaluru, Karnataka, India (BLR) Job Description: We are seeking a highly experienced and skilled Senior Data Engineer to join our dynamic team. This role requires hands-on experience with databases such as Snowflake and Teradata, as well as advanced knowledge in various data science and AI techniques. The successful candidate will play a pivotal role in driving data-driven decision-making and innovation within our organization. Job Responsibilities: Design, develop, and implement advanced machine learning models to solve complex business problems. Apply AI techniques and generative AI models to enhance data analysis and predictive capabilities. Utilize Tableau and other visualization tools to create insightful and actionable dashboards for stakeholders. Manage and optimize large datasets using Snowflake and Teradata databases. Collaborate with cross-functional teams to understand business needs and translate them into analytical solutions. Stay updated with the latest advancements in data science, machine learning, and AI technologies. Mentor and guide junior data scientists, fostering a culture of continuous learning and development. Communicate complex analytical concepts and results to non-technical stakeholders effectively. Key Technologies & Skills: Machine Learning Models: Supervised learning, unsupervised learning, reinforcement learning, deep learning, neural networks, decision trees, random forests, support vector machines (SVM), clustering algorithms, etc. AI Techniques: Natural language processing (NLP), computer vision, generative adversarial networks (GANs), transfer learning, etc. Visualization Tools: Tableau, Power BI, Matplotlib, Seaborn, Plotly, etc. Databases: Snowflake, Teradata, SQL, NoSQL databases. Programming Languages: Python (essential), R, SQL. Python Libraries: TensorFlow, PyTorch, scikit-learn, pandas, NumPy, Keras, SciPy, etc. Data Processing: ETL processes, data warehousing, data lakes. Cloud Platforms: AWS, Azure, Google Cloud Platform. Big Data Technologies: Apache Spark, Hadoop. Job Snapshot Updated Date 11-06-2025 Job ID J_3679 Location Bengaluru, Karnataka, India Experience 3 - 6 Years Employee Type Permanent
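For context on the visualization stack named above (Tableau, Power BI, Matplotlib, Seaborn, Plotly), here is a minimal sketch of a stakeholder-facing chart using pandas and Matplotlib; the data, column names, and figures are illustrative assumptions, not from the posting.

```python
# Minimal sketch: a stakeholder-facing chart with pandas + Matplotlib.
# The data and column names here are illustrative, not from the posting.
import pandas as pd
import matplotlib.pyplot as plt

sales = pd.DataFrame({
    "month": pd.date_range("2024-01-01", periods=6, freq="MS"),
    "revenue": [120, 135, 128, 160, 172, 181],   # made-up figures (INR lakhs)
    "forecast": [118, 130, 133, 150, 165, 178],
})

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(sales["month"], sales["revenue"], marker="o", label="Actual revenue")
ax.plot(sales["month"], sales["forecast"], linestyle="--", label="Forecast")
ax.set_title("Monthly revenue vs. forecast")
ax.set_ylabel("Revenue (INR lakhs)")
ax.legend()
fig.autofmt_xdate()                               # tilt date labels so they don't overlap
fig.savefig("revenue_dashboard.png", dpi=150)     # export for a report or dashboard
```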

Posted 2 months ago

Apply

4.0 years

0 Lacs

Itanagar, Arunachal Pradesh, India

On-site

Salary - 10 to 25 LPA Title: Sr. Data Scientist/ML Engineer (4+ years) Required Technical Skillset Language: Python, PySpark Framework: Scikit-learn, TensorFlow, Keras, PyTorch Libraries: NumPy, Pandas, Matplotlib, SciPy, boto3 Database: Relational database (Postgres), NoSQL database (MongoDB) Cloud: AWS cloud platform Other Tools: Jenkins, Bitbucket, JIRA, Confluence A machine learning engineer is responsible for designing, implementing, and maintaining machine learning systems and algorithms that allow computers to learn from and make predictions or decisions based on data. The role typically involves working with data scientists and software engineers to build and deploy machine learning models in a variety of applications such as natural language processing, computer vision, and recommendation systems. The key responsibilities of a machine learning engineer include: Collecting and preprocessing large volumes of data, cleaning it up, and transforming it into a format that can be used by machine learning models. Model building, which includes designing and building machine learning models and algorithms using techniques such as supervised and unsupervised learning, deep learning, and reinforcement learning. Evaluating the performance of machine learning models using metrics such as accuracy, precision, recall, and F1 score. Deploying machine learning models in production environments and integrating them into existing systems using CI/CD pipelines and AWS SageMaker. Monitoring the performance of machine learning models and making adjustments as needed to improve their accuracy and efficiency. Working closely with software engineers, product managers and other stakeholders to ensure that machine learning models meet business requirements and deliver value to the organization. Requirements And Skills Mathematics and Statistics: A strong foundation in mathematics and statistics is essential. They need to be familiar with linear algebra, calculus, probability, and statistics to understand the underlying principles of machine learning algorithms. Programming Skills: Should be proficient in programming languages such as Python. The candidate should be able to write efficient, scalable, and maintainable code to develop machine learning models and algorithms. Machine Learning Techniques: Should have a deep understanding of various machine learning techniques, such as supervised learning, unsupervised learning, and reinforcement learning, and should also be familiar with different types of models such as decision trees, random forests, neural networks, and deep learning. Data Analysis and Visualization: Should be able to analyze and manipulate large data sets. The candidate should be familiar with data cleaning, transformation, and visualization techniques to identify patterns and insights in the data. Deep Learning Frameworks: Should be familiar with deep learning frameworks such as TensorFlow, PyTorch, and Keras and should be able to build and train deep neural networks for various applications. Big Data Technologies: A machine learning engineer should have experience working with big data technologies such as Hadoop, Spark, and NoSQL databases. They should be familiar with distributed computing and parallel processing to handle large data sets. Software Engineering: A machine learning engineer should have a good understanding of software engineering principles such as version control, testing, and debugging. 
They should be able to work with software development tools such as Git, Jenkins, and Docker. Communication and Collaboration: A machine learning engineer should have good communication and collaboration skills to work effectively with cross-functional teams such as data scientists, software developers, and business stakeholders. (ref:hirist.tech)
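As a hedged illustration of the evaluation step described in this listing (accuracy, precision, recall, F1), the sketch below uses scikit-learn with a bundled dataset; the dataset and model choice are assumptions for demonstration only.

```python
# Minimal sketch of classifier evaluation with the metrics named in the posting.
# Dataset and model are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)
pred = model.predict(X_test)

print("accuracy :", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred))
print("recall   :", recall_score(y_test, pred))
print("f1       :", f1_score(y_test, pred))
```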

Posted 2 months ago

Apply

4.0 years

0 Lacs

Greater Kolkata Area

On-site

Company Description UsefulBI Corporation provides comprehensive solutions across Data Engineering, Data Science, AI/ML, and Business Intelligence. The company's mission is to empower astute business decisions by integrating data insights and cutting-edge AI. UsefulBI excels in data architecture, cloud strategies, Business Intelligence, and Generative AI to deliver outcomes that surpass individual capabilities. Role Description We are seeking a skilled R and Python Developer with hands-on experience developing and deploying applications using Posit (formerly RStudio) tools, including Shiny Server, Posit Connect, and R Markdown. The ideal candidate will have a strong background in data analysis, application development, and creating interactive dashboards for data-driven decision-making. Key Responsibilities Design, develop, and deploy interactive web applications using R Shiny and Posit Connect. Write clean, efficient, and modular code in R and Python for data processing and analysis. Build and maintain R Markdown reports and Python notebooks for business reporting. Integrate R and Python scripts for advanced analytics and automation workflows. Collaborate with data scientists, analysts, and business users to gather requirements and deliver scalable solutions. Troubleshoot application issues and optimize performance on the Posit platform (RStudio Server, Posit Connect). Work with APIs, databases (SQL, NoSQL), and cloud platforms (e.g., AWS, Azure) as part of application development. Ensure version control using Git and CI/CD for application deployment. Required Qualifications 4+ years of development experience using R and Python. Strong experience with Shiny apps, R Markdown, and Posit Connect. Proficient in using packages like dplyr, ggplot2, plotly, reticulate, and shiny. Experience with the Python data stack (pandas, numpy, matplotlib, etc.). Hands-on experience with deploying apps on Posit Server / Connect. Familiarity with Git, Docker, and CI/CD tools. Excellent problem-solving and communication skills. (ref:hirist.tech)

Posted 2 months ago

Apply

0 years

0 Lacs

India

Remote

Step into the world of AI innovation with the Deccan AI Experts Community (by Soul AI), where you become a creator, not just a consumer. We are reaching out to the top 1% of Soul AI’s Data Visualization Engineers like YOU for a unique job opportunity to work with industry leaders. What’s in it for you? Pay above market standards. The role is contract-based, with project timelines from 2-6 months, or freelancing. Be a part of an elite community of professionals who can solve complex AI challenges. Work location could be: remote; onsite at a client location (US, UAE, UK, India, etc.); or Deccan AI’s office (Hyderabad or Bangalore). Responsibilities: Architect and implement enterprise-level BI solutions to support strategic decision-making, along with data democratization by enabling self-service analytics for non-technical users. Lead data governance and data quality initiatives to ensure consistency, and design data pipelines and automated reporting solutions using SQL and Python. Optimize big data queries and analytics workloads for cost efficiency, and implement real-time analytics dashboards and interactive reports. Mentor junior analysts and establish best practices for data visualization. Required Skills: Advanced SQL, Python (Pandas, NumPy), and BI tools (Tableau, Power BI, Looker). Expertise in AWS (Athena, Redshift), GCP (BigQuery), or Snowflake. Experience with data governance, lineage tracking, and big data tools (Spark, Kafka). Exposure to machine learning and AI-powered analytics. Nice to Have: Experience with graph analytics, geospatial data, and visualization libraries (D3.js, Plotly). Hands-on experience with BI automation and AI-driven analytics. Who can be a part of the community? We are looking for top-tier Data Visualization Engineers with expertise in analyzing and visualizing complex datasets. Proficiency in SQL, Tableau, Power BI, and Python (Pandas, NumPy, Matplotlib) is a plus. If you have experience in this field, this is your chance to collaborate with industry leaders. What are the next steps? 1. Register on our Soul AI website. 2. Our team will review your profile. 3. Clear all the screening rounds: clear the assessments once you are shortlisted. As soon as you qualify all the screening rounds (assessments, interviews) you will be added to our Expert Community! 4. Profile matching: be patient while we align your skills and preferences with the available projects. 5. Project allocation: you’ll be deployed on your preferred project! Skip the noise. Focus on opportunities built for you!
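For the "data pipelines and automated reporting solutions using SQL and Python" responsibility, here is a minimal sketch; the SQLite database, table, and column names are hypothetical.

```python
# Minimal sketch of an automated SQL-to-report step, assuming a local SQLite
# database with an "orders" table (table and column names are hypothetical).
import sqlite3
import pandas as pd

conn = sqlite3.connect("analytics.db")
query = """
    SELECT region, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM orders
    GROUP BY region
    ORDER BY revenue DESC
"""
report = pd.read_sql_query(query, conn)                       # pull the aggregate into a DataFrame
report.to_csv("regional_revenue_report.csv", index=False)     # ship the report to stakeholders
conn.close()
```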

Posted 2 months ago

Apply

5.0 - 7.0 years

5 - 9 Lacs

Chennai

Work from Office

Python Software Development Sr. Specialist In these roles, you will be responsible for: Design, implement, and test generative AI models using Python and various frameworks such as Pandas, TensorFlow, PyTorch, and OpenAI. Research and explore new techniques and applications of generative AI, such as text, image, audio, and video synthesis, style transfer, data augmentation, and anomaly detection. Collaborate with other developers, researchers, and stakeholders to deliver high-quality and innovative solutions. Document and communicate the results and challenges of generative AI projects. Required Skills for this role include: Technical skills: 5 to 7 years' experience in developing with Python frameworks such as DL, ML, and Flask. At least 2 years of experience in developing generative AI models using Python and relevant frameworks. Strong knowledge of machine learning, deep learning, and generative AI concepts and algorithms. Proficient in Python and common libraries such as numpy, pandas, matplotlib, and scikit-learn. Familiar with version control, testing, debugging, and deployment tools. Excellent communication and problem-solving skills. Curious and eager to learn new technologies and domains. Desired Skills: Knowledge of Django, Web API. Strong exposure to MVC. Preferences: Graduate degree in Computer Science with 4 years of Python-based development. Gen AI Framework Professional Certification.

Posted 2 months ago

Apply

2.0 - 3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description Job Details: We are seeking a highly motivated and enthusiastic Junior Data Scientist with 2-3 years of experience to join our data science team. This role offers an exciting opportunity to contribute to both traditional Machine Learning projects for our commercial IoT platform (EDGE Live) and cutting-edge Generative AI initiatives. Position: Data Scientist Division & Department: Enabling Functions_Business Technology Group (BTG) Reporting To: Customer & Commercial Experience Products Leader Educational Qualifications Bachelor's degree in Mechanical Engineering, Computer Science, Data Science, Mathematics, or a related field. Experience: 2-3 years of hands-on experience with machine learning. Exposure to Generative AI concepts and techniques, such as Large Language Models (LLMs) and RAG architecture. Experience in manufacturing and with an IoT platform is preferable. Role And Responsibilities Key Objectives Machine Learning (ML) Assist in the development and implementation of machine learning models using frameworks such as TensorFlow, PyTorch, or scikit-learn. Help with Python development to integrate models with the overall application. Monitor and evaluate model performance using appropriate metrics and techniques. Generative AI Build Gen AI-based tools for various business use cases by fine-tuning and adapting pre-trained generative models. Support the exploration and experimentation with Generative AI models. Research & Learning Stay up-to-date with the latest advancements and help with POCs. Proactively research and propose new techniques and tools to improve our data science capabilities. Collaboration And Communication Work closely with cross-functional teams, including product managers, engineers, and business stakeholders, to understand requirements and deliver impactful solutions. Communicate findings, model performance, and technical concepts to both technical and non-technical audiences. Technical Competencies Programming: Proficiency in Python, with experience in libraries like numpy, pandas, and matplotlib for data manipulation and visualization. ML Frameworks: Experience with TensorFlow, PyTorch, or scikit-learn. Cloud & Deployment: Basic understanding of cloud platforms such as Databricks, Google Cloud Platform (GCP), or Microsoft Azure for model deployment. Data Processing & Evaluation: Knowledge of data preprocessing, feature engineering, and evaluation metrics such as accuracy, F1-score, and RMSE.
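As a small illustration of the evaluation metrics mentioned above (RMSE in particular), the following sketch trains and scores a regression model with scikit-learn; the synthetic dataset and model choice are assumptions.

```python
# Minimal sketch of evaluating a regression model with RMSE, as mentioned above.
# Dataset and model are illustrative assumptions (a generic sensor-style task).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scaler = StandardScaler().fit(X_train)                  # simple preprocessing step
model = Ridge(alpha=1.0).fit(scaler.transform(X_train), y_train)

pred = model.predict(scaler.transform(X_test))
rmse = np.sqrt(mean_squared_error(y_test, pred))
print(f"RMSE: {rmse:.2f}")
```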

Posted 2 months ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Praxair India Private Limited | Business Area: Digitalisation Data Scientist for AI Products (Global) Bangalore, Karnataka, India | Working Scheme: On-Site | Job Type: Regular / Permanent / Unlimited / FTE | Reference Code: req23348 It's about Being What's next. What's in it for you? A Data Scientist for AI Products (Global) will be responsible for working in the Artificial Intelligence team, Linde's global corporate AI division engaged with real business challenges and opportunities in multiple countries. The focus of this role is to support the AI team with extending existing and building new AI products for a vast number of use cases across Linde’s business and value chain. You'll collaborate across different business and corporate functions in an international team composed of Project Managers, Data Scientists, Data and Software Engineers in the AI team and others in Linde's Global AI team. As a Data Scientist AI, you will support Linde’s AI team with extending existing and building new AI products for a vast number of use cases across Linde’s business and value chain. At Linde, the sky is not the limit. If you’re looking to build a career where your work reaches beyond your job description and betters the people with whom you work, the communities we serve, and the world in which we all live, at Linde, your opportunities are limitless. Be Linde. Be Limitless. Team Making an impact. What will you do? You will work directly with a variety of different data sources, types and structures to derive actionable insights. Developing, customizing and managing AI software products based on Machine and Deep Learning backends will be your tasks. Your role includes strong support on replication of existing products and pipelines to other systems and geographies. In addition to that you will support in architectural design and defining data requirements for new developments. It will be your responsibility to interact with business functions in identifying opportunities with potential business impact and to support development and deployment of models into production. Winning in your role. Do you have what it takes? You have a Bachelor's or Master's degree in Data Science, Computational Statistics/Mathematics, Computer Science, Operations Research or a related field. You have a strong understanding of and practical experience with Multivariate Statistics, Machine Learning and Probability concepts. Further, you gained experience in articulating business questions and using quantitative techniques to arrive at a solution using available data. You demonstrate hands-on experience with preprocessing, feature engineering, feature selection and data cleansing on real-world datasets. Preferably you have work experience in an engineering or technology role. You bring a strong background in Python and handling large data sets using SQL in a business environment (pandas, numpy, matplotlib, seaborn, sklearn, keras, tensorflow, pytorch, statsmodels etc.) to the role. In addition you have sound knowledge of data architectures and concepts and practical experience in the visualization of large datasets, e.g. with Tableau or PowerBI. A results-driven mindset and excellent communication skills with high social competence give you the ability to structure a project from idea to experimentation to prototype to implementation. Very good English language skills are required. As a plus you have hands-on experience with DevOps and MS Azure, experience in Azure ML, Kedro or Airflow, experience in MLflow or similar. Why you will love working for us! 
Linde is a leading global industrial gases and engineering company, operating in more than 100 countries worldwide. We live our mission of making our world more productive every day by providing high-quality solutions, technologies and services which are making our customers more successful and helping to sustain and protect our planet. On the 1st of April 2020, Linde India Limited and Praxair India Private Limited successfully formed a joint venture, LSAS Services Private Limited. This company will provide Operations and Management (O&M) services to both existing organizations, which will continue to operate separately. LSAS carries forward the commitment towards sustainable development, championed by both legacy organizations. It also takes ahead the tradition of the development of processes and technologies that have revolutionized the industrial gases industry, serving a variety of end markets including chemicals & refining, food & beverage, electronics, healthcare, manufacturing, and primary metals. Whatever you seek to accomplish, and wherever you want those accomplishments to take you, a career at Linde provides limitless ways to achieve your potential, while making a positive impact in the world. Be Linde. Be Limitless. Have we inspired you? Let's talk about it! We are looking forward to receiving your complete application (motivation letter, CV, certificates) via our online job market. Any designations used of course apply to persons of all genders. The form of speech used here is for simplicity only. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, age, disability, protected veteran status, pregnancy, sexual orientation, gender identity or expression, or any other reason prohibited by applicable law. Praxair India Private Limited acts responsibly towards its shareholders, business partners, employees, society and the environment in every one of its business areas, regions and locations across the globe. The company is committed to technologies and products that unite the goals of customer value and sustainable development.

Posted 2 months ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Zuddl is a modular platform for events and webinars that helps event marketers plan and execute events that drive growth. Event teams from global organizations like Microsoft, Google, ServiceNow, Zylo, Postman, TransPerfect and the United Nations trust Zuddl. Our modular approach to event management lets B2B marketers and conference organizers decide which components they need to build the perfect event and scale their event program. Zuddl is an outcome-oriented platform with a focus on flexibility, and is more partner, less vendor. FUNDING: Zuddl, part of the Y Combinator 2020 batch, has raised $13.35 million in Series A funding led by Alpha Wave Incubation and Qualcomm Ventures, with participation from our existing investors GrowX Ventures and Waveform Ventures. What You'll Do Prototype LLM-powered features using frameworks like LangChain and the OpenAI Agents SDK to power content automation and intelligent workflows. Build and optimize Retrieval-Augmented Generation (RAG) systems: document ingestion, chunking, embedding with vector DBs, and LLM integration. Work with vector databases to implement similarity search for use cases like intelligent Q&A, content recommendation, and context-aware responses. Experiment with prompt engineering and fine-tuning techniques. Deploy LLM-based microservices and agents using Docker, K8s and CI/CD best practices. Analyze model metrics, document findings, and suggest improvements based on quantitative evaluations. Collaborate across functions—including product, design, and engineering—to align AI features with business needs and enhance user impact. Requirements Strong Python programming skills. Hands-on with LLMs—experience building, fine-tuning, or applying large language models. Familiarity with agentic AI frameworks, such as LangChain or the OpenAI Agents SDK (or any relevant tool). Understanding of RAG architectures and prior implementation in projects or prototypes. Experience with vector databases like FAISS, OpenSearch, etc. Portfolio of LLM-based projects, demonstrated via GitHub, notebooks, or other coding samples. Good to Have Capability to build full-stack web applications. Data analytics skills—data manipulation (Pandas/SQL), visualization (Matplotlib/Seaborn/Tableau), and statistical analysis. Worked with PostgreSQL, Metabase or relevant tools/databases. Strong ML fundamentals: regression, classification, clustering, deep learning techniques. Experience building recommender systems or hybrid ML solutions. Experience with deep learning frameworks: PyTorch, TensorFlow (or any relevant tool). Exposure to MLOps/DevOps tooling: Docker, Kubernetes, MLflow, Kubeflow (or any relevant tool). Why You Want To Work Here Opportunity to convert to a full-time role, based on performance and organisational requirements, after the end of the internship tenure. A culture built on trust, transparency, and integrity. Ground-floor opportunity at a fast-growing Series A startup. Competitive stipend. Work on AI-first features in an event-tech startup with global customers. Thrive in a remote-first, empowering culture fueled by ownership and trust.
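For the RAG and vector-search responsibilities above, here is a minimal sketch of similarity search with FAISS; the chunk texts are illustrative and random vectors stand in for real embeddings.

```python
# Minimal sketch of the vector-search step in a RAG pipeline, using FAISS.
# A real system would embed chunks with an embedding model; random vectors
# stand in for embeddings here purely for illustration.
import faiss
import numpy as np

dim = 384                               # typical sentence-embedding size (assumption)
chunks = ["refund policy ...", "event agenda ...", "speaker bios ..."]

rng = np.random.default_rng(0)
chunk_vecs = rng.random((len(chunks), dim), dtype=np.float32)   # placeholder embeddings

index = faiss.IndexFlatL2(dim)          # exact L2 search over the chunk vectors
index.add(chunk_vecs)

query_vec = rng.random((1, dim), dtype=np.float32)              # placeholder query embedding
distances, ids = index.search(query_vec, 2)                     # top-2 nearest chunks
for i in ids[0]:
    print("retrieved chunk:", chunks[i])
# The retrieved chunks would then be inserted into the LLM prompt as context.
```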

Posted 2 months ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Dear Associates, Greetings from TATA Consultancy Services!! Thank you for expressing your interest in exploring a career possibility with the TCS Family. Hiring For: Python AI ML, MLOps Must Have: Spark, Hadoop, PyTorch, TensorFlow, Matplotlib, Seaborn, Tableau, Power BI, scikit-learn, XGBoost, AWS, Azure, Databricks, PySpark, Python, SQL, Snowflake Experience: 5+ yrs Location: Mumbai / Pune If interested, kindly fill in the details and send your resume to nitu.sadhukhan@tcs.com. Note: only eligible candidates with relevant experience will be contacted further. Name: Contact No: Email id: Current Location: Preferred Location: Highest Qualification (Part time / Correspondence is not Eligible): Year of Passing (Highest Qualification): Total Experience: Relevant Experience: Current Organization: Notice Period: Current CTC: Expected CTC: Pan Number: Gap in years if any (Education / Career): Updated CV attached (Yes / No)? If attended any interview with TCS in the last 6 months: Available for walk-in drive on 14th June (Pune): Thanks & Regards, Nitu Sadhukhan Talent Acquisition Group Tata Consultancy Services Let's Connect: linkedin.com/in/nitu-sadhukhan-16a580179 Nitu.sadhukhan@tcs.com

Posted 2 months ago

Apply

0 years

0 - 0 Lacs

Cochin

On-site

We are seeking a dynamic and experienced AI Trainer with expertise in Machine Learning, Deep Learning, and Generative AI, including LLMs (Large Language Models). The candidate will train students and professionals in real-world applications of AI/ML as well as the latest trends in GenAI such as ChatGPT, LangChain, Hugging Face Transformers, Prompt Engineering, and RAG (Retrieval-Augmented Generation). Key Responsibilities: Deliver hands-on training sessions in AI, ML, Deep Learning, and Generative AI. Teach the fundamentals and implementation of algorithms like regression, classification, clustering, decision trees, neural networks, CNNs, and RNNs. Train students in LLMs (e.g., OpenAI GPT, Meta LLaMA, Google Gemini) and prompt engineering techniques, as well as LangChain, Hugging Face Transformers, LLM APIs (OpenAI, Cohere, Anthropic, Google Vertex AI), vector databases (FAISS, Pinecone, Weaviate), and RAG pipelines. Design and evaluate practical labs and capstone projects (e.g., chatbot, image generator, smart assistants). Keep training materials updated with the latest industry developments and tools. Provide mentorship for student projects and support during hackathons or workshops. Required Skills: AI/ML Core: Python, NumPy, pandas, scikit-learn, Matplotlib, Jupyter. Good knowledge of Machine Learning and Deep Learning algorithms. Deep Learning: TensorFlow / Keras / PyTorch, OpenCV (for Computer Vision), NLTK/spaCy (for NLP). Generative AI & LLM: Prompt engineering (zero-shot, few-shot, chain-of-thought), LangChain and LlamaIndex (RAG frameworks), Hugging Face Transformers, OpenAI API, Cohere, Anthropic, Google Gemini, etc. Vector DBs like FAISS, ChromaDB, Pinecone, Weaviate. Streamlit, Gradio (for app prototyping). Qualifications: B.E/B.Tech/M.Tech/M.Sc in AI, Data Science, Computer Science, or a related field. Practical experience in AI/ML, LLMs, or GenAI projects. Previous experience as a developer/trainer/corporate instructor is a plus. Salary / Remuneration: ₹30,000 – ₹75,000/month based on experience and engagement type. Job Type: Full-time Pay: ₹30,000.00 - ₹75,000.00 per month Schedule: Day shift Application Question(s): How many years of experience do you have? Can you commute to Kakkanad, Kochi? What is your expected salary? Work Location: In person
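As a small illustration of the prompt-engineering topics in this syllabus (zero-shot vs. few-shot), the sketch below only builds the prompt strings; no LLM API is called, and the example sentences are invented.

```python
# Minimal sketch contrasting zero-shot and few-shot prompts, as covered in the
# syllabus above. The strings would be sent to whichever LLM provider the
# training uses (OpenAI, Cohere, etc.); nothing is called here.
question = "Classify the sentiment of: 'The workshop was fantastic!'"

zero_shot = question + "\nAnswer with Positive, Negative, or Neutral."

few_shot = (
    "Classify the sentiment of each sentence as Positive, Negative, or Neutral.\n"
    "Sentence: 'The demo kept crashing.' -> Negative\n"
    "Sentence: 'The venue was okay.' -> Neutral\n"
    "Sentence: 'The workshop was fantastic!' ->"
)

print(zero_shot)
print("---")
print(few_shot)
```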

Posted 2 months ago

Apply

10.0 years

0 Lacs

Chennai

On-site

As a Principal AI Engineer, the candidate will be part of a high-performing team working on exciting opportunities in AI within Ford Credit. We are looking for a highly skilled, technical, hands-on AI engineer with a solid background in building end-to-end AI applications, exhibiting a strong aptitude for learning and keeping up with the latest advances in AI: Machine Learning (supervised/unsupervised learning), Neural Networks (ANN, CNN, RNN, LSTM, decision trees, encoders, decoders), Natural Language Processing, and Generative AI (LLMs, LangChain, RAG, vector databases). The candidate should be able to lead technical discussions and act as a technical mentor for the team. Professional Experience: Potential candidates should possess 10+ years of strong working experience in AI. BE/MSc/MTech/ME/PhD (Computer Science/Maths, Statistics). Possess a strong analytical mindset and be very comfortable with data. Experience with handling both relational and non-relational data. Hands-on experience with analytics methods (descriptive/predictive/prescriptive), statistical analysis, probability and data visualization tools (Python: Matplotlib, Seaborn). Background in software engineering with excellent data science working experience. Technical Experience: Develop Machine Learning (supervised/unsupervised learning), Neural Networks (ANN, CNN, RNN, LSTM, decision trees, encoders, decoders), Natural Language Processing, Generative AI (LLMs, LangChain, RAG, vector databases). Excellent communication and presentation skills. Ability to do stakeholder management. Ability to collaborate with a cross-functional team involving data engineers, solution architects, application engineers, and product teams across time zones to develop data and model pipelines. Ability to drive and mentor the team technically, leveraging cutting-edge AI and Machine Learning principles, and develop production-ready AI solutions. Mentor the team of data scientists and assume responsibility for the delivery of use cases. Ability to scope the problem statement, data preparation, training and making the AI model production-ready. Work with business partners to understand the problem statement and translate it into an analytical problem. Ability to manipulate structured and unstructured data. Develop, test and improve existing machine learning models. Analyse large and complex data sets to derive valuable insights. Research and implement best practices to enhance existing machine learning infrastructure. Develop prototypes for future exploration. Design and evaluate approaches for handling large volumes of real data streams. Ability to determine appropriate analytical methods to be used. Understanding of statistics and hypothesis testing.

Posted 2 months ago

Apply

8.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. RCE-Risk Data Engineer-Leads Job Description: Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior technical, hands-on delivery role, requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 8-10 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the 3 key pillars that our team supports (Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation. In this role you will: Develop, maintain and optimize backend systems and RESTful APIs using Python and Flask. Apply concurrent processing strategies and performance optimization for complex architectures. Write clean, maintainable and well-documented code. Develop comprehensive test suites to ensure code quality and reliability. Work independently to deliver features and fix issues, with a few hours of overlap for real-time collaboration. Integrate backend services with databases and APIs. Collaborate asynchronously with cross-functional team members. Participate in occasional team meetings, code reviews and planning sessions. Core/Must-Have Skills: Should have a minimum of 6+ years of professional Python development experience. Should have a strong understanding of computer science fundamentals (data structures, algorithms). Should have 6+ years of experience in Flask and RESTful API development. Should possess knowledge of container technologies (Docker, Kubernetes). Should possess experience implementing interfaces in Python. Should know how to use Python generators for efficient memory management. Should have a good understanding of the Pandas, NumPy and Matplotlib libraries for data analytics and reporting. Should know how to implement multi-threading and enforce parallelism in Python. Should understand the Global Interpreter Lock (GIL) in Python and its implications for multithreading and multiprocessing. Should have a good understanding of SQLAlchemy to interact with databases. Should possess knowledge of implementing ETL transformations using Python libraries. Should be comfortable writing list comprehensions in Python. Collaborate with cross-functional teams to ensure successful implementation of solutions. Good to have: Exposure to Data Science libraries or data-centric development. Understanding of authentication and authorization (e.g. JWT, OAuth). Basic knowledge of frontend technologies (HTML/CSS/JavaScript) is a bonus but not required. Experience with cloud services (AWS, GCP or Azure). EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.
Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
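As a hedged illustration of the "Python generators for efficient memory management" requirement above, a minimal sketch (the file name is hypothetical):

```python
# Minimal sketch of generator-based memory management: streaming a large file
# line by line instead of loading it all into memory at once.
def read_amounts(path):
    """Yield one parsed amount at a time; only one line is in memory at once."""
    with open(path) as fh:
        for line in fh:
            yield float(line.strip())

def total(path):
    return sum(read_amounts(path))       # sum() consumes the generator lazily

# Usage (assumes a file of one number per line):
# print(total("transactions.txt"))
```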

Posted 2 months ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. RCE-Risk Data Engineer-Leads Job Description: Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior technical, hands-on delivery role, requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 8-10 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the 3 key pillars that our team supports (Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation. In this role you will: Develop, maintain and optimize backend systems and RESTful APIs using Python and Flask. Apply concurrent processing strategies and performance optimization for complex architectures. Write clean, maintainable and well-documented code. Develop comprehensive test suites to ensure code quality and reliability. Work independently to deliver features and fix issues, with a few hours of overlap for real-time collaboration. Integrate backend services with databases and APIs. Collaborate asynchronously with cross-functional team members. Participate in occasional team meetings, code reviews and planning sessions. Core/Must-Have Skills: Should have a minimum of 6+ years of professional Python development experience. Should have a strong understanding of computer science fundamentals (data structures, algorithms). Should have 6+ years of experience in Flask and RESTful API development. Should possess knowledge of container technologies (Docker, Kubernetes). Should possess experience implementing interfaces in Python. Should know how to use Python generators for efficient memory management. Should have a good understanding of the Pandas, NumPy and Matplotlib libraries for data analytics and reporting. Should know how to implement multi-threading and enforce parallelism in Python. Should understand the Global Interpreter Lock (GIL) in Python and its implications for multithreading and multiprocessing. Should have a good understanding of SQLAlchemy to interact with databases. Should possess knowledge of implementing ETL transformations using Python libraries. Should be comfortable writing list comprehensions in Python. Collaborate with cross-functional teams to ensure successful implementation of solutions. Good to have: Exposure to Data Science libraries or data-centric development. Understanding of authentication and authorization (e.g. JWT, OAuth). Basic knowledge of frontend technologies (HTML/CSS/JavaScript) is a bonus but not required. Experience with cloud services (AWS, GCP or Azure). EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.
Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
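For the GIL requirement in this listing, a minimal sketch contrasting threads and processes on CPU-bound work; the workload size is arbitrary.

```python
# Minimal sketch of the GIL point in the posting: CPU-bound work does not speed up
# with threads (only one bytecode stream runs at a time), but it can with processes.
import time
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def busy(n):
    return sum(i * i for i in range(n))

def timed(executor_cls):
    start = time.perf_counter()
    with executor_cls(max_workers=4) as ex:
        list(ex.map(busy, [2_000_000] * 4))
    return time.perf_counter() - start

if __name__ == "__main__":              # guard required for multiprocessing on Windows/macOS
    print("threads  :", timed(ThreadPoolExecutor))
    print("processes:", timed(ProcessPoolExecutor))
```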

Posted 2 months ago

Apply

8.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. RCE-Risk Data Engineer-Leads Job Description: Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior technical, hands-on delivery role, requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 8-10 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the 3 key pillars that our team supports (Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation. In this role you will: Develop, maintain and optimize backend systems and RESTful APIs using Python and Flask. Apply concurrent processing strategies and performance optimization for complex architectures. Write clean, maintainable and well-documented code. Develop comprehensive test suites to ensure code quality and reliability. Work independently to deliver features and fix issues, with a few hours of overlap for real-time collaboration. Integrate backend services with databases and APIs. Collaborate asynchronously with cross-functional team members. Participate in occasional team meetings, code reviews and planning sessions. Core/Must-Have Skills: Should have a minimum of 6+ years of professional Python development experience. Should have a strong understanding of computer science fundamentals (data structures, algorithms). Should have 6+ years of experience in Flask and RESTful API development. Should possess knowledge of container technologies (Docker, Kubernetes). Should possess experience implementing interfaces in Python. Should know how to use Python generators for efficient memory management. Should have a good understanding of the Pandas, NumPy and Matplotlib libraries for data analytics and reporting. Should know how to implement multi-threading and enforce parallelism in Python. Should understand the Global Interpreter Lock (GIL) in Python and its implications for multithreading and multiprocessing. Should have a good understanding of SQLAlchemy to interact with databases. Should possess knowledge of implementing ETL transformations using Python libraries. Should be comfortable writing list comprehensions in Python. Collaborate with cross-functional teams to ensure successful implementation of solutions. Good to have: Exposure to Data Science libraries or data-centric development. Understanding of authentication and authorization (e.g. JWT, OAuth). Basic knowledge of frontend technologies (HTML/CSS/JavaScript) is a bonus but not required. Experience with cloud services (AWS, GCP or Azure). EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.
Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
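As an illustration of the SQLAlchemy requirement above, a minimal sketch against an in-memory SQLite database; the table and column names are invented.

```python
# Minimal sketch of using SQLAlchemy to interact with a database.
# An in-memory SQLite database and a toy table are assumed for illustration.
from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///:memory:")

with engine.begin() as conn:             # begin() commits automatically on success
    conn.execute(text("CREATE TABLE risks (id INTEGER PRIMARY KEY, name TEXT)"))
    conn.execute(
        text("INSERT INTO risks (name) VALUES (:name)"),
        [{"name": "credit"}, {"name": "market"}],
    )
    rows = conn.execute(text("SELECT id, name FROM risks")).fetchall()

for row in rows:
    print(row.id, row.name)
```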

Posted 2 months ago

Apply

16.0 years

1 - 6 Lacs

Noida

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities: WHAT Business Knowledge: Capable of understanding the requirements for the entire project (not just own features) Capable of working closely with PMG during the design phase to drill down into detailed nuances of the requirements Has the ability and confidence to question the motivation behind certain requirements and work with PMG to refine them. Design: Can design and implement machine learning models and algorithms Can articulate and evaluate pros/cons of different AI/ML approaches Can generate cost estimates for model training and deployment Coding/Testing: Builds and optimizes machine learning pipelines Knows & brings in external ML frameworks and libraries Consistently avoids common pitfalls in model development and deployment HOW Quality: Solves cross-functional problems using data-driven approaches Identifies impacts/side effects of models outside of immediate scope of work Identifies cross-module issues related to data integration and model performance Identifies problems predictively using data analysis Productivity: Capable of working on multiple AI/ML projects simultaneously and context switching between them Process: Enforces process standards for model development and deployment. Independence: Acts independently to determine methods and procedures on new or special assignments Prioritizes large tasks and projects effectively Agility: Release Planning: Works with the PO to do high-level release commitment and estimation Works with PO on defining stories of appropriate size for model development Agile Maturity: Able to drive the team to achieve a high level of accomplishment on the committed stories for each iteration Shows Agile leadership qualities and leads by example WITH Team Work: Capable of working with development teams and identifying the right division of technical responsibility based on skill sets. Capable of working with external teams (e.g., Support, PO, etc.) that have significantly different technical skill sets and managing the discussions based on their needs Initiative: Capable of creating innovative AI/ML solutions that may include changes to requirements to create a better solution Capable of thinking outside-the-box to view the system as it should be rather than only how it is Proactively generates a continual stream of ideas and pushes to review and advance ideas if they make sense Takes initiative to learn how AI/ML technology is evolving outside the organization Takes initiative to learn how the system can be improved for the customers Should make problems open new doors for innovations Communication: Communicates complex AI/ML concepts internally with ease Accountability: Well versed in all areas of the AI/ML stack (data preprocessing, model training, evaluation, deployment, etc.) 
and aware of all components in play Leadership: Disagree without being disagreeable Use conflict as a way to drill deeper and arrive at better decisions Frequent mentorship Builds ad-hoc cross-department teams for specific projects or problems Can achieve broad scope 'buy in' across project teams and across departments Takes calculated risks Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: B.E/B.Tech/MCA/MSc/MTech (Minimum 16 years of formal education, Correspondence courses are not relevant) 5+ years of experience working on multiple layers of technology Experience deploying and maintaining ML models in production Experience in Agile teams Experience with one or more data-oriented workflow orchestration frameworks (Airflow, KubeFlow etc.) Working experience or good knowledge of cloud platforms (e.g., Azure, AWS, OCI) Ability to design, implement, and maintain CI/CD pipelines for MLOps and DevOps function Familiarity with traditional software monitoring, scaling, and quality management (QMS) Knowledge of model versioning and deployment using tools like MLflow, DVC, or similar platforms Familiarity with data versioning tools (Delta Lake, DVC, LakeFS, etc.) Demonstrate hands-on knowledge of OpenSource adoption and use cases Good understanding of Data/Information security Proficient in Data Structures, ML Algorithms, and ML lifecycle Product/Project/Program Related Tech Stack: Machine Learning Frameworks: Scikit-learn, TensorFlow, PyTorch Programming Languages: Python, R, Java Data Processing: Pandas, NumPy, Spark Visualization: Matplotlib, Seaborn, Plotly Familiarity with model versioning tools (MLFlow, etc.) Cloud Services: Azure ML, AWS SageMaker, Google Cloud AI GenAI: OpenAI, Langchain, RAG etc. Demonstrate good knowledge in Engineering Practices Demonstrates excellent problem-solving skills Proven excellent verbal, written, and interpersonal communication skills At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
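For the MLflow model-versioning item above, a minimal sketch assuming a local tracking store and an illustrative scikit-learn model; run names and parameters are arbitrary.

```python
# Minimal sketch of experiment tracking and model versioning with MLflow,
# one of the tools named above. Assumes a local ./mlruns tracking store.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

with mlflow.start_run(run_name="iris-baseline"):
    model = LogisticRegression(max_iter=500).fit(X, y)
    mlflow.log_param("max_iter", 500)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")   # stores a versioned model artifact
```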

Posted 2 months ago

Apply

2.0 years

0 Lacs

Surat, Gujarat, India

On-site

We’re hiring a Python Developer with a strong understanding of Artificial Intelligence and Machine Learning. You will be responsible for designing, developing, and deploying scalable AI/ML solutions and Python-based backend applications. Note: Only Surat, Gujarat-based candidates should apply for this job. Role Expectations: Develop and maintain robust Python code for backend and AI/ML applications. Design and implement machine learning models for prediction, classification, recommendation, etc. Work on data preprocessing, feature engineering, model training, evaluation, and optimization. Collaborate with the frontend team, data scientists, and DevOps engineers to deploy ML models to production. Integrate AI/ML models into web or mobile applications. Write clean, efficient, and well-documented code. Stay updated with the latest trends and advancements in Python and ML. Soft skills: Problem-Solving, Analytical Thinking, Collaboration & Teamwork, Time Management, Attention to Detail. Required Skills: Proficiency in Python and Python-based libraries (NumPy, Pandas, Scikit-learn, etc.). Hands-on experience with AI/ML model development and deployment. Familiarity with TensorFlow, Keras, or PyTorch. Strong knowledge of data structures, algorithms, and object-oriented programming. Experience with REST APIs and Flask/Django frameworks. Basic knowledge of data visualization tools like Matplotlib or Seaborn. Understanding of version control tools (Git/GitHub). Good to Have: Experience with cloud platforms (AWS, GCP, or Azure). Knowledge of NLP, computer vision, or deep learning models. Experience working with large datasets and databases (SQL, MongoDB). Familiarity with containerization tools like Docker. Our Story: We’re chasing a world where tech doesn’t frustrate—it flows like a river carving its own path. Every line of code we hammer out is a brick in a future where tools don’t just function—they vanish into the background, so intuitive you barely notice them working their magic. We craft software and apps that tackle real problems head-on, not just pile up shiny features for the sake of a spec sheet. It starts with listening—really listening—to the headaches, the what-ifs, and the crazy ambitions others might shrug off. Then we build smart: solutions that cut through the clutter with surgical precision, designed to fit like a glove and run like a rocket. Unlock The Advantage: 5-day work week, 12 paid leaves + public holidays, Training and Development: certifications, Employee engagement activities: awards, community gatherings, Good infrastructure & onsite opportunity, Flexible working culture. Experience: 2-3 Years Job Type: Full Time (On-site)
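As a hedged illustration of the Flask plus scikit-learn skills listed above, a minimal sketch of serving a model behind a REST endpoint; the route name and payload shape are assumptions.

```python
# Minimal sketch of exposing an ML model behind a Flask REST API.
# Model, route, and field names are illustrative assumptions.
from flask import Flask, jsonify, request
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

app = Flask(__name__)

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=500).fit(X, y)   # trained once at startup

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]        # e.g. [5.1, 3.5, 1.4, 0.2]
    label = int(model.predict([features])[0])
    return jsonify({"prediction": label})

if __name__ == "__main__":
    app.run(port=5000, debug=True)
```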

Posted 2 months ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. RCE-Risk Data Engineer-Leads Job Description: Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior technical, hands-on delivery role, requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 8-10 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the 3 key pillars that our team supports (Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation. In this role you will: Develop, maintain and optimize backend systems and RESTful APIs using Python and Flask. Apply concurrent processing strategies and performance optimization for complex architectures. Write clean, maintainable and well-documented code. Develop comprehensive test suites to ensure code quality and reliability. Work independently to deliver features and fix issues, with a few hours of overlap for real-time collaboration. Integrate backend services with databases and APIs. Collaborate asynchronously with cross-functional team members. Participate in occasional team meetings, code reviews and planning sessions. Core/Must-Have Skills: Should have a minimum of 6+ years of professional Python development experience. Should have a strong understanding of computer science fundamentals (data structures, algorithms). Should have 6+ years of experience in Flask and RESTful API development. Should possess knowledge of container technologies (Docker, Kubernetes). Should possess experience implementing interfaces in Python. Should know how to use Python generators for efficient memory management. Should have a good understanding of the Pandas, NumPy and Matplotlib libraries for data analytics and reporting. Should know how to implement multi-threading and enforce parallelism in Python. Should understand the Global Interpreter Lock (GIL) in Python and its implications for multithreading and multiprocessing. Should have a good understanding of SQLAlchemy to interact with databases. Should possess knowledge of implementing ETL transformations using Python libraries. Should be comfortable writing list comprehensions in Python. Collaborate with cross-functional teams to ensure successful implementation of solutions. Good to have: Exposure to Data Science libraries or data-centric development. Understanding of authentication and authorization (e.g. JWT, OAuth). Basic knowledge of frontend technologies (HTML/CSS/JavaScript) is a bonus but not required. Experience with cloud services (AWS, GCP or Azure). EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.
Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
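For the "comprehensive test suites" responsibility in this listing, a minimal pytest sketch; the helper function under test is hypothetical and defined inline for illustration.

```python
# Minimal sketch of a small pytest suite for a hypothetical parsing helper.
import pytest

def normalise_amount(value: str) -> float:
    """Parse an amount like '1,250.50' into a float."""
    return float(value.replace(",", ""))

def test_normalise_plain_number():
    assert normalise_amount("42") == 42.0

def test_normalise_with_thousands_separator():
    assert normalise_amount("1,250.50") == pytest.approx(1250.50)

def test_normalise_rejects_garbage():
    with pytest.raises(ValueError):
        normalise_amount("not-a-number")
```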

Posted 2 months ago

Apply

8.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. RCE-Risk Data Engineer-Leads Job Description: Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior technical, hands-on delivery role, requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 8-10 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the 3 key pillars that our team supports (Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation. In this role you will: Develop, maintain and optimize backend systems and RESTful APIs using Python and Flask. Apply concurrent processing strategies and performance optimization for complex architectures. Write clean, maintainable and well-documented code. Develop comprehensive test suites to ensure code quality and reliability. Work independently to deliver features and fix issues, with a few hours of overlap for real-time collaboration. Integrate backend services with databases and APIs. Collaborate asynchronously with cross-functional team members. Participate in occasional team meetings, code reviews and planning sessions. Core/Must-Have Skills: Should have a minimum of 6+ years of professional Python development experience. Should have a strong understanding of computer science fundamentals (data structures, algorithms). Should have 6+ years of experience in Flask and RESTful API development. Should possess knowledge of container technologies (Docker, Kubernetes). Should possess experience implementing interfaces in Python. Should know how to use Python generators for efficient memory management. Should have a good understanding of the Pandas, NumPy and Matplotlib libraries for data analytics and reporting. Should know how to implement multi-threading and enforce parallelism in Python. Should understand the Global Interpreter Lock (GIL) in Python and its implications for multithreading and multiprocessing. Should have a good understanding of SQLAlchemy to interact with databases. Should possess knowledge of implementing ETL transformations using Python libraries. Should be comfortable writing list comprehensions in Python. Collaborate with cross-functional teams to ensure successful implementation of solutions. Good to have: Exposure to Data Science libraries or data-centric development. Understanding of authentication and authorization (e.g. JWT, OAuth). Basic knowledge of frontend technologies (HTML/CSS/JavaScript) is a bonus but not required. Experience with cloud services (AWS, GCP or Azure). EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.
Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
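For illustration, here is a minimal Python sketch of two of the skills this listing calls out: a generator that streams records without holding the whole file in memory, and a process pool for the CPU-bound step, since the GIL prevents CPU-bound threads from running in parallel. The file name, record format and scoring function are hypothetical, not part of the role.

# Minimal sketch (illustrative only): generators for memory efficiency plus
# multiprocessing for CPU-bound work, which the GIL keeps threads from parallelising.
import csv
from multiprocessing import Pool


def read_rows(path):
    """Yield rows one at a time so the whole file never sits in memory."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield row


def score(row):
    """Stand-in for a CPU-bound transformation; runs in worker processes."""
    return sum(len(value) for value in row.values())


if __name__ == "__main__":
    rows = read_rows("transactions.csv")  # hypothetical input file
    with Pool(processes=4) as pool:
        for result in pool.imap_unordered(score, rows, chunksize=100):
            pass  # aggregate or persist each result here

For I/O-bound work (API calls, database reads), threads or asyncio are usually the better fit, because the GIL is released while waiting on I/O.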

Posted 2 months ago

Apply

8.0 years

0 Lacs

Kanayannur, Kerala, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. RCE-Risk Data Engineer-Leads Job Description: Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior technical, hands-on delivery role, requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 8-10 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars that our team supports, i.e. Financial Crime, Financial Risk and Compliance technology transformation, the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation. In this role you will: Develop, maintain and optimize backend systems and RESTful APIs using Python and Flask. Apply concurrent processing strategies and performance optimization to complex architectures. Write clean, maintainable and well-documented code. Develop comprehensive test suites to ensure code quality and reliability. Work independently to deliver features and fix issues, with a few hours of overlap for real-time collaboration. Integrate backend services with databases and APIs. Collaborate asynchronously with cross-functional team members. Participate in occasional team meetings, code reviews and planning sessions. Core/Must-Have Skills: Should have 6+ years of professional Python development experience. Should have a strong understanding of computer science fundamentals (data structures, algorithms). Should have 6+ years of experience in Flask and RESTful API development (a short illustrative sketch follows this listing). Should possess knowledge of container technologies (Docker, Kubernetes). Should have experience implementing interfaces in Python. Should know how to use Python generators for efficient memory management. Should have a good understanding of the Pandas, NumPy and Matplotlib libraries for data analytics and reporting. Should know how to implement multi-threading and parallelism in Python. Should understand the Global Interpreter Lock (GIL) in Python and its implications for multithreading and multiprocessing. Should have a good understanding of SQLAlchemy for interacting with databases. Should possess knowledge of implementing ETL transformations using Python libraries. Should know Python techniques such as list comprehensions. Should collaborate with cross-functional teams to ensure successful implementation of solutions. Good to have: Exposure to data science libraries or data-centric development. Understanding of authentication and authorization (e.g. JWT, OAuth). Basic knowledge of frontend technologies (HTML/CSS/JavaScript) is a bonus but not required. Experience with cloud services (AWS, GCP or Azure). EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.
Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
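As a companion to the Flask and RESTful API skills this listing asks for, a minimal sketch of a JSON resource exposed over GET and POST. The route, resource name and payload fields are hypothetical.

# Minimal Flask sketch (illustrative only): one JSON resource with GET and POST.
from flask import Flask, jsonify, request

app = Flask(__name__)
RISKS = []  # in-memory store; a real service would sit on a database


@app.route("/api/risks", methods=["GET"])
def list_risks():
    # Return every stored record as JSON.
    return jsonify(RISKS)


@app.route("/api/risks", methods=["POST"])
def create_risk():
    # Accept a JSON body and store a new record with a simple incremental id.
    payload = request.get_json(force=True)
    risk = {"id": len(RISKS) + 1, "name": payload.get("name", "unnamed")}
    RISKS.append(risk)
    return jsonify(risk), 201


if __name__ == "__main__":
    app.run(debug=True)

In practice the handlers would be covered by a test suite and the app served behind a production WSGI server rather than the built-in development server.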

Posted 2 months ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. RCE-Risk Data Engineer-Leads Job Description: Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior technical, hands-on delivery role, requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 8-10 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars that our team supports, i.e. Financial Crime, Financial Risk and Compliance technology transformation, the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation. In this role you will: Develop, maintain and optimize backend systems and RESTful APIs using Python and Flask. Apply concurrent processing strategies and performance optimization to complex architectures. Write clean, maintainable and well-documented code. Develop comprehensive test suites to ensure code quality and reliability. Work independently to deliver features and fix issues, with a few hours of overlap for real-time collaboration. Integrate backend services with databases and APIs. Collaborate asynchronously with cross-functional team members. Participate in occasional team meetings, code reviews and planning sessions. Core/Must-Have Skills: Should have 6+ years of professional Python development experience. Should have a strong understanding of computer science fundamentals (data structures, algorithms). Should have 6+ years of experience in Flask and RESTful API development. Should possess knowledge of container technologies (Docker, Kubernetes). Should have experience implementing interfaces in Python. Should know how to use Python generators for efficient memory management. Should have a good understanding of the Pandas, NumPy and Matplotlib libraries for data analytics and reporting. Should know how to implement multi-threading and parallelism in Python. Should understand the Global Interpreter Lock (GIL) in Python and its implications for multithreading and multiprocessing. Should have a good understanding of SQLAlchemy for interacting with databases. Should possess knowledge of implementing ETL transformations using Python libraries (a short illustrative sketch follows this listing). Should know Python techniques such as list comprehensions. Should collaborate with cross-functional teams to ensure successful implementation of solutions. Good to have: Exposure to data science libraries or data-centric development. Understanding of authentication and authorization (e.g. JWT, OAuth). Basic knowledge of frontend technologies (HTML/CSS/JavaScript) is a bonus but not required. Experience with cloud services (AWS, GCP or Azure). EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.
Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
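A minimal sketch of the SQLAlchemy and ETL skills named in this listing: extract rows with a SQL query, transform them with pandas, and load the result into a target table. The SQLite database, table and column names are hypothetical.

# Minimal ETL sketch (illustrative only): extract with SQLAlchemy, transform
# with pandas, load back into a target table.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("sqlite:///risk.db")  # hypothetical local database

# Extract
raw = pd.read_sql("SELECT account_id, amount, booked_at FROM raw_transactions", engine)

# Transform: normalise the timestamp and aggregate per account per day
raw["booking_date"] = pd.to_datetime(raw["booked_at"]).dt.date
daily = (
    raw.groupby(["booking_date", "account_id"], as_index=False)["amount"]
    .sum()
    .rename(columns={"amount": "total_amount"})
)

# Load
daily.to_sql("daily_account_totals", engine, if_exists="replace", index=False)

The same extract-transform-load shape scales up by swapping the SQLite URL for a production database connection and chunking the reads.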

Posted 2 months ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

What You'll be doing: Dashboard Development: Design, develop, and maintain interactive and visually compelling dashboards using Power BI. Implement DAX queries and data models to support business intelligence needs. Optimize performance and usability of dashboards for various stakeholders. Python & Streamlit Applications: Build and deploy lightweight data applications using Streamlit for internal and external users. Integrate Python libraries (e.g., Pandas, NumPy, Plotly, Matplotlib) for data processing and visualization. Data Integration & Retrieval: Connect to and retrieve data from RESTful APIs, cloud storage (e.g., Azure Data Lake, Cognite Data Fusion), and SQL/NoSQL databases. Automate data ingestion pipelines and ensure data quality and consistency. Collaboration & Reporting: Work closely with business analysts, data engineers, and stakeholders to gather requirements and deliver insights. Present findings and recommendations through reports, dashboards, and presentations. Requirements: Bachelor’s or master’s degree in Computer Science, Data Science, Information Systems, or a related field. 3+ years of experience in data analytics or business intelligence roles. Proficiency in Power BI, including DAX, Power Query, and data modeling. Strong Python programming skills, especially with Streamlit, Pandas, and API integration. Experience with REST APIs, JSON/XML parsing, and cloud data platforms (Azure, AWS, or GCP). Familiarity with version control systems like Git. Excellent problem-solving, communication, and analytical skills. Preferred Qualifications: Experience with CI/CD pipelines for data applications. Knowledge of DevOps practices and containerization (Docker). Exposure to machine learning or statistical modeling is a plus.
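A minimal sketch of the Streamlit-plus-API pattern this listing describes: fetch JSON from a REST endpoint, load it into pandas, and render a table and a chart. The endpoint URL and column names are hypothetical, and the caching decorator assumes a recent Streamlit release.

# Minimal Streamlit sketch (illustrative only): pull data from a REST API and
# display it as a table and a line chart. Run with: streamlit run app.py
import pandas as pd
import requests
import streamlit as st

API_URL = "https://example.com/api/readings"  # hypothetical REST endpoint

st.title("Readings dashboard")


@st.cache_data(ttl=300)  # cache the API response for five minutes
def load_data(url: str) -> pd.DataFrame:
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return pd.DataFrame(response.json())


df = load_data(API_URL)
st.dataframe(df)  # raw records
st.line_chart(df.set_index("timestamp")["value"])  # hypothetical columns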

Posted 2 months ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About Us: Athena is India's largest institution in the "premium undergraduate study abroad" space. Founded 10 years ago by two Princeton graduates, Poshak Agrawal and Rahul Subramaniam, Athena is headquartered in Gurgaon, with offices in Mumbai and Bangalore, and caters to students from 26 countries. Athena’s vision is to help students become the best version of themselves. Athena’s transformative, holistic life coaching program embraces both depth and breadth, sciences and the humanities. Athena encourages students to deepen their theoretical knowledge and apply it to address practical issues confronting society, both locally and globally. Through our flagship program, our students have gotten into various universities, including Harvard University, Princeton University, Yale University, Stanford University, University of Cambridge, MIT, Brown, Cornell University, University of Pennsylvania, and University of Chicago, among others. Learn more about Athena: https://www.athenaeducation.co.in/article.aspx Role Overview We are looking for an AI/ML Engineer who can mentor high-potential scholars in creating impactful technology projects. This role requires a blend of strong engineering expertise, the ability to distill complex topics into digestible concepts, and a deep passion for student-driven innovation. You’ll help scholars explore the frontiers of AI—from machine learning models to generative AI systems—while coaching them in best practices and applied engineering. Key Responsibilities: Guide scholars through the full AI/ML development cycle—from problem definition, data exploration, and model selection to evaluation and deployment. Teach and assist in building: Supervised and unsupervised machine learning models. Deep learning networks (CNNs, RNNs, Transformers). NLP tasks such as classification, summarization, and Q&A systems. Provide mentorship in Prompt Engineering: Craft optimized prompts for generative models like GPT-4 and Claude. Teach the principles of few-shot, zero-shot, and chain-of-thought prompting. Experiment with fine-tuning and embeddings in LLM applications. Support scholars with real-world datasets (e.g., Kaggle, open data repositories) and help integrate APIs, automation tools, or MLOps workflows. Conduct internal training and code reviews, ensuring technical rigor in projects. Stay updated with the latest research, frameworks, and tools in the AI ecosystem. Technical Requirements: Proficiency in Python and ML libraries: scikit-learn, XGBoost, Pandas, NumPy. Experience with deep learning frameworks: TensorFlow, PyTorch, Keras. Strong command of machine learning theory, including: Bias-variance tradeoff, regularization, and model tuning. Cross-validation, hyperparameter optimization, and ensemble techniques (see the short sketch after this listing). Solid understanding of data processing pipelines, data wrangling, and visualization (Matplotlib, Seaborn, Plotly). Advanced AI & NLP: Experience with transformer architectures (e.g., BERT, GPT, T5, LLaMA). Hands-on with LLM APIs: OpenAI (ChatGPT), Anthropic, Cohere, Hugging Face. Understanding of embedding-based retrieval, vector databases (e.g., Pinecone, FAISS), and Retrieval-Augmented Generation (RAG). Familiarity with AutoML tools, MLflow, Weights & Biases, and cloud AI platforms (AWS SageMaker, Google Vertex AI). Prompt Engineering & GenAI: Proficiency in crafting effective prompts using: Instruction tuning. Role-playing and system prompts. Prompt chaining tools like LangChain or LlamaIndex. Understanding of AI safety, bias mitigation, and interpretability.
Required Qualifications: Bachelor’s degree from a Tier-1 Engineering College in Computer Science, Engineering, or a related field. 2-5 years of relevant experience in ML/AI roles. Portfolio of projects or publications in AI/ML (GitHub, blogs, competitions, etc.). Passion for education, mentoring, and working with high school scholars. Excellent communication skills, with the ability to convey complex concepts to a diverse audience. Preferred Qualifications: Prior experience in student mentorship, teaching, or edtech. Exposure to Arduino, Raspberry Pi, or IoT for integrated AI/ML projects. Strong storytelling and documentation abilities to help scholars write compelling project reports and research summaries.
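As a small illustration of the cross-validation and hyperparameter-tuning topics referenced above, a scikit-learn sketch on a bundled dataset; the parameter grid is an arbitrary example rather than a recommended configuration.

# Minimal sketch (illustrative only): pipeline plus cross-validated grid search.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", RandomForestClassifier(random_state=42)),
])

# Arbitrary example grid; a real search would be guided by the problem.
param_grid = {"model__n_estimators": [100, 300], "model__max_depth": [None, 10]}

search = GridSearchCV(pipeline, param_grid, cv=5, scoring="f1")
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("held-out F1:", search.score(X_test, y_test))

The same structure carries over to mentoring projects: scholars swap in their own dataset, estimator and grid while the evaluation discipline stays the same.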

Posted 2 months ago

Apply

3.0 years

0 Lacs

India

Remote

About BeGig BeGig is the leading tech freelancing marketplace. We empower innovative, early-stage, non-tech founders to bring their visions to life by connecting them with top-tier freelance talent. By joining BeGig, you’re not just taking on one role—you’re signing up for a platform that will continuously match you with high-impact opportunities tailored to your expertise. Your Opportunity Join our network as a Data Scientist and help fast-growing startups transform data into actionable insights, predictive models, and intelligent decision-making tools. You’ll work on real-world data challenges across domains like marketing, finance, healthtech, and AI—with full flexibility to work remotely and choose the engagements that best fit your goals. Role Overview As a Data Scientist, you will: Extract Insights from Data: Analyze complex datasets to uncover trends, patterns, and opportunities. Build Predictive Models: Develop, validate, and deploy machine learning models that solve core business problems. Communicate Clearly: Work with cross-functional teams to present findings and deliver data-driven recommendations. What You’ll Do Analytics & Modeling: Explore, clean, and analyze structured and unstructured data using statistical and ML techniques. Build predictive and classification models using tools like scikit-learn, XGBoost, TensorFlow, or PyTorch. Conduct A/B testing, customer segmentation, forecasting, and anomaly detection. Data Storytelling & Collaboration: Present complex findings in a clear, actionable way using data visualizations (e.g., Tableau, Power BI, Matplotlib). Work with product, marketing, and engineering teams to integrate models into applications or workflows. Technical Requirements & Skills Experience: 3+ years in data science, analytics, or a related field. Programming: Proficient in Python (preferred), R, and SQL. ML Frameworks: Experience with scikit-learn, TensorFlow, PyTorch, or similar tools. Data Handling: Strong understanding of data preprocessing, feature engineering, and model evaluation. Visualization: Familiar with visualization tools like Matplotlib, Seaborn, Plotly, Tableau, or Power BI. Bonus: Experience working with large datasets, cloud platforms (AWS/GCP), or MLOps practices. What We’re Looking For A data-driven thinker who can go beyond numbers to tell meaningful stories. A freelancer who enjoys solving real business problems using machine learning and advanced analytics. A strong communicator with the ability to simplify complex models for stakeholders. Why Join Us? Immediate Impact: Work on projects that directly influence product, growth, and strategy. Remote & Flexible: Choose your working hours and project commitments. Future Opportunities: BeGig will continue matching you with data science roles aligned to your strengths. Dynamic Network: Collaborate with startups building data-first, insight-driven products. Ready to turn data into decisions? Apply now to become a key Data Scientist for our client and a valued member of the BeGig network!
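A minimal sketch of the A/B-testing work this listing mentions: a chi-squared test on conversion counts for two variants. The counts are made-up numbers for illustration only.

# Minimal A/B test sketch (illustrative only): chi-squared test on conversions.
from scipy.stats import chi2_contingency

# Hypothetical results: [converted, not converted] for each variant.
variant_a = [120, 880]  # 12.0% conversion
variant_b = [150, 850]  # 15.0% conversion

chi2, p_value, _, _ = chi2_contingency([variant_a, variant_b])
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected.")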

Posted 2 months ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

As a Data Scientist, you will work with a cross-functional team to identify business challenges and provide data-driven insights. You'll be responsible for data exploration, feature engineering, model development, and production deployment of machine learning solutions. We are seeking someone passionate about working with diverse datasets and applying machine learning techniques to deliver meaningful results. Key Responsibilities: • Collaborate with internal teams to understand business requirements and translate them into data-driven solutions. • Perform exploratory data analysis, data cleaning, and transformation. • Develop and deploy machine learning models and algorithms for predictive and prescriptive analysis. • Conduct A/B testing and evaluate the impact of model implementations. • Generate data visualizations and reports to communicate insights to stakeholders. • Stay updated with the latest developments in data science, machine learning, and industry trends. Qualifications: • Bachelor's degree in Data Science, Computer Science, Mathematics, Statistics, or a related field. • 4+ years of experience working as a Data Scientist or in a similar role. • Strong proficiency in Python or R, and experience with machine learning libraries such as scikit-learn, TensorFlow, or PyTorch. • Knowledge of data processing frameworks, e.g., Pandas, NumPy, Spark. • Experience with data visualization tools like Tableau, Power BI, or Matplotlib. • Ability to query databases using SQL and familiarity with relational databases. • Familiarity with cloud platforms such as AWS, Azure, or GCP is a plus. • Strong analytical, problem-solving, and communication skills. • Fluent in Spanish and/or Portuguese, with good English proficiency. Must-Have Skills: Excellent English Communication Skills
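A minimal sketch of the exploratory-analysis and visualization steps this listing describes: load a dataset, apply basic cleaning, summarise it, and plot a monthly trend. The file name and column names are hypothetical.

# Minimal EDA sketch (illustrative only): load, clean, summarise, visualize.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("orders.csv", parse_dates=["order_date"])  # hypothetical input

# Basic cleaning: drop exact duplicates and fill missing amounts with zero.
df = df.drop_duplicates()
df["amount"] = df["amount"].fillna(0)

# Quick summary for stakeholders, then a monthly aggregate for plotting.
print(df.describe(include="all"))
monthly = df.set_index("order_date")["amount"].resample("M").sum()

monthly.plot(kind="bar", title="Monthly order amount")
plt.tight_layout()
plt.show()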

Posted 2 months ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Join us as an Analyst in the "Chief Control Office" at Barclays, where you'll spearhead the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionise our digital offerings, ensuring unparalleled customer experiences. You may be assessed on the key critical skills relevant for success in the role, such as experience with MS Office, SQL, Alteryx, Power Tools and Python, as well as job-specific skillsets. To be successful as an "Analyst", you should have experience with: Basic/Essential Qualifications: Graduate in any discipline. Experience in Controls, Governance, Reporting and Risk Management, preferably in a financial services organisation. Proficient in MS Office – PPT, Excel, Word & Visio. Proficient in SQL, Alteryx and Python. Generating Data Insights and Dashboards from large and diverse data sets (see the short sketch after this listing). Excellent experience with Tableau, Alteryx and MS Office (i.e. Advanced Excel, PowerPoint). Automation skills using VBA, Power Query, Power Apps, etc. Experience in using ETL tools. Good understanding of Risk and Control. Excellent communication skills (verbal and written). Good understanding of governance and control frameworks and processes. Highly motivated, business-focussed and forward-thinking. Experience in senior stakeholder management. Ability to manage relationships across multiple disciplines. Desirable Skillsets/Good To Have: Experience in data crunching/analysis, including automation. Experience in handling RDBMS (i.e. SQL/Oracle). Experience in Python, Data Science and Data Analytics tools and techniques, e.g. Matplotlib, Data Wrangling, Low Code/No Code environment development, preferably in a large bank on actual use cases. Understanding of Data Management Principles and data governance. Designing and managing SharePoint sites. Financial Services experience. This role will be based out of Pune. Purpose of the role: To design, develop and consult on the bank’s internal controls framework and supporting policies and standards across the organisation, ensuring it is robust, effective, and aligned to the bank’s overall strategy and risk appetite. Accountabilities: Identification and analysis of emerging and evolving risks across functions to understand their potential impact and likelihood. Communication of the purpose, structure, and importance of the control framework to all relevant stakeholders, including senior management and audit. Support to the development and implementation of the bank's internal controls framework and principles tailored to the bank's specific needs and risk profile, including design, monitoring, and reporting initiatives. Monitoring and maintenance of the control frameworks, to ensure compliance and adjust and update as internal and external requirements change. Embedment of the control framework across the bank through cross-collaboration, training sessions and awareness campaigns that foster a culture of knowledge sharing and improvement in risk management and the importance of internal control effectiveness. Analyst Expectations: To meet the needs of stakeholders/customers through specialist advice and support. Perform prescribed activities in a timely manner and to a high standard which will impact both the role itself and surrounding roles. Likely to have responsibility for specific processes within a team. They may lead and supervise a team, guiding and supporting professional development, allocating work requirements and coordinating team resources.
If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. For an individual contributor, they manage their own workload, take responsibility for the implementation of systems and processes within their own work area and participate in projects broader than the direct team. Execute work requirements as identified in processes and procedures, collaborating with and impacting on the work of closely related teams. Check work of colleagues within the team to meet internal and stakeholder requirements. Provide specialist advice and support pertaining to own work area. Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to. Deliver your work and areas of responsibility in line with relevant rules, regulations and codes of conduct. Maintain and continually build an understanding of how all teams in the area contribute to the objectives of the broader sub-function, delivering impact on the work of collaborating teams. Continually develop awareness of the underlying principles and concepts on which the work within the area of responsibility is based, building upon administrative/operational expertise. Make judgements based on practice and previous experience. Assess the validity and applicability of previous or similar experiences and evaluate options under circumstances that are not covered by procedures. Communicate sensitive or difficult information to customers in areas related specifically to customer advice or day-to-day administrative requirements. Build relationships with stakeholders/customers to identify and address their needs. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
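A minimal sketch of the SQL-plus-Python insight generation this listing highlights: query a controls dataset into pandas and summarise open issues for a dashboard-style view. The database, table and column names are hypothetical, not Barclays systems.

# Minimal sketch (illustrative only): pull control-issue data with SQL into
# pandas and summarise it for reporting.
import sqlite3

import matplotlib.pyplot as plt
import pandas as pd

conn = sqlite3.connect("controls.db")  # hypothetical local database
issues = pd.read_sql_query(
    "SELECT business_unit, severity, status FROM control_issues", conn
)
conn.close()

# Count open issues by business unit and severity.
open_issues = issues[issues["status"] == "Open"]
summary = open_issues.pivot_table(
    index="business_unit", columns="severity", aggfunc="size", fill_value=0
)
print(summary)

summary.plot(kind="bar", stacked=True, title="Open control issues by severity")
plt.tight_layout()
plt.show()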

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.


Featured Companies