1.0 - 4.0 years
6 - 11 Lacs
Hyderabad
Work from Office
Soul AI is a pioneering company founded by IIT Bombay and IIM Ahmedabad alumni, with a strong founding team from IITs, NITs, and BITS. We specialize in delivering high-quality human-curated data, AI-first scaled operations services, and more.

We are seeking a Deep Learning Engineer with at least 3 years of experience to build state-of-the-art deep learning models for applications such as computer vision, NLP, and recommendation systems.

Key Responsibilities:
- Develop and train deep learning models for various use cases.
- Optimize model performance and ensure scalability.
- Collaborate with the data science and engineering teams to integrate deep learning models into production systems.

Required Qualifications:
- 3+ years of experience in deep learning and machine learning.
- Expertise in deep learning frameworks such as TensorFlow, PyTorch, or Keras.
- Strong programming skills in Python and experience with GPU computing.

Why Join Us:
- Competitive pay (₹1200/hour).
- Flexible hours.
- Remote opportunity.

NOTE: Pay will vary by project and is typically up to Rs. 1200 per hour (if you work an average of 3 hours every day, that could be as high as Rs. …).

Shape the future of AI with Soul AI!
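For readers unfamiliar with the day-to-day of a role like this, a minimal PyTorch training loop illustrates the kind of work involved. This is an illustrative sketch only, not code from Soul AI; the synthetic data, model size, and hyperparameters are arbitrary placeholders.

```python
import torch
from torch import nn

# Synthetic classification data: 256 samples, 20 features, 3 classes (placeholder values).
X = torch.randn(256, 20)
y = torch.randint(0, 3, (256,))

model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Linear(64, 3),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    optimizer.zero_grad()
    logits = model(X)          # forward pass
    loss = loss_fn(logits, y)  # classification loss
    loss.backward()            # backpropagation
    optimizer.step()           # parameter update
    print(f"epoch {epoch}: loss = {loss.item():.4f}")
```

A production model would add train/validation splits, batching via DataLoader, and GPU placement, which is where the GPU-computing requirement comes in.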
Posted 1 month ago
2.0 - 5.0 years
7 - 11 Lacs
Hyderabad
Work from Office
Step into the world of AI innovation with the Deccan AI Experts Community (by Soul AI), where you become a creator, not just a consumer. We are reaching out to the top 1% of Soul AI's Data Visualization Engineers like YOU for a unique opportunity to work with industry leaders.

What's in it for you?
- Pay above market standards.
- The role is contract-based, with project timelines from 2 to 6 months, or freelancing.
- Be a part of an elite community of professionals who can solve complex AI challenges.

Work location could be:
- Remote
- Onsite at a client location (US, UAE, UK, India, etc.)
- Deccan AI's office (Hyderabad or Bangalore)

Responsibilities:
- Architect and implement enterprise-level BI solutions to support strategic decision-making, along with data democratization by enabling self-service analytics for non-technical users.
- Lead data governance and data quality initiatives to ensure consistency, and design data pipelines and automated reporting solutions using SQL and Python.
- Optimize big data queries and analytics workloads for cost efficiency, and implement real-time analytics dashboards and interactive reports.
- Mentor junior analysts and establish best practices for data visualization.

Required Skills:
- Advanced SQL, Python (Pandas, NumPy), and BI tools (Tableau, Power BI, Looker).
- Expertise in AWS (Athena, Redshift), GCP (BigQuery), or Snowflake.
- Experience with data governance, lineage tracking, and big data tools (Spark, Kafka).
- Exposure to machine learning and AI-powered analytics.

Nice to Have:
- Experience with graph analytics, geospatial data, and visualization libraries (D3.js, Plotly).
- Hands-on experience with BI automation and AI-driven analytics.

Who can be a part of the community?
We are looking for top-tier Data Visualization Engineers with expertise in analyzing and visualizing complex datasets. Proficiency in SQL, Tableau, Power BI, and Python (Pandas, NumPy, Matplotlib) is a plus. If you have experience in this field, this is your chance to collaborate with industry leaders.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: clear the assessments once you are shortlisted.
4. Profile matching: be patient while we align your skills and preferences with the available projects.
5. Project allocation: you'll be deployed on your preferred project!

Skip the noise. Focus on opportunities built for you!
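As a rough illustration of the "automated reporting solutions using SQL and Python" responsibility, the sketch below aggregates data with SQL and renders a chart with pandas and Matplotlib. It is a hedged example, not Deccan AI code; the in-memory SQLite table stands in for a warehouse such as Redshift, BigQuery, or Snowflake, and all names and figures are invented.

```python
import sqlite3
import pandas as pd
import matplotlib.pyplot as plt

# In-memory stand-in for a warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("North", 120.0), ("South", 95.5), ("North", 60.0), ("East", 180.25)],
)

# SQL does the aggregation; pandas handles the hand-off to plotting.
df = pd.read_sql_query(
    "SELECT region, SUM(revenue) AS total_revenue FROM orders "
    "GROUP BY region ORDER BY total_revenue DESC",
    conn,
)

ax = df.plot.bar(x="region", y="total_revenue", legend=False)
ax.set_ylabel("Revenue")
plt.tight_layout()
plt.savefig("revenue_by_region.png")  # artifact an automated report could embed
```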
Posted 1 month ago
1.0 - 4.0 years
6 - 11 Lacs
Bengaluru
Work from Office
Soul AI is a pioneering company founded by IIT Bombay and IIM Ahmedabad alumni, with a strong founding team from IITs, NITs, and BITS. We specialize in delivering high-quality human-curated data, AI-first scaled operations services, and more.

We are seeking a Deep Learning Engineer with at least 3 years of experience to build state-of-the-art deep learning models for applications such as computer vision, NLP, and recommendation systems.

Key Responsibilities:
- Develop and train deep learning models for various use cases.
- Optimize model performance and ensure scalability.
- Collaborate with the data science and engineering teams to integrate deep learning models into production systems.

Required Qualifications:
- 3+ years of experience in deep learning and machine learning.
- Expertise in deep learning frameworks such as TensorFlow, PyTorch, or Keras.
- Strong programming skills in Python and experience with GPU computing.

Why Join Us:
- Competitive pay (₹1200/hour).
- Flexible hours.
- Remote opportunity.

NOTE: Pay will vary by project and is typically up to Rs. 1200 per hour (if you work an average of 3 hours every day, that could be as high as Rs. …).

Shape the future of AI with Soul AI!
Posted 1 month ago
2.0 - 5.0 years
7 - 11 Lacs
Mumbai
Work from Office
Step into the world of AI innovation with the Deccan AI Experts Community (by Soul AI), where you become a creator, not just a consumer. We are reaching out to the top 1% of Soul AI's Data Visualization Engineers like YOU for a unique opportunity to work with industry leaders.

What's in it for you?
- Pay above market standards.
- The role is contract-based, with project timelines from 2 to 6 months, or freelancing.
- Be a part of an elite community of professionals who can solve complex AI challenges.

Work location could be:
- Remote
- Onsite at a client location (US, UAE, UK, India, etc.)
- Deccan AI's office (Hyderabad or Bangalore)

Responsibilities:
- Architect and implement enterprise-level BI solutions to support strategic decision-making, along with data democratization by enabling self-service analytics for non-technical users.
- Lead data governance and data quality initiatives to ensure consistency, and design data pipelines and automated reporting solutions using SQL and Python.
- Optimize big data queries and analytics workloads for cost efficiency, and implement real-time analytics dashboards and interactive reports.
- Mentor junior analysts and establish best practices for data visualization.

Required Skills:
- Advanced SQL, Python (Pandas, NumPy), and BI tools (Tableau, Power BI, Looker).
- Expertise in AWS (Athena, Redshift), GCP (BigQuery), or Snowflake.
- Experience with data governance, lineage tracking, and big data tools (Spark, Kafka).
- Exposure to machine learning and AI-powered analytics.

Nice to Have:
- Experience with graph analytics, geospatial data, and visualization libraries (D3.js, Plotly).
- Hands-on experience with BI automation and AI-driven analytics.

Who can be a part of the community?
We are looking for top-tier Data Visualization Engineers with expertise in analyzing and visualizing complex datasets. Proficiency in SQL, Tableau, Power BI, and Python (Pandas, NumPy, Matplotlib) is a plus. If you have experience in this field, this is your chance to collaborate with industry leaders.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: clear the assessments once you are shortlisted.
4. Profile matching: be patient while we align your skills and preferences with the available projects.
5. Project allocation: you'll be deployed on your preferred project!

Skip the noise. Focus on opportunities built for you!
Posted 1 month ago
1.0 - 4.0 years
6 - 11 Lacs
Mumbai
Work from Office
Soul AI is a pioneering company founded by IIT Bombay and IIM Ahmedabad alumni, with a strong founding team from IITs, NITs, and BITS. We specialize in delivering high-quality human-curated data, AI-first scaled operations services, and more.

We are seeking a Deep Learning Engineer with at least 3 years of experience to build state-of-the-art deep learning models for applications such as computer vision, NLP, and recommendation systems.

Key Responsibilities:
- Develop and train deep learning models for various use cases.
- Optimize model performance and ensure scalability.
- Collaborate with the data science and engineering teams to integrate deep learning models into production systems.

Required Qualifications:
- 3+ years of experience in deep learning and machine learning.
- Expertise in deep learning frameworks such as TensorFlow, PyTorch, or Keras.
- Strong programming skills in Python and experience with GPU computing.

Why Join Us:
- Competitive pay (₹1200/hour).
- Flexible hours.
- Remote opportunity.

NOTE: Pay will vary by project and is typically up to Rs. 1200 per hour (if you work an average of 3 hours every day, that could be as high as Rs. …).

Shape the future of AI with Soul AI!
Posted 1 month ago
2.0 - 5.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Step into the world of AI innovation with the Deccan AI Experts Community (by Soul AI), where you become a creator, not just a consumer. We are reaching out to the top 1% of Soul AI's Data Visualization Engineers like YOU for a unique opportunity to work with industry leaders.

What's in it for you?
- Pay above market standards.
- The role is contract-based, with project timelines from 2 to 6 months, or freelancing.
- Be a part of an elite community of professionals who can solve complex AI challenges.

Work location could be:
- Remote
- Onsite at a client location (US, UAE, UK, India, etc.)
- Deccan AI's office (Hyderabad or Bangalore)

Responsibilities:
- Architect and implement enterprise-level BI solutions to support strategic decision-making, along with data democratization by enabling self-service analytics for non-technical users.
- Lead data governance and data quality initiatives to ensure consistency, and design data pipelines and automated reporting solutions using SQL and Python.
- Optimize big data queries and analytics workloads for cost efficiency, and implement real-time analytics dashboards and interactive reports.
- Mentor junior analysts and establish best practices for data visualization.

Required Skills:
- Advanced SQL, Python (Pandas, NumPy), and BI tools (Tableau, Power BI, Looker).
- Expertise in AWS (Athena, Redshift), GCP (BigQuery), or Snowflake.
- Experience with data governance, lineage tracking, and big data tools (Spark, Kafka).
- Exposure to machine learning and AI-powered analytics.

Nice to Have:
- Experience with graph analytics, geospatial data, and visualization libraries (D3.js, Plotly).
- Hands-on experience with BI automation and AI-driven analytics.

Who can be a part of the community?
We are looking for top-tier Data Visualization Engineers with expertise in analyzing and visualizing complex datasets. Proficiency in SQL, Tableau, Power BI, and Python (Pandas, NumPy, Matplotlib) is a plus. If you have experience in this field, this is your chance to collaborate with industry leaders.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: clear the assessments once you are shortlisted.
4. Profile matching: be patient while we align your skills and preferences with the available projects.
5. Project allocation: you'll be deployed on your preferred project!

Skip the noise. Focus on opportunities built for you!
Posted 1 month ago
2.0 - 5.0 years
7 - 11 Lacs
Kolkata
Work from Office
Step into the world of AI innovation with the Deccan AI Experts Community (by Soul AI), where you become a creator, not just a consumer. We are reaching out to the top 1% of Soul AI's Data Visualization Engineers like YOU for a unique opportunity to work with industry leaders.

What's in it for you?
- Pay above market standards.
- The role is contract-based, with project timelines from 2 to 6 months, or freelancing.
- Be a part of an elite community of professionals who can solve complex AI challenges.

Work location could be:
- Remote
- Onsite at a client location (US, UAE, UK, India, etc.)
- Deccan AI's office (Hyderabad or Bangalore)

Responsibilities:
- Architect and implement enterprise-level BI solutions to support strategic decision-making, along with data democratization by enabling self-service analytics for non-technical users.
- Lead data governance and data quality initiatives to ensure consistency, and design data pipelines and automated reporting solutions using SQL and Python.
- Optimize big data queries and analytics workloads for cost efficiency, and implement real-time analytics dashboards and interactive reports.
- Mentor junior analysts and establish best practices for data visualization.

Required Skills:
- Advanced SQL, Python (Pandas, NumPy), and BI tools (Tableau, Power BI, Looker).
- Expertise in AWS (Athena, Redshift), GCP (BigQuery), or Snowflake.
- Experience with data governance, lineage tracking, and big data tools (Spark, Kafka).
- Exposure to machine learning and AI-powered analytics.

Nice to Have:
- Experience with graph analytics, geospatial data, and visualization libraries (D3.js, Plotly).
- Hands-on experience with BI automation and AI-driven analytics.

Who can be a part of the community?
We are looking for top-tier Data Visualization Engineers with expertise in analyzing and visualizing complex datasets. Proficiency in SQL, Tableau, Power BI, and Python (Pandas, NumPy, Matplotlib) is a plus. If you have experience in this field, this is your chance to collaborate with industry leaders.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: clear the assessments once you are shortlisted.
4. Profile matching: be patient while we align your skills and preferences with the available projects.
5. Project allocation: you'll be deployed on your preferred project!

Skip the noise. Focus on opportunities built for you!
Posted 1 month ago
1.0 - 4.0 years
6 - 11 Lacs
Kolkata
Work from Office
Soul AI is a pioneering company founded by IIT Bombay and IIM Ahmedabad alumni, with a strong founding team from IITs, NITs, and BITS. We specialize in delivering high-quality human-curated data, AI-first scaled operations services, and more.

We are seeking a Deep Learning Engineer with at least 3 years of experience to build state-of-the-art deep learning models for applications such as computer vision, NLP, and recommendation systems.

Key Responsibilities:
- Develop and train deep learning models for various use cases.
- Optimize model performance and ensure scalability.
- Collaborate with the data science and engineering teams to integrate deep learning models into production systems.

Required Qualifications:
- 3+ years of experience in deep learning and machine learning.
- Expertise in deep learning frameworks such as TensorFlow, PyTorch, or Keras.
- Strong programming skills in Python and experience with GPU computing.

Why Join Us:
- Competitive pay (₹1200/hour).
- Flexible hours.
- Remote opportunity.

NOTE: Pay will vary by project and is typically up to Rs. 1200 per hour (if you work an average of 3 hours every day, that could be as high as Rs. …).

Shape the future of AI with Soul AI!
Posted 1 month ago
200.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Description
You are a strategic thinker passionate about driving solutions. You have found the right team. As an Associate within the VCG team, your primary responsibility will be to work on automation and redesign of existing implementations using Python. Alteryx skills are considered a plus.

Job Responsibilities
- Automate Excel tasks by developing Python scripts with openpyxl, pandas, and xlrd, focusing on data extraction, transformation, and generating reports with charts and pivot tables.
- Design and deploy interactive web applications using Streamlit, enabling real-time data interaction and integrating advanced analytics.
- Use Matplotlib and Seaborn to create charts and graphs, adding interactive features for dynamic data exploration tailored to specific business needs.
- Design intuitive user interfaces with PyQt or Flask, integrating data visualizations and ensuring secure access through authentication mechanisms.
- Perform data manipulation and exploratory analysis using Pandas and NumPy, and develop data pipelines to maintain data quality and support analytics.
- Write scripts to connect to external APIs, process data in JSON and XML formats, and ensure reliable data retrieval with robust error handling.
- Collaborate with cross-functional teams to gather requirements, provide technical guidance, and ensure alignment on project goals, fostering open communication.
- Demonstrate excellent problem-solving skills and the ability to troubleshoot and resolve technical issues.
- Adhere to the control, governance, and development standards for intelligent solutions.
- Bring strong communication skills and the ability to work collaboratively with different teams.

Required Qualifications, Capabilities, and Skills
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience in Python programming and automation.
- Experience with Python libraries such as Pandas, NumPy, PyQt, Streamlit, Matplotlib, Seaborn, openpyxl, xlrd, Flask, PyPDF2, pdfplumber, and SQLite.
- Analytical and quantitative aptitude, and attention to detail.
- Strong verbal and written communication skills.

ABOUT US
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world's most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management.

We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

About The Team
Our professionals in our Corporate Functions cover a diverse range of areas from finance and risk to human resources and marketing. Our corporate teams are an essential part of our company, ensuring that we're setting our businesses, clients, customers and employees up for success.

Global Finance & Business Management works to strategically manage capital, drive growth and efficiencies, maintain financial reporting and proactively manage risk. By providing information, analysis and recommendations to improve results and drive decisions, teams ensure the company can navigate all types of market conditions while protecting our fortress balance sheet.
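To make the Excel-automation responsibility above concrete, here is a minimal sketch using pandas and openpyxl to write a small summary workbook with an embedded chart. It is illustrative only, not the team's code; the file name, sheet name, and data are placeholders.

```python
import pandas as pd
from openpyxl.chart import BarChart, Reference

# Placeholder data standing in for an extracted and transformed dataset.
df = pd.DataFrame({"desk": ["Rates", "Credit", "FX"], "pnl": [1.2, -0.4, 0.7]})

with pd.ExcelWriter("summary.xlsx", engine="openpyxl") as writer:
    df.to_excel(writer, sheet_name="Summary", index=False)
    ws = writer.sheets["Summary"]

    # Build a bar chart over the cells just written (row 1 holds the headers).
    chart = BarChart()
    values = Reference(ws, min_col=2, min_row=1, max_row=len(df) + 1)
    categories = Reference(ws, min_col=1, min_row=2, max_row=len(df) + 1)
    chart.add_data(values, titles_from_data=True)
    chart.set_categories(categories)
    ws.add_chart(chart, "E2")
```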
Posted 1 month ago
0 years
0 Lacs
India
Remote
Job Title: Machine Learning Trainee – Intern
Location: Remote
Job Type: Internship (Full-Time)
Duration: 1–3 Months
Stipend: ₹25,000/month

About the Role
We are looking for a passionate and self-motivated Machine Learning Trainee (Intern) to join our team and gain hands-on experience in building and deploying machine learning models. As a trainee, you'll work alongside experienced data scientists and engineers on real-world datasets and contribute to projects that have a meaningful impact.

Key Responsibilities
- Assist in data preprocessing, cleaning, and feature engineering for various ML tasks.
- Support the development, training, testing, and evaluation of machine learning models.
- Participate in model deployment and performance monitoring.
- Conduct literature reviews and research to identify appropriate ML algorithms.
- Visualize and interpret results using tools like Matplotlib, Seaborn, or Power BI.
- Document workflows, experiments, and outcomes clearly and concisely.

Requirements
- Pursuing or recently completed a degree in Computer Science, Data Science, Engineering, or a related field.
- Strong knowledge of Python and popular ML libraries (scikit-learn, pandas, NumPy, etc.).
- Basic understanding of machine learning concepts such as supervised and unsupervised learning, model evaluation, and overfitting.
- Familiarity with Jupyter Notebooks, version control (Git), and data handling.
- Good problem-solving and analytical skills.

Preferred Qualifications
- Exposure to deep learning frameworks (TensorFlow, PyTorch) is a plus.
- Understanding of cloud platforms (AWS, GCP, or Azure) is a bonus.
- Prior hands-on experience through academic or personal ML projects.

What You'll Gain
- Hands-on experience with real-world ML problems and datasets.
- Mentorship from industry professionals.
- Certificate of Internship upon successful completion.
- A strong foundation to pursue advanced roles in AI and ML.
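For candidates wondering what the preprocessing/training/evaluation workflow looks like in practice, here is a minimal scikit-learn sketch on a bundled toy dataset. It is illustrative only and not a project deliverable.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Toy dataset standing in for real project data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Scaling and the model live in one pipeline so preprocessing is fit only on training data.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Evaluate on the held-out split (precision, recall, F1 per class).
print(classification_report(y_test, model.predict(X_test)))
```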
Posted 1 month ago
3.0 - 8.0 years
8 - 13 Lacs
Mumbai
Work from Office
In scope of position-based promotions (INTERNAL only)

Job Title: Capital & Liquidity Management Analyst
Location: Mumbai, India
Corporate Title: Analyst

Role Description
Group Capital Management plays a central role in the execution of DB's strategy. While Group Capital Management manages DB Group's solvency ratios (CET 1, T1, total capital ratio, leverage ratio, MREL/TLAC ratios, ECA ratio) together with business divisions and other infrastructure functions, EMEA Treasury manages, in addition to the solvency ratios of DB's EMEA entities, also the liquidity ratios and Treasury Pool activities. Thereby, EMEA Treasury links into DB Group's strategy and manages execution on a local level.

Treasury
Treasury at Deutsche Bank is responsible for the sourcing, management and optimization of liquidity and capital to deliver high-value risk management decisions. This is underpinned by a best-in-class integrated and consistent Treasury risk framework, which enables Treasury to clearly identify the Bank's resource demands, transparently set incentives by allocating resource costs to businesses, and manage to evolving regulation. Treasury's fiduciary mandate, which encompasses the Bank's funding pools, asset and liability management (ALM) and fiduciary buffer management, supports businesses in delivering their strategic targets at global and local level. Further, Treasury manages the optimization of all financial resources through all lenses to implement the group's strategic objective and maximize long-term return on average tangible shareholders' equity (RoTE).

The current role is part of the Treasury Office in DBC Mumbai. The role requires interactions with all key hubs, i.e. London, New York, Frankfurt and Singapore.

What we'll offer you
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities
The core deliverables for this role are:
- Write code and implement solutions based on specifications.
- Update, design and implement changes to existing software architecture.
- Build complex enhancements and resolve bugs.
- Build and execute unit tests and unit plans.
- Handle implementation tasks that are varied and complex, needing independent judgment.
- Build a technology solution which is sustainable, repeatable and agile.
- Align with business and gain understanding of different treasury functions.

Your skills and experience
- Must have: strong development experience in Python and Oracle-based applications
- Strong in algorithms, data structures and SQL
- Some experience with integration/build/testing tools
- Good to have: working knowledge of visualization libraries like Plotly, Matplotlib, Seaborn, etc.
- Exposure to webservice, webserver/application server-based development would be an added advantage but is not mandatory
- A basic understanding of balance sheet and Treasury concepts is desirable but not mandatory
- Effective organizational and interpersonal skills
- Self-starting willingness to get things done
- A highly motivated team player with a strong technical background and good communication skills
- Urgency: prioritize based on the need of the hour
- An aptitude to learn new tools and technologies
- Engineering graduate / BS or MS degree or equivalent experience relevant to the functional area
- 3+ years of software engineering or related experience is a must

How we'll support you

About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm

We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
Posted 1 month ago
4.0 years
0 Lacs
India
Remote
Job Title: AI/ML Engineer
Experience: 4+ Years
Location: Remote
Job Type: Full-Time

Job Summary:
We are looking for a passionate and results-driven AI/ML Engineer with 4+ years of experience in designing, building, and deploying machine learning models and intelligent systems. The ideal candidate should have solid programming skills and a strong grasp of data preprocessing, model evaluation, and MLOps practices. You will collaborate with cross-functional teams including data scientists, software engineers, and product managers to integrate intelligent features into applications and systems.

Key Responsibilities:
- Design, develop, train, and optimize machine learning and deep learning models for real-world applications.
- Preprocess, clean, and transform structured and unstructured data for model training and evaluation.
- Implement, test, and deploy models using APIs or microservices (Flask, FastAPI, etc.) in production environments.
- Use ML libraries and frameworks like Scikit-learn, TensorFlow, PyTorch, Hugging Face, XGBoost, etc.
- Monitor and retrain models as needed for performance, accuracy, and drift mitigation.
- Collaborate with software and data engineering teams to operationalize ML solutions using MLOps tools.
- Stay updated with emerging trends in AI/ML and suggest enhancements to existing systems.

Required Skills and Qualifications:
- Bachelor's or Master's in Computer Science, Engineering, AI/ML, Data Science, or a related field.
- 4+ years of hands-on experience in machine learning model development and deployment.
- Strong experience in Python and libraries like Pandas, NumPy, Scikit-learn, Matplotlib/Seaborn.
- Experience with deep learning frameworks such as TensorFlow, PyTorch, or Keras.
- Proficiency in model deployment using Flask, FastAPI, Docker, and REST APIs.
- Experience with version control (Git), model versioning, and experiment tracking (MLflow, Weights & Biases).
- Familiarity with cloud platforms like AWS (SageMaker), Azure ML, or GCP AI Platform.
- Knowledge of databases (SQL/NoSQL) and data pipelines (Airflow, Spark, etc.).
- Strong problem-solving and debugging skills, with an analytical mindset.
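As a hedged sketch of the "deploy models using APIs or microservices" requirement, a minimal FastAPI service might look like the following. The endpoint name, payload shape, and the placeholder model trained at startup are all invented for illustration; a real service would load a versioned model artifact instead.

```python
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

app = FastAPI()

# Placeholder model trained at startup; a production service would load a saved artifact.
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

class Features(BaseModel):
    values: list[float]  # expects the four iris features

@app.post("/predict")
def predict(features: Features):
    pred = model.predict([features.values])[0]
    return {"prediction": int(pred)}

# Run locally (assumes uvicorn is installed): uvicorn app:app --reload
```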
Posted 1 month ago
6.0 - 8.0 years
0 Lacs
Greater Chennai Area
On-site
About BNP Paribas India Solutions
Established in 2005, BNP Paribas India Solutions is a wholly owned subsidiary of BNP Paribas SA, the European Union's leading bank with an international reach. With delivery centers located in Bengaluru, Chennai and Mumbai, we are a 24x7 global delivery center. India Solutions services three business lines: Corporate and Institutional Banking, Investment Solutions and Retail Banking for BNP Paribas across the Group. Driving innovation and growth, we are harnessing the potential of over 10,000 employees to provide support and develop best-in-class solutions.

About BNP Paribas Group
BNP Paribas is the European Union's leading bank and a key player in international banking. It operates in 65 countries and has nearly 185,000 employees, including more than 145,000 in Europe. The Group has key positions in its three main fields of activity: Commercial, Personal Banking & Services for the Group's commercial & personal banking and several specialised businesses including BNP Paribas Personal Finance and Arval; Investment & Protection Services for savings, investment, and protection solutions; and Corporate & Institutional Banking, focused on corporate and institutional clients. Based on its strong diversified and integrated model, the Group helps all its clients (individuals, community associations, entrepreneurs, SMEs, corporates and institutional clients) to realize their projects through solutions spanning financing, investment, savings and protection insurance. In Europe, BNP Paribas has four domestic markets: Belgium, France, Italy, and Luxembourg. The Group is rolling out its integrated commercial & personal banking model across several Mediterranean countries, Turkey, and Eastern Europe. As a key player in international banking, the Group has leading platforms and business lines in Europe, a strong presence in the Americas as well as a solid and fast-growing business in Asia-Pacific. BNP Paribas has implemented a Corporate Social Responsibility approach in all its activities, enabling it to contribute to the construction of a sustainable future, while ensuring the Group's performance and stability.

Commitment to Diversity and Inclusion
At BNP Paribas, we passionately embrace diversity and are committed to fostering an inclusive workplace where all employees are valued, respected and can bring their authentic selves to work. We prohibit discrimination and harassment of any kind and our policies promote equal employment opportunity for all employees and applicants, irrespective of, but not limited to, their gender, gender identity, sex, sexual orientation, ethnicity, race, colour, national origin, age, religion, social status, mental or physical disabilities, veteran status, etc. As a global bank, we truly believe that inclusion and diversity of our teams is key to our success in serving our clients and the communities we operate in.

About Business Line/Function
The Intermediate Holding Company ("IHC") program structured at the U.S. level across poles of activities of BNP Paribas provides guidance, supports the analysis and impact assessment, and drives adjustments of the U.S. platform's operating model due to the drastic changes introduced by the Enhanced Prudential Standards ("EPS") for Foreign Banking Organizations ("FBOs") finalized by the Federal Reserve in February 2014, implementing Section 165 of the U.S. Dodd-Frank Act.
The IT Transversal Team is part of the Information Technology Group, which works simultaneously on a wide range of projects arising from business, strategic initiatives, regulatory changes, and reengineering of existing applications to improve functionality and efficiency.

Job Title: Python Developer
Date: June-25
Department: ITG - Fresh
Location: Chennai, Mumbai
Business Line / Function: Finance Dedicated Solutions
Number of Direct Reports: NA
Directorship / Registration: NA

Position Purpose
The Python Developer will play a critical role in building and maintaining financial applications and tools that support data processing, analysis, and reporting within a fast-paced financial services environment. This position involves developing scalable and secure systems. The developer will collaborate with business analysts and finance users or finance BAs to translate complex business requirements into efficient, high-quality software solutions. A strong understanding of financial concepts, data integrity, and regulatory compliance is essential. The detailed responsibilities are mentioned below.

Direct Responsibilities
- Proficient in object-oriented programming, especially Python, with a minimum of 6-8 years of core Python development experience.
- Strong competency with Python libraries such as Pandas and NumPy for data wrangling, analysis, and manipulation.
- Expertise in PySpark for large-scale data processing and loading into databases.
- Proficiency in data querying and manipulation with Oracle and PostgreSQL.
- Strong communication skills to effectively collaborate with team members and stakeholders.
- Familiarity with the Software Development Life Cycle (SDLC) process and its various stages, including experience with JIRA and Confluence.

Technical & Behavioral Competencies (in addition to the above)
- Good analytical, problem-solving, and communication skills.
- Engage in technical discussions and help improve systems and processes.

Nice to Have
- Familiarity with Plotly and Matplotlib for data visualization of large datasets.
- Skilled in API programming, handling JSON, CSV, and other unstructured data from various systems.
- Familiarity with JavaScript, CSS, and HTML.
- Experience with cloud architecture applications such as Dataiku or Databricks; competency with ETL tools.
- Knowledge of regulatory frameworks, RISK, CCAR, and GDPR.

Behavioural Skills
- Ability to collaborate / teamwork
- Critical thinking
- Ability to deliver / results driven
- Communication skills, oral and written

Transversal Skills
- Analytical ability
- Ability to develop and adapt a process
- Ability to understand, explain and support change
- Ability to develop others and improve their skills

Education Level: Bachelor's degree or equivalent
Experience Level: At least 5 years
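To illustrate the PySpark large-scale-processing requirement, here is a small, hedged sketch of an aggregation job. The data, column names, and connection details are placeholders; the JDBC write shown in the comment is one common way such a job could load results into PostgreSQL or Oracle.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("finance-etl-sketch").getOrCreate()

# In-memory stand-in for a large extract; a real job would read from files or a source table.
df = spark.createDataFrame(
    [("EQ", 100.0), ("EQ", 250.5), ("FX", 75.0)],
    ["product", "notional"],
)

# Typical wrangling step: aggregate notional per product.
agg = df.groupBy("product").agg(F.sum("notional").alias("total_notional"))

# Loading into PostgreSQL/Oracle would typically go through JDBC (details are placeholders):
# agg.write.jdbc(url="jdbc:postgresql://host:5432/db", table="position_summary",
#                mode="overwrite", properties={"user": "...", "password": "..."})
agg.show()
```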
Posted 1 month ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Experience: 15 to 23 years
Location: Chennai / Bangalore
Primary skills: GenAI architecture, building GenAI solutions, coding, AI/ML background, data engineering, Azure or AWS cloud

Job Description:
The Generative Solutions Architect will be responsible for designing and implementing cutting-edge generative AI models and systems. He/she will collaborate with data scientists, engineers, product managers, and other stakeholders to develop innovative AI solutions for various applications including natural language processing (NLP), computer vision, and multimodal learning. This role requires a deep understanding of AI/ML theory, architecture design, and hands-on expertise with the latest generative models.

Key Responsibilities:
- GenAI application conceptualization and design: understand the use cases under consideration, conceptualize the application flow, understand the constraints, and design accordingly to get the most optimized results.
- Deep knowledge of developing and implementing applications using Retrieval-Augmented Generation (RAG)-based models, which combine the power of large language models (LLMs) with information retrieval techniques.
- Prompt engineering: be adept at prompt engineering and its various nuances like one-shot, few-shot, chain of thought, etc.; have hands-on knowledge of implementing agentic workflows and be aware of agentic AI concepts.
- NLP and language model integration: apply advanced NLP techniques to preprocess, analyze, and extract meaningful information from large textual datasets. Integrate and leverage large language models such as LLaMA 2/3, Mistral, or similar offline LLMs to address project-specific goals.
- Small LLMs / tiny LLMs: familiarity with and understanding of the usage of SLMs / tiny LLMs like Phi-3, OpenELM, etc., including their performance characteristics, usage requirements, and the nuances of how they can be consumed by use-case applications.
- Collaboration with interdisciplinary teams: collaborate with cross-functional teams, including linguists, developers, and subject matter experts, to ensure seamless integration of language models into the project workflow.
- Text/code generation and creative applications: explore creative applications of large language models, including text/code generation, summarization, and context-aware responses.

Skills & Tools
- Programming languages: proficiency in Python for data analysis, statistical modeling, and machine learning.
- Machine learning libraries: hands-on experience with libraries such as scikit-learn, Hugging Face, TensorFlow, and PyTorch.
- Statistical analysis: strong understanding of statistical techniques and their application in data analysis.
- Data manipulation and analysis: expertise in data manipulation and analysis using Pandas and NumPy.
- Database technologies: familiarity with vector databases like ChromaDB, Pinecone, etc., SQL and non-SQL databases, and experience working with relational and non-relational databases.
- Data visualization tools: proficiency in tools such as Tableau, Matplotlib, or Seaborn.
- Familiarity with cloud platforms (AWS, Google Cloud, Azure) for model deployment and scaling.
- Communication skills: excellent communication skills with the ability to convey technical concepts to non-technical audiences.
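To ground the RAG responsibility above, here is a deliberately tiny sketch of the retrieval-and-augmentation step. It uses TF-IDF purely for illustration; a production system would use embedding models and a vector database (ChromaDB, Pinecone, etc.), and the final LLM call is omitted. The corpus and question are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny stand-in corpus of document chunks.
documents = [
    "Invoices must be approved by two managers before payment.",
    "Travel expenses are reimbursed within 30 days of submission.",
    "Laptops are refreshed every three years.",
]
question = "How long does expense reimbursement take?"

# Retrieval: score each chunk against the question and keep the best match.
vectorizer = TfidfVectorizer().fit(documents)
doc_vectors = vectorizer.transform(documents)
scores = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
context = documents[scores.argmax()]

# Augmentation: the retrieved context is placed in the prompt handed to an LLM
# (LLaMA, Mistral, etc.); the model call itself is left out of this sketch.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```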
Posted 1 month ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Are you passionate about turning complex data into actionable insights? We're looking for a creative and analytical Data Analyst to join our team and help shape data-driven strategies across the business. This is an exciting opportunity to work in a dynamic, collaborative environment where your models and analyses will directly influence high-impact decisions.

Dassault Systèmes, the 3DEXPERIENCE Company, provides businesses and people with virtual universes to imagine sustainable innovations. Our 3DEXPERIENCE platform leverages the Company's world-leading 3D software applications to transform the way products are designed, produced, and supported. With its online architecture, the 3DEXPERIENCE environment helps businesses to test and evaluate, anywhere in the development lifecycle of a product or service, the eventual experience they will deliver to their customers. In short, 3DEXPERIENCE powers the next-generation capabilities that drive today's Experience Economy.

Role Description & Responsibilities
- Analyze large, complex datasets to uncover insights, trends, and opportunities that drive strategic decisions
- Build and deploy predictive models using machine learning techniques (e.g., regression, classification, clustering)
- Perform data wrangling, preprocessing, and cleaning to prepare data for analysis and modeling
- Design and execute experiments (e.g., A/B testing) to evaluate hypotheses and business initiatives
- Communicate findings and recommendations clearly to both technical and non-technical stakeholders through reports, dashboards, and presentations
- Collaborate closely with data engineers, product managers, and business teams to define data requirements and deliver end-to-end solutions
- Stay up to date with industry trends, tools, and best practices in data science and analytics

Qualifications
- Bachelor's or Master's degree in Data Science, Computer Science, Statistics, Mathematics, or a related field
- 8+ years of experience in a data science or quantitative analytics role with a good understanding of machine learning and operations research
- Knowledge of R, SQL and Python; familiarity with Scala, Java or C++ is an asset
- Experience in data mining across different industry segments like automobile, infrastructure & equipment, aerospace & defense, manufacturing, mining, etc.
- Experience with data visualization tools (e.g., Tableau, Power BI, Matplotlib, Seaborn)
- Excellent communication skills, with the ability to explain complex data insights in a clear and actionable way

What's In It For You
- Professional growth: opportunity to advance within the organization
- Learning environment: access to training, workshops, and skill development
- Collaboration: work closely with cross-functional teams
- Company culture: work in a culture of collaboration and innovation

Interested? Click on "Apply" to upload your application documents.

Inclusion statement
As a game-changer in sustainable technology and innovation, Dassault Systèmes is striving to build more inclusive and diverse teams across the globe. We believe that our people are our number one asset and we want all employees to feel empowered to bring their whole selves to work every day. It is our goal that our people feel a sense of pride and a passion for belonging. As a company leading change, it's our responsibility to foster opportunities for all people to participate in a harmonized Workforce of the Future.
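Since the role lists A/B testing among its responsibilities, a brief sketch of how such an experiment might be evaluated follows. The conversion counts are made up, and a two-proportion z-test is only one of several reasonable choices.

```python
from statsmodels.stats.proportion import proportions_ztest

# Made-up results: conversions and sample sizes for control (A) and variant (B).
conversions = [310, 355]
samples = [5000, 5000]

# Two-proportion z-test for a difference in conversion rate.
z_stat, p_value = proportions_ztest(conversions, samples)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected.")
```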
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Details
Category: Data Science
Location: Bangalore
Experience Level: 4-8 Years

Position Description
We are looking for a Data Engineer who will play a pivotal role in transforming raw data into actionable intelligence through sophisticated data pipelines and machine learning deployment frameworks. They will collaborate across functions to understand business objectives, engineer data solutions, and ensure robust AI/ML model deployment and monitoring. This role is ideal for someone passionate about data science, MLOps, and building scalable data ecosystems in cloud environments.

Key Responsibilities

Data Engineering & Data Science:
- Preprocess structured and unstructured data to prepare for AI/ML model development.
- Apply strong skills in feature engineering, data augmentation, and normalization techniques.
- Manage and manipulate data using SQL, NoSQL, and cloud-based data storage solutions such as Azure Data Lake.
- Design and implement efficient ETL pipelines, data wrangling, and data transformation strategies.

Model Deployment & MLOps:
- Deploy ML models into production using Azure Machine Learning (Azure ML) and Kubernetes.
- Implement MLOps best practices, including CI/CD pipelines, model versioning, and monitoring frameworks.
- Design mechanisms for model performance monitoring, alerting, and retraining.
- Utilize containerization technologies (Docker/Kubernetes) to support deployment and scalability.

Business & Analytics Insights:
- Work closely with stakeholders to understand business KPIs and decision-making frameworks.
- Analyze large datasets to identify trends, patterns, and actionable insights that inform business strategies.
- Develop data visualizations using tools like Power BI, Tableau, and Matplotlib to communicate insights effectively.
- Conduct A/B testing and evaluate model performance using metrics such as precision, recall, F1-score, MSE, RMSE, and model validation techniques.

Desired Profile
- Proven experience in data engineering, AI/ML data preprocessing, and model deployment.
- Strong expertise in working with both structured and unstructured datasets.
- Hands-on experience with SQL, NoSQL databases, and cloud data platforms (e.g., Azure Data Lake).
- Deep understanding of MLOps practices, containerization (Docker/Kubernetes), and production-level model deployment.

Technical Skills
- Proficient in ETL pipeline creation, data wrangling, and transformation methods.
- Strong experience with Azure ML, Kubernetes, and other cloud-based deployment technologies.
- Excellent knowledge of data visualization tools (Power BI, Tableau, Matplotlib).
- Expertise in model evaluation and testing techniques, including A/B testing and performance metrics.

Soft Skills
- Strong analytical mindset with the ability to solve complex data-related problems.
- Ability to collaborate with cross-functional teams to understand business needs and provide actionable insights.
- Clear communication skills to convey technical details to non-technical stakeholders.

If you are passionate about working in a collaborative and challenging environment, apply now!
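As a hedged illustration of the ETL and data-wrangling responsibilities above, the sketch below cleans a small placeholder extract with pandas and writes a columnar file for downstream training. Column names and values are invented; writing Parquet assumes pyarrow or fastparquet is installed, and a real pipeline would read from and write to Azure Data Lake rather than local files.

```python
import pandas as pd

# Placeholder raw extract standing in for data landed from a source system.
raw = pd.DataFrame({
    "order_id": [1, 2, 2, 3],
    "amount": ["10.5", "20.0", "20.0", None],
    "country": ["in", "IN", "IN", "us"],
})

# Typical transform steps: de-duplicate, coerce types, normalize categoricals, handle nulls.
clean = (
    raw.drop_duplicates()
       .assign(
           amount=lambda d: pd.to_numeric(d["amount"], errors="coerce").fillna(0.0),
           country=lambda d: d["country"].str.upper(),
       )
)

# Load step: write a columnar file the model-training job can consume.
clean.to_parquet("orders_clean.parquet", index=False)
```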
Posted 1 month ago
2.0 - 6.0 years
4 - 8 Lacs
Mumbai, Chennai, Bengaluru
Work from Office
Location: Bangalore / Gurgaon / Mumbai / Chennai / Pune / Hyderabad / Kolkata

Must-have skills:
- A solid understanding of retail industry dynamics, including key performance indicators (KPIs) such as sales trends, customer segmentation, inventory turnover, and promotions.
- Strong ability to communicate complex data insights to non-technical stakeholders, including senior management, marketing, and operational teams.
- Meticulous in ensuring data quality, accuracy, and consistency when handling large, complex datasets.
- Ability to gather and clean data from various retail sources, such as sales transactions, customer interactions, inventory management, website traffic, and marketing campaigns.
- Strong proficiency in Python for data manipulation, statistical analysis, and machine learning (libraries like Pandas, NumPy, Scikit-learn).
- Expertise in supervised and unsupervised learning algorithms.
- Use of advanced analytics to optimize pricing strategies based on market demand, competitor pricing, and customer price sensitivity.

Good-to-have skills:
- Familiarity with big data processing platforms like Apache Spark, Hadoop, or cloud-based platforms such as AWS or Google Cloud for large-scale data processing.
- Experience with ETL (Extract, Transform, Load) processes and tools like Apache Airflow to automate data workflows.
- Familiarity with designing scalable and efficient data pipelines and architecture.
- Experience with tools like Tableau, Power BI, Matplotlib, and Seaborn to create meaningful visualizations that present data insights clearly.

Job Summary:
The Retail Specialized Data Scientist will play a pivotal role in utilizing advanced analytics, machine learning, and statistical modeling techniques to help our retail business make data-driven decisions. This individual will work closely with teams across marketing, product management, supply chain, and customer insights to drive business strategies and innovations. The ideal candidate should have experience in retail analytics and the ability to translate data into actionable insights.

Roles & Responsibilities:
- Leverage retail knowledge: utilize your deep understanding of the retail industry (merchandising, customer behavior, product lifecycle) to design AI solutions that address critical retail business needs.
- Gather and clean data from various retail sources, such as sales transactions, customer interactions, inventory management, website traffic, and marketing campaigns.
- Apply machine learning algorithms, such as classification, clustering, regression, and deep learning, to enhance predictive models.
- Use AI-driven techniques for personalization, demand forecasting, and fraud detection.
- Use advanced statistical methods to help optimize existing use cases and build new products to serve new challenges and use cases.
- Stay updated on the latest trends in data science and retail technology.
- Collaborate with executives, product managers, and marketing teams to translate insights into business actions.

Professional & Technical Skills:
- Strong analytical and statistical skills.
- Expertise in machine learning and AI.
- Experience with retail-specific datasets and KPIs.
- Proficiency in data visualization and reporting tools.
- Ability to work with large datasets and complex data structures.
- Strong communication skills to interact with both technical and non-technical stakeholders.
- A solid understanding of the retail business and consumer behavior.
- Programming languages: Python, R, SQL, Scala
- Data analysis tools: Pandas, NumPy, Scikit-learn, TensorFlow, Keras
- Visualization tools: Tableau, Power BI, Matplotlib, Seaborn
- Big data technologies: Hadoop, Spark, AWS, Google Cloud
- Databases: SQL, NoSQL (MongoDB, Cassandra)

Additional Information:
- Experience: minimum 3 years of experience is required.
- Educational qualification: Bachelor's or Master's degree in Data Science, Statistics, Computer Science, Mathematics, or a related field.
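For a flavor of the customer-segmentation work mentioned above, here is a minimal k-means sketch over recency/frequency/monetary (RFM) features. It is illustrative only; the table is invented, and in practice the number of clusters would be tuned (e.g., elbow or silhouette analysis) on real transaction data.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Invented RFM table; real inputs would be derived from sales transactions.
rfm = pd.DataFrame({
    "recency_days": [5, 40, 3, 90, 12, 60],
    "frequency": [12, 2, 20, 1, 8, 3],
    "monetary": [340.0, 55.0, 980.0, 20.0, 210.0, 75.0],
})

# Scale features so no single dimension dominates the distance metric.
X = StandardScaler().fit_transform(rfm)

# Three segments is an arbitrary choice for the sketch.
rfm["segment"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(rfm.groupby("segment").mean())
```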
Posted 1 month ago
5.0 - 8.0 years
12 - 16 Lacs
Hyderabad
Work from Office
Role Purpose
The purpose of this role is to develop minimum viable products (MVPs) and comprehensive AI solutions that meet and exceed clients' expectations and add value to the business.

Primary Skills
- Python (DSA)
- Exploratory data analysis using Pandas, NumPy, Seaborn, Matplotlib
- TensorFlow, PyTorch, Scikit-learn
- Large language models, RAG, MCP, agentic AI
- Azure Cognitive Services / Azure AI Foundry
- Vertex AI
- GitHub
- Agile
- Problem-solving skills
- System design thinking
- Observability

Mandatory Skills: Generative AI.
Experience: 5-8 years.
Posted 1 month ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Total experience: 12 to 15 years

Roles & Responsibilities
- Design and develop innovative AI/ML solutions to address business challenges.
- Create and implement AI strategies and long-term roadmaps for AI adoption.
- Develop and deploy AI models, ensuring alignment with business needs.
- Collaborate with teams to translate business requirements into AI solutions.
- Stay updated on the latest AI/ML technologies and practices.
- Mentor junior AI developers and conduct code reviews.
- Work with data scientists and engineers to preprocess and prepare data.
- Select suitable AI/ML algorithms and deploy models in production.
- Ensure data privacy, security, scalability, and performance of AI applications.

Qualifications
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- Proficiency in programming languages like Python or Java.
- Experience with ML frameworks (TensorFlow, PyTorch, scikit-learn).
- Expertise in data manipulation (Pandas, NumPy) and visualization (Matplotlib, Seaborn).
- Familiarity with cloud platforms (AWS, Azure, Google Cloud) and version control (Git).
- Knowledge of databases (SQL, NoSQL) and development tools (Jupyter Notebook, IDEs).
- Understanding of Agile methodologies (Scrum, Kanban).

Nice to Have
- Proven experience in developing and deploying AI/ML solutions.
- Skills in NLP, computer vision, and deep learning.
- Proficiency in cloud tools like EC2, ASG, S3, RDS.
- Strong problem-solving, communication, and collaboration skills.
Posted 1 month ago
0.0 - 2.0 years
4 - 8 Lacs
Chennai
Remote
We're Hiring: Jr. Algorithm Engineer (Remote | Full-Time / Internship)

Are you a recent graduate or an early-career professional passionate about algorithms, Python, and solving real-world problems through code? We're looking for a Jr. Algorithm Engineer to join our team remotely. This is a great opportunity to work on meaningful projects while learning from experienced engineers.

What We're Looking For:
- Strong understanding of Python & C# syntax, file handling, and data types
- Basics of OOP; familiarity with libraries like NumPy, Pandas, Flask, Matplotlib
- Knowledge of databases (SQLite, PostgreSQL)
- Comfort with Git/GitHub, and IDEs like Visual Studio or PyCharm
- Excellent mathematical aptitude, logical reasoning, and algorithmic thinking

Eligibility:
- Science & tech graduates/postgraduates (excluding business/management backgrounds)
- 0-2 years of experience
- Interns & short-term candidates (min. 4 hours/day) are welcome
- NOC required for student applicants

Engagement:
- Full-time with an initial assessment period (up to 3 months)
- Internships available for exceptional learners
- 100% remote

If you're eager to grow, solve problems, and build smart systems, we want to hear from you!

To apply, send your resume to hr@vectraglobal.com with the subject: Application – Jr. Algorithm Engineer

Let's build the future together.

#Hiring #AlgorithmEngineer #RemoteJobs #PythonJobs #EntryLevelJobs #EngineeringCareers #InternshipOpportunity #CSharp #TechJobs
Posted 1 month ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Focus areas: data analytics; data exploration and visualization; trending and forecasting; root cause analysis; user training and support; data presentation and storytelling; measuring performance against business metrics and goals; project support. Grade: 11.

Please note that the posting will close at 12am on the posting close date, so please submit your application prior to the close date.

Job Title: Data Analyst
Location: Bengaluru
Department: Customer & Retail Analytics
Employment Type: Full-time

About FedEx:
FedEx provides customers and businesses worldwide with a broad portfolio of transportation, e-commerce, and business services, and also serves our customers through our retail presence. We foster an environment of growth and learning, where innovative ideas are encouraged, and diverse teams are valued for their contributions. As part of our commitment to excellence, we're seeking a highly skilled Data Analyst to join our analytics team.

Job Summary:
As a Data Analyst, you will play a key role in gathering, processing, and analyzing data to drive informed decision-making and actionable insights for FedEx. Your quantitative expertise and business acumen will help develop analytical solutions to improve operations, customer experience, and business outcomes. This role involves working closely with cross-functional teams to develop meaningful data analysis and insight summarizations/visualizations, and to communicate findings that guide strategy and operational decisions.

Key Responsibilities
- Collect, analyze, and interpret complex data sets using Python and SQL to support business objectives.
- Collaborate with stakeholders to understand business needs, formulate analytic solutions, and provide actionable insights.
- Develop and maintain data models and reports to track key performance indicators (KPIs) and business metrics.
- Create meaningful data visualizations to communicate findings, trends, and actionable insights to non-technical stakeholders.
- Conduct exploratory data analysis and identify patterns, trends, and opportunities for business improvement.
- Support data quality initiatives, ensuring accuracy and consistency across data sources.
- Utilize statistical and quantitative techniques to support problem-solving and business optimization efforts.

Mandatory Skills (what we are looking for)
- Python: proficiency in data manipulation, data analysis libraries (Pandas, NumPy), and data visualization libraries (Matplotlib, Seaborn).
- SQL: strong command of SQL for data extraction, transformation, and complex queries.
- Business acumen: ability to understand business context and objectives, aligning analytics with organizational goals.
- Quantitative aptitude: strong analytical and problem-solving skills, with keen attention to detail.
- Data visualization: basic skills in data visualization to effectively communicate insights.
- Statistical analysis: foundational understanding of statistical methods (e.g., regression, hypothesis testing).
- Communication skills: ability to distill complex data insights into clear, actionable recommendations for stakeholders.

Good-to-Have Skills
- Power BI: experience with Power BI for data visualization and report development.
- Machine learning fundamentals: basic knowledge of machine learning concepts for deeper pattern analysis.
- Advanced Excel: skills in advanced Excel functions, pivot tables, and data cleaning for quick analyses.

Qualifications
- Bachelor's degree in Data Science, Statistics, Mathematics, Computer Science, Economics, or a related field; a Master's degree is a plus.
- 3+ years of experience in data analysis, preferably within the logistics, supply chain, or transportation industry.
- Excellent communication and interpersonal skills, with the ability to explain complex data insights to stakeholders.
- Strong organizational skills and a collaborative mindset.

Join Our Team:
If you are passionate about using data analytics to drive business impact and enhance the customer experience, we invite you to join our team at FedEx. Apply now to be part of a dynamic, innovative environment where your skills and expertise will make a difference.

Application Process:
To apply for this position, please submit your resume and a cover letter detailing your relevant experience and qualifications. Qualified candidates will be contacted for further evaluation.

Skills: analytical skills; accuracy and attention to detail; numerical skills; planning and organizing skills; presentation skills; statistical knowledge; data modeling and visualization skills.

FedEx was built on a philosophy that puts people first, one we take seriously. We are an equal opportunity/affirmative action employer and we are committed to a diverse, equitable, and inclusive workforce in which we enforce fair treatment and provide growth opportunities for everyone. All qualified applicants will receive consideration for employment regardless of age, race, color, national origin, genetics, religion, gender, marital status, pregnancy (including childbirth or a related medical condition), physical or mental disability, or any other characteristic protected by applicable laws, regulations, and ordinances.

Our Company
FedEx is one of the world's largest express transportation companies and has consistently been selected as one of the top 10 World's Most Admired Companies by "Fortune" magazine. Every day FedEx delivers for its customers with transportation and business solutions, serving more than 220 countries and territories around the globe. We can serve this global network due to our outstanding team of FedEx team members, who are tasked with making every FedEx experience outstanding.

Our Philosophy
The People-Service-Profit philosophy (P-S-P) describes the principles that govern every FedEx decision, policy, or activity. FedEx takes care of our people; they, in turn, deliver the impeccable service demanded by our customers, who reward us with the profitability necessary to secure our future. The essential element in making the People-Service-Profit philosophy such a positive force for the company is where we close the circle, return these profits back into the business, and invest back in our people. Our success in the industry is attributed to our people. Through our P-S-P philosophy, we have a work environment that encourages team members to be innovative in delivering the highest possible quality of service to our customers. We care for their well-being and value their contributions to the company.

Our Culture
Our culture is important for many reasons, and we intentionally bring it to life through our behaviors, actions, and activities in every part of the world. The FedEx culture and values have been a cornerstone of our success and growth since we began in the early 1970s. While other companies can copy our systems, infrastructure, and processes, our culture makes us unique and is often a differentiating factor as we compete and grow in today's global marketplace.
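To illustrate the exploratory-analysis and trending responsibilities in the posting above, here is a small, hedged pandas/Matplotlib sketch. The shipment figures are synthetic; in the actual role the underlying data would come from SQL queries against operational sources.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Synthetic daily shipment counts standing in for data pulled via SQL.
dates = pd.date_range("2024-01-01", periods=120, freq="D")
daily = pd.Series(range(120), index=dates) + pd.Series(
    [15 * (i % 7 == 0) for i in range(120)], index=dates
)

# Weekly trend: resample to smooth daily noise, then plot a stakeholder-facing view.
weekly = daily.resample("W").mean()
ax = weekly.plot(title="Average daily shipments per week")
ax.set_ylabel("Shipments")
plt.tight_layout()
plt.savefig("weekly_trend.png")
```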
Posted 1 month ago
2.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Potential candidates should have excellent depth and breadth of knowledge in machine learning, data mining, and statistical modeling. They should possess the ability to translate a business problem into an analytical problem, identify the relevant data sets needed to address it, recommend, implement, and validate the best-suited analytical algorithm(s), and generate and deliver insights to stakeholders. Candidates are expected to regularly refer to research papers and stay at the cutting edge with respect to algorithms, tools, and techniques. The role is that of an individual contributor; however, the candidate is expected to work in project teams of 2 to 3 people and interact with business partners on a regular basis.

Responsibilities
Understand business requirements and analyze datasets to determine suitable approaches to meet analytic business needs and support data-driven decision-making.
Design and implement data analysis and ML models, hypotheses, algorithms, and experiments to support data-driven decision-making (a brief supervised-learning sketch follows this posting).
Apply analytics techniques such as data mining, predictive modeling, prescriptive modeling, mathematics, statistics, advanced analytics, and machine learning models and algorithms to analyze data and uncover meaningful patterns, relationships, and trends.
Design efficient data loading, data augmentation, and data analysis techniques to enhance the accuracy and robustness of data science and machine learning models, including scalable models suitable for automation.
Research, study, and stay updated in the domain of data science, machine learning, and analytics tools and techniques, and continuously identify avenues for enhancing analysis efficiency, accuracy, and robustness.

Minimum Qualifications
Bachelor's degree in Computer Science, Operational Research, Statistics, Applied Mathematics, or any other engineering discipline.
2+ years of hands-on experience in Python programming for data analysis and machine learning, with libraries such as NumPy, Pandas, Matplotlib, Plotly, Scikit-learn, TensorFlow, PyTorch, NLTK, spaCy, and Gensim.
2+ years of experience with both supervised and unsupervised machine learning techniques.
2+ years of experience with data analysis and visualization using Python packages such as Pandas, NumPy, Matplotlib, and Seaborn, or data visualization tools like Dash or QlikSense.
1+ years of experience with SQL and relational databases.
Ability to create visualizations that connect disparate data, find patterns, and tell engaging stories, covering both scientific and geographic visualization using software such as Power BI.

Preferred Qualifications
MS/PhD in Computer Science, Operational Research, Statistics, Applied Mathematics, or any other engineering discipline; PhD strongly preferred.
Experience working with Google Cloud Platform (GCP) services, leveraging its capabilities for ML model development and deployment.
Experience with Git and GitHub for version control and collaboration.
Besides Python, familiarity with one additional programming language (e.g., C/C++/Java).
Strong background in mathematical concepts relating to probabilistic models, conditional probability, numerical methods, linear algebra, and the under-the-hood details of neural networks.
Experience working with large language models such as GPT-4, Google PaLM, Llama-2, etc.
Excellent problem-solving, communication, and data presentation skills.
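As a rough illustration of the supervised-learning workflow the qualifications above refer to, here is a minimal scikit-learn sketch. The public breast-cancer dataset and the random-forest baseline are arbitrary choices for demonstration, not requirements of the role.

```python
# Minimal supervised-learning sketch: train a baseline classifier and
# validate it on held-out data. Dataset and model are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Fit a baseline model, then report precision/recall on the test split.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

A real project would add feature engineering, cross-validation, and experiment tracking on top of this skeleton.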
Posted 1 month ago
2.0 years
2 - 3 Lacs
Cochin
On-site
Job Description
A Data Science Trainer is a professional who designs and delivers training programs to educate individuals and teams on data science concepts and techniques. They are responsible for creating and delivering engaging and effective training content that helps learners develop their data science skills.

Responsibilities
Design and develop training programs and curriculum for data science concepts and techniques.
Deliver training sessions to individuals and teams, both in person and online.
Create and manage training materials such as presentations, tutorials, and exercises (a sample exercise follows this posting).
Monitor and evaluate the effectiveness of training programs.
Continuously update training materials and curriculum to reflect the latest trends and best practices in data science.
Provide one-on-one coaching and mentoring to learners.

Requirements
A degree in a relevant field such as computer science, data science, statistics, or mathematics.
Strong understanding of data science concepts and techniques.
Experience with programming languages such as Python, R, and SQL.
Strong presentation and communication skills.
Experience in training and/or teaching.
Experience with data visualization tools such as Tableau, Power BI, or Matplotlib is a plus.
Knowledge of data science libraries such as Scikit-learn, TensorFlow, and Keras is a plus.

The role of a data science trainer requires a person who is passionate about teaching, has a solid understanding of data science, and can adapt to the needs of the learners. They must be able to deliver training programs in an engaging and effective way and continuously update the training materials to reflect the latest trends and best practices in data science.

Job Types: Full-time, Permanent
Pay: ₹20,000.00 - ₹30,000.00 per month
Schedule: Day shift
Education: Master's (Preferred)
Experience: Data scientist: 2 years (Preferred)
Work Location: In person
Expected Start Date: 01/07/2025
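As an example of the kind of hands-on exercise a trainer might prepare, here is a short, self-contained clustering demo using scikit-learn and Matplotlib. The Iris dataset and the choice of three clusters are illustrative, not part of the posting.

```python
# Illustrative classroom exercise: cluster the Iris measurements with
# k-means and visualize the result. Dataset and k are arbitrary choices.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
import matplotlib.pyplot as plt

X, _ = load_iris(return_X_y=True)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Plot the first two features, colored by cluster assignment.
plt.scatter(X[:, 0], X[:, 1], c=kmeans.labels_, cmap="viridis")
plt.xlabel("Sepal length (cm)")
plt.ylabel("Sepal width (cm)")
plt.title("K-means clusters on the Iris dataset")
plt.savefig("iris_clusters.png")
```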
Posted 1 month ago
1.0 years
0 - 1 Lacs
Delhi
On-site
Job Title: Data Science Intern

About the Role:
We are looking for a motivated and detail-oriented Data Science Intern to join our dynamic team. This internship offers a unique opportunity to work on real-world data problems, gain hands-on experience with cutting-edge tools and technologies, and contribute to impactful projects.

Key Responsibilities:
Collect, clean, and preprocess large datasets from various sources (a minimal cleaning sketch follows this posting).
Perform exploratory data analysis to uncover insights and patterns.
Build, test, and validate predictive models using statistical and machine learning techniques.
Assist in developing data visualizations and dashboards to communicate findings.
Collaborate with cross-functional teams to understand business needs and deliver data-driven solutions.
Document processes, methodologies, and results clearly.

Requirements:
Pursuing or recently completed a degree in Computer Science, Data Science, Statistics, Mathematics, or a related field.
Basic understanding of Python, R, or similar programming languages.
Familiarity with data analysis libraries (Pandas, NumPy) and visualization tools (Matplotlib, Seaborn, Power BI, or Tableau).
Knowledge of machine learning concepts is a plus.
Strong analytical, problem-solving, and communication skills.
Ability to work independently and as part of a team.

Preferred Qualifications:
Experience with SQL or NoSQL databases.
Exposure to cloud platforms like AWS, GCP, or Azure.
Familiarity with version control (Git/GitHub).

Job Type: Internship
Contract length: 6 months
Pay: ₹8,000.00 - ₹10,000.00 per month
Schedule: Day shift, Monday to Friday, Morning shift
Education: Bachelor's (Preferred)
Experience: AI: 1 year (Required); ML: 1 year (Required)
Work Location: In person
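To illustrate the cleaning and exploratory steps listed above, here is a minimal Pandas sketch. The orders.csv file and its columns (order_date, amount) are hypothetical placeholders rather than anything specified by the employer.

```python
# Minimal data-cleaning and EDA sketch; file name and columns are hypothetical.
import pandas as pd

df = pd.read_csv("orders.csv")  # hypothetical raw extract

# Basic cleaning: drop exact duplicates, parse dates, fill numeric gaps.
df = df.drop_duplicates()
df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
df["amount"] = df["amount"].fillna(df["amount"].median())

# Quick exploratory summaries to surface patterns before any modeling.
print(df.describe(include="all"))
print(df.groupby(df["order_date"].dt.to_period("M"))["amount"].sum())
```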
Posted 1 month ago
5.0 years
0 Lacs
Calcutta
Remote
Job Title: Data Science Trainer
Location: [Remote/On-site/Hybrid]
Job Type: [Full-time / Part-time / Contract / Freelance]
Experience Level: [Mid-level / Senior]
Reporting To: Head of Training / Program Manager

Job Summary:
We are seeking a skilled and passionate Data Science Trainer to deliver high-quality instruction in data science, machine learning, and AI. The ideal candidate will have strong theoretical knowledge and hands-on experience in real-world data projects. You will be responsible for designing curriculum, delivering training sessions, and mentoring learners across varying skill levels.

Key Responsibilities:
Deliver engaging and interactive training sessions in:
Python for Data Science
Statistics & Probability (an illustrative teaching example follows this posting)
Machine Learning & Deep Learning
Data Visualization (e.g., Tableau, Power BI, Matplotlib, Seaborn)
Data Manipulation (Pandas, NumPy)
Tools & Platforms (Jupyter, Google Colab, Git, AWS/GCP basics)
Design and update training materials, assignments, case studies, and assessments.
Provide one-on-one mentoring and guidance to learners.
Evaluate learner progress and provide constructive feedback.
Keep the curriculum updated with the latest industry trends and technologies.
Conduct code reviews and support learners with debugging.
Participate in webinars, workshops, and online/offline community building.

Requirements:
Bachelor's/Master's degree in Data Science, Computer Science, Statistics, or a related field.
5+ years of experience in data science or related domains.
Prior experience in teaching or mentoring is a strong advantage.
Strong command of Python, machine learning algorithms, and data processing libraries.
Excellent communication and presentation skills.
Ability to explain complex concepts in a clear and engaging way.

Job Type: Full-time
Pay: ₹10,915.53 - ₹59,065.16 per month
Schedule: Morning shift
Work Location: In person
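As one example of teaching material for the Statistics & Probability module listed above, here is a short two-sample t-test on synthetic data using NumPy and SciPy; the group means and sample sizes are made up for illustration.

```python
# Illustrative teaching example (not from the posting): a two-sample t-test
# on synthetic scores, the kind of demo used in a statistics module.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=50, scale=5, size=200)  # e.g., control scores
group_b = rng.normal(loc=52, scale=5, size=200)  # e.g., treatment scores

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the group means differ beyond what chance explains.
```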
Posted 1 month ago