3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
In This Role You Will Be Responsible For:
- Developing, testing, and implementing Python backend services that interact with ML models. This involves developing solutions to complex problems in collaboration with Business and AI team members.
- Ensuring adherence to data privacy and security standards, safeguarding sensitive customer and business information while maintaining compliance with relevant regulations.
- Working independently and taking ownership of tasks, while also supporting team members, to ensure timely, high-quality delivery.

To be successful in this role, you should meet the following requirements:
- A minimum of 3 years of development experience with the Python programming language
- Proven experience with advanced Python coding
- Hands-on experience with Pandas and NumPy
- Familiarity with key Python libraries such as TensorFlow, LangChain, PyTorch, Plotly, Dash, and Scikit-learn is an added advantage
- A strong understanding of Natural Language Processing (NLP) techniques, time-series analysis, and prediction models is helpful
- Excellent communication and collaboration skills to work effectively with cross-functional teams and stakeholders
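Postings like this one pair a thin Python service layer with Pandas/NumPy feature preparation ahead of model inference. A minimal sketch of that pattern, with hypothetical field names and a stub standing in for the real model call:

```python
import numpy as np
import pandas as pd

def prepare_features(records: list[dict]) -> pd.DataFrame:
    """Clean raw request payloads into a numeric feature frame."""
    df = pd.DataFrame(records)
    # Coerce messy string/None inputs to numbers; impute missing with the median.
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
    df["amount"] = df["amount"].fillna(df["amount"].median())
    # Log-scale skewed monetary values before handing off to a model.
    df["log_amount"] = np.log1p(df["amount"])
    return df[["amount", "log_amount"]]

def predict(df: pd.DataFrame) -> np.ndarray:
    # Stand-in for a real model call (e.g. model.predict(df)); hypothetical rule.
    return (df["log_amount"] > df["log_amount"].mean()).to_numpy()

features = prepare_features([{"amount": "120"}, {"amount": None}, {"amount": "3.5"}])
```

The backend framework itself (Flask, FastAPI, etc.) is not named in the posting, so it is omitted here; this only illustrates the Pandas/NumPy handoff the role describes.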
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Project Role: AI/ML Engineer
Project Role Description: Develops applications and systems that utilize AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. Must be able to apply GenAI models as part of the solution. Work may also include, but is not limited to, deep learning, neural networks, chatbots, and image processing.
Must-have skills: Machine Learning
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: These roles share many overlapping skills with GenAI engineers and architects. The description may scale up or down based on expected seniority.

Roles & Responsibilities:
- Implement generative AI models and identify insights that can be used to drive business decisions. Work closely with multi-functional teams to understand business problems, develop hypotheses, and test them with data, collaborating with cross-functional teams to define AI project requirements and objectives and ensure alignment with overall business goals.
- Conduct research to stay up to date with the latest advancements in generative AI, machine learning, and deep learning techniques, and identify opportunities to integrate them into our products and services.
- Optimize existing generative AI models for improved performance, scalability, and efficiency.
- Ensure data quality and accuracy.
- Lead the design and development of prompt engineering strategies and techniques to optimize the performance and output of our GenAI models.
- Implement cutting-edge NLP techniques and prompt engineering methodologies to enhance the capabilities and efficiency of our GenAI models.
- Determine the most effective prompt generation processes and approaches to drive innovation and excellence in AI technology, collaborating with AI researchers and developers.
- Experience working with cloud-based platforms (e.g., AWS, Azure, or related)
- Strong problem-solving and analytical skills
- Proficiency in handling various data formats and sources through omni-channel speech and voice applications, as part of conversational AI
- Prior statistical modelling experience
- Demonstrable experience with deep learning algorithms and neural networks
- Develop clear and concise documentation, including technical specifications, user guides, and presentations, to communicate complex AI concepts to both technical and non-technical stakeholders.
- Contribute to the establishment of best practices and standards for generative AI development within the organization.

Professional & Technical Skills:
- Solid experience developing and implementing generative AI models, with a strong understanding of deep learning techniques such as GPT, VAEs, and GANs.
- Proficiency in Python and experience with machine learning libraries and frameworks such as TensorFlow, PyTorch, or Keras.
- Strong knowledge of data structures, algorithms, and software engineering principles.
- Familiarity with cloud-based platforms and services such as AWS, GCP, or Azure.
- Experience with natural language processing (NLP) techniques and tools such as SpaCy, NLTK, or Hugging Face.
- Familiarity with data visualization tools and libraries such as Matplotlib, Seaborn, or Plotly.
- Knowledge of software development methodologies such as Agile or Scrum.
- Excellent problem-solving skills, with the ability to think critically and creatively to develop innovative AI solutions.

Additional Information:
- A degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field is required; a Ph.D. is highly desirable.
- Strong communication skills, with the ability to effectively convey complex technical concepts to a diverse audience.
- A proactive mindset, with the ability to work independently and collaboratively in a fast-paced, dynamic environment.
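The prompt engineering responsibilities listed above typically start from templated, few-shot prompts. A library-free sketch under that assumption (the function name and prompt format are illustrative, not the employer's actual tooling):

```python
def build_prompt(task: str, query: str, examples: list[tuple[str, str]]) -> str:
    """Assemble a few-shot prompt: instructions, worked examples, then the query."""
    lines = [f"Task: {task}", ""]
    for question, answer in examples:
        # Each worked example conditions the model on the expected answer format.
        lines += [f"Q: {question}", f"A: {answer}", ""]
    lines += [f"Q: {query}", "A:"]
    return "\n".join(lines)

prompt = build_prompt(
    task="Classify the sentiment of each review as positive or negative.",
    query="The battery died after two days.",
    examples=[("Loved the screen quality.", "positive")],
)
```

In practice the resulting string would be sent to a GenAI model endpoint; that call is omitted here since the posting does not name a specific provider or API.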
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job title: AI/ML Data Scientist
Location: Hyderabad

About The Job
Transform healthcare through innovation. At Sanofi, we're not just developing treatments: we're pioneering the future of healthcare by harnessing the power of data insights and responsible AI to accelerate breakthrough therapies. As an AI/ML Scientist on our AI and Computational Sciences team, you'll:
- Drive innovation that directly impacts patient outcomes
- Collaborate with world-class scientists to solve complex healthcare challenges
- Apply advanced AI techniques to increase drug development success rates
- Shape the responsible use of AI in life-saving medical research

Be part of a mission that matters. Help us transform data into life-changing treatments and join a team where your expertise can make a meaningful difference in patients' lives.

Our Team
The AI and Computational Sciences team is a key team within R&D Digital, focused on image, omics, wearable sensor data, and clinical data analytics. This team plays a critical role in bridging the gap between general-purpose digital products and specific project needs. We are looking for a skilled AI/ML Data Scientist to join our elite AI and Computational Sciences team and harness cutting-edge AI to revolutionize healthcare. As a key player within R&D Digital, you'll transform complex data into life-changing medical breakthroughs.

Impact You'll Make
Drive innovation across multiple high-impact domains:
- Precision Medicine: Develop patient response prediction models that personalize treatments
- Advanced Omics Analysis: Pioneer cell type and cell stage quantification techniques
- Advanced Image/Video Analysis: Lead the application of state-of-the-art computer vision methods to gain unprecedented insights about drug efficacy from medical images and videos
- Digital Health: Design novel biomarkers from wearable sensor data
- Biological Insights: Create enzyme property prediction algorithms and conduct disease pathway analyses

Your Growth Journey
- Technical Mastery: Develop expertise across image analysis, time series modeling, GenAI, AI agents, and explainable AI
- Scientific Impact: Publish in top-tier AI/ML journals and secure patents that protect groundbreaking innovations
- Global Influence: Deploy solutions that impact patients worldwide

Your Environment
- Elite Team: Work alongside AI/ML experts and drug development experts in an agile, high-performance environment
- Cutting-Edge Resources: Access Sanofi's state-of-the-art cloud infrastructure and data platforms
- Continuous Learning: Receive mentorship and training opportunities to sharpen your leadership and AI/ML skills

Join Our AI-First Vision
Be part of Sanofi's bold transformation into an AI-first organization where you'll:
- Develop your skills through world-class mentorship and training
- Chase the miracles of science to improve people's lives

Ready to transform healthcare through the power of AI?

Main Responsibilities
Research Phase Excellence
- Design and implement AI models for target identification and validation using multi-omics data (genomics, proteomics, transcriptomics)
- Develop predictive algorithms for molecular design to support compound selection and accelerate lead optimization
- Create computer vision systems for high-throughput screening image analysis and cellular phenotyping

Clinical Development Innovation
- Engineer digital biomarkers from wearable sensors and mobile devices to enable objective, continuous patient monitoring
- Implement advanced time-series analysis of real-time patient data to detect early efficacy signals
- Design AI-powered patient stratification models to identify responder populations and optimize trial design

Multi-Modal Data Integration
- Architect systems that harmonize diverse data types (imaging, omics, clinical, text, sensor) into unified analytical frameworks
- Develop novel feature extraction techniques across modalities to enhance predictive power
- Create visualization tools that present complex multi-modal insights to clinical teams

Scientific Impact
- Collaborate with cross-functional teams to translate AI insights into actionable drug development strategies
- Present findings to scientific and business stakeholders with varying technical backgrounds
- Publish innovative methodologies in top-tier scientific and AI/ML journals
- Contribute to patent applications to protect novel AI/ML approaches

About You
Experience: 3 to 5 years of experience in AI/ML and computational model development on multimodal data such as omics, biomedical imaging, text, and clinical trials data

Key Functional Requirements
- Demonstrated track record of successful AI/ML project implementation
- 3-5 years of experience in computational modeling, AI/ML algorithm development, or a related field
- Deep understanding and proven track record of developing model training pipelines and workflows
- Excellent communication and collaboration skills
- Working knowledge of, and comfort working with, Agile methodologies

Technical Skills
- Programming Proficiency: Advanced Python skills with experience in ML frameworks (PyTorch, TensorFlow, JAX)
- Machine Learning: Deep expertise in supervised, unsupervised, and reinforcement learning algorithms
- Drug Discovery: Molecular design, docking, binding site prediction, mRNA vaccine design, ADMET property prediction, protein structure prediction, molecular dynamics simulation
- Deep Learning: Experience designing and implementing neural network architectures (CNNs, RNNs, Transformers)
- Computer Vision: Proficiency in image processing, segmentation, and object detection techniques (SAM, ViT, Diffusion Models, MediaPipe, MMPose, MonoDepth, VoxelNet, SlowFast, C3D)
- Natural Language Processing: Experience with large language models, text mining, and information extraction (OpenAI, Claude, Llama, Qwen, DeepSeek model series)
- Time Series Analysis: Expertise in analyzing temporal data from sensors and wearable devices (HAR foundation models, compliance detection models)
- Omics Analysis: Knowledge of computational methods for genomics, proteomics, or transcriptomics data
- Cloud Computing: Experience deploying ML models on cloud platforms (AWS)

Tools And Technologies
- Data Processing: Experience with data pipelines and ETL processes
- Version Control: Proficiency with Git, Docker, and collaborative development workflows
- MLOps: Experience with model deployment, monitoring, and maintenance
- Visualization: Ability to create compelling data visualizations (Matplotlib, Seaborn, Plotly)
- Experiment Tracking: Familiarity with tools like MLflow, Weights & Biases, or similar platforms

Soft Skills
- Strong scientific communication abilities for technical and non-technical audiences
- Collaborative mindset for cross-functional team environments
- Problem-solving approach with the ability to translate business needs into technical solutions
- Self-motivated, with the capacity to work independently and drive projects forward

Education: PhD/MS/BE/BTech/ME/MTech in Computer Science and Engineering, AI/ML, another relevant engineering discipline, Computational Biology, Data Science, Bioinformatics, or related fields (or equivalent experience)
Preferred: Publications or a public GitHub profile
Languages: English

Why Choose Us?
- Bring the miracles of science to life alongside a supportive, future-focused team
- Discover endless opportunities to grow your talent and drive your career, whether it's through a promotion or lateral move, at home or internationally
- Enjoy a thoughtful, well-crafted rewards package that recognizes your contribution and amplifies your impact
- Take good care of yourself and your family, with a wide range of health and wellbeing benefits including high-quality healthcare, prevention and wellness programs
- Work in an international environment, collaborating with diverse business teams and vendors, in a dynamic team fully empowered to propose and implement innovative ideas

Pursue Progress. Discover Extraordinary.
Progress doesn't happen without people: people from different backgrounds, in different locations, doing different roles, all united by one thing, a desire to make miracles happen. You can be one of those people. Chasing change, embracing new ideas, and exploring all the opportunities we have to offer. Let's pursue progress. And let's discover extraordinary together.

At Sanofi, we provide equal opportunities to all regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, or gender identity. Watch our ALL IN video and check out our Diversity, Equity and Inclusion actions at sanofi.com!
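The digital-biomarker and time-series items above usually begin with rolling-window features over sensor streams. A minimal Pandas sketch on synthetic heart-rate data (the sampling rate, window length, and feature names are hypothetical, chosen only for illustration):

```python
import numpy as np
import pandas as pd

# Hypothetical wearable heart-rate stream sampled once per minute for 4 hours.
rng = np.random.default_rng(1)
idx = pd.date_range("2024-01-01", periods=240, freq="min")
hr = pd.Series(70 + rng.normal(0, 5, len(idx)), index=idx, name="heart_rate")

# Simple candidate digital-biomarker features: rolling mean and variability
# over a 30-minute time-based window.
features = pd.DataFrame({
    "hr_mean_30m": hr.rolling("30min").mean(),
    "hr_std_30m": hr.rolling("30min").std(),
})
```

Real pipelines would add artifact rejection, resampling, and per-subject normalization before any modeling; those steps depend on the device and are omitted here.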
Posted 1 week ago
5.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Relocation Assistance Offered Within Country
Job Number #163961 - Mumbai, Maharashtra, India

Who We Are
Colgate-Palmolive Company is a global consumer products company operating in over 200 countries, specializing in Oral Care, Personal Care, Home Care, Skin Care, and Pet Nutrition. Our products are trusted in more households than any other brand in the world, making us a household name! Join Colgate-Palmolive, a caring, innovative growth company reimagining a healthier future for people, their pets, and our planet. Guided by our core values of Caring, Inclusive, and Courageous, we foster a culture that inspires our people to achieve common goals. Together, let's build a brighter, healthier future for all.

About Colgate-Palmolive
Do you want to come to work with a smile and leave with one as well? In between those smiles, your day consists of working in a global organization, continually learning and collaborating, having stimulating discussions, and making impactful contributions! If this is how you see your career, Colgate is the place to be! Our diligent household brands, dedicated employees, and sustainability commitments make us a company passionate about building a future to smile about for our employees, consumers, and surrounding communities. The pride in our brand fuels a workplace that encourages creative thinking, champions experimentation, and promotes authenticity, which has contributed to our enduring success. If you want to work for a company that lives by its values, then give your career a reason to smile...every single day.

The Experience
In today's dynamic analytical and technological environment, it is an exciting time to be a part of the Global Analytics team at Colgate. Our highly insight-driven and innovative team is dedicated to driving growth for Colgate-Palmolive in this constantly evolving landscape. What role will you play as a member of Colgate's Analytics team?

The Global Data Science & Advanced Analytics vertical at Colgate-Palmolive is focused on cases that have large dollar impact and scope for scalability, with a clear focus on addressing business questions with recommended actions. The Data Scientist position leads Global Data Science & Advanced Analytics projects within the Analytics Continuum, conceptualizing and building predictive modelling, simulation, and optimization solutions for clear dollar objectives and measured value. The Data Scientist works on a range of projects across Revenue Growth Management, Market Effectiveness, Forecasting, etc., handles relationships with the Business independently, and drives projects such as Price Promotion, Marketing Mix, and Forecasting.

Who are you…?

You are a function expert -
- Leads Global Data Science & Advanced Analytics projects within the Analytics Continuum
- Conceptualizes and builds predictive modelling, simulation, and optimization solutions to address business questions or use cases
- Applies ML and AI algorithms to build inferential and predictive models, allowing scalable solutions to be deployed across the business
- Conducts model validation and continuous improvement of the algorithms, capabilities, or solutions built
- Deploys models using Airflow and Docker on Google Cloud Platform
- Develops end-to-end business solutions, from data extraction, data preparation, and data mining to statistical modeling and business presentations
- Owns Pricing and Promotion, Marketing Mix, and Forecasting studies from scoping to delivery
- Studies large amounts of data to discover trends and patterns
- Mines data through technologies like BigQuery and SQL
- Presents insights to business teams in an easy-to-interpret way
- Develops visualizations (e.g., Looker, PyDash, Flask, Plotly) using large datasets
- Works closely with business partners across geographies

You connect the dots -
- Merges multiple data sources and builds statistical / machine learning models in Price and Promo Elasticity Modelling and Marketing Mix Modelling to derive actionable business insights and recommendations
- Assembles large, sophisticated data sets that meet functional and non-functional business requirements
- Builds data and visualization tools for business analytics to assist decision making

You are a collaborator -
- Works closely with Division Analytics team leads
- Works with data and analytics specialists across functions to drive data solutions

You are an innovator -
- Identifies, designs, and implements new algorithms and process improvements while continuously automating processes, optimizing data delivery, and re-designing infrastructure for greater scalability

Qualifications
What you'll need:
- BE/BTech (Computer Science or Information Technology preferred), MBA or PGDM in Business Analytics / Data Science, additional DS certifications or courses, or MSc/MStat in Economics or Statistics
- 5+ years of experience in building data models and driving insights
- Hands-on experience developing statistical models such as linear regression, ridge regression, lasso, random forest, SVM, gradient boosting, logistic regression, K-Means clustering, hierarchical clustering, Bayesian regression, etc.
- Hands-on experience with coding languages: Python (mandatory), R, SQL, PySpark, SparkR
- Good understanding of cloud frameworks (Google Cloud, Snowflake) and services like Kubernetes, Cloud Build, and Cloud Run
- Knowledge of GitHub and Airflow for coding, model execution, and model deployment on cloud platforms
- Solid understanding of tools like Looker, Domo, and Power BI, and of web-app frameworks using Plotly, PyDash, and SQL
- Experience in a client-facing role, supporting and working with multi-functional business teams in a dynamic environment

What you'll need (preferred):
- Handling, redefining, and developing statistical models for RGM/Pricing and/or Marketing Effectiveness
- Experience with third-party data, i.e., syndicated market data, point-of-sale data, etc.
- Working knowledge of the consumer-packaged-goods industry
- Knowledge of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks
- Experience visualizing/communicating data for partners using Tableau, Domo, PyDash, Plotly, d3.js, ggplot2, R Shiny, etc.
- Willingness and ability to experiment with new tools and techniques
- Ability to maintain personal composure and thoughtfully handle difficult situations
- Knowledge of Google products (BigQuery, Data Studio, Colab, Google Slides, Google Sheets, etc.)
- Knowledge of model deployment in a cloud environment using Airflow and Docker
- Ability to work with cross-functional teams in IT and Data Architecture to build enterprise-level data science products

Our Commitment to Diversity, Equity & Inclusion
Achieving our purpose starts with our people: ensuring our workforce represents the people and communities we serve, and creating an environment where our people feel they belong, where we can be our authentic selves, feel treated with respect, and have the support of leadership to impact the business in a meaningful way.

Equal Opportunity Employer
Colgate is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity, sexual orientation, national origin, ethnicity, age, disability, marital status, veteran status (United States positions), or any other characteristic protected by law. Reasonable accommodation during the application process is available for persons with disabilities. Please complete this request form should you require accommodation.
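Several of the statistical models named in the qualifications, ridge regression in particular, have compact closed forms that map directly onto the price/promo elasticity work this role describes. A minimal NumPy sketch on synthetic price and promotion data (the coefficients and data are made up for illustration, not Colgate's models):

```python
import numpy as np

# Closed-form ridge regression: w = (X^T X + lambda * I)^(-1) X^T y
# Synthetic weekly price/promo data standing in for syndicated market data.
rng = np.random.default_rng(0)
n = 200
price = rng.uniform(1.0, 5.0, n)
promo = rng.integers(0, 2, n).astype(float)          # promotion on/off flag
volume = 100 - 12 * price + 15 * promo + rng.normal(0, 3, n)

X = np.column_stack([np.ones(n), price, promo])      # intercept, price, promo
lam = 1.0
penalty = np.eye(X.shape[1])
penalty[0, 0] = 0.0                                  # do not penalize intercept
w = np.linalg.solve(X.T @ X + lam * penalty, X.T @ volume)

# w[1] recovers the (negative) price effect; w[2] the promo lift.
```

In practice one would use scikit-learn's `Ridge` with cross-validated `alpha` rather than the raw normal equations; the closed form is shown here only to make the penalty term explicit.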
Posted 2 weeks ago
5.0 years
0 Lacs
India
On-site
R Shiny Web Developer

Job Summary:
We are seeking a skilled and results-driven R Shiny Web Developer with 4–5 years of experience in developing interactive web applications using R and Shiny. The ideal candidate will be responsible for building, deploying, and maintaining scalable dashboards and data-driven applications to support business intelligence and analytical needs.

Key Responsibilities:
- Design, develop, and maintain interactive web applications using R Shiny.
- Collaborate with data scientists and analysts to understand requirements and translate them into user-friendly applications.
- Optimize Shiny applications for performance and scalability.
- Integrate APIs and external data sources into Shiny apps.
- Implement data visualization best practices using ggplot2, plotly, and other R libraries.
- Ensure security, accessibility, and cross-browser compatibility of applications.
- Write clean, efficient, and well-documented R code.
- Troubleshoot, debug, and upgrade existing applications.
- Work in an agile environment and contribute to sprint planning, reviews, and stand-ups.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Statistics, Data Science, or a related field.
- 4–5 years of hands-on experience in R and Shiny application development.
- Proficiency in R packages like dplyr, tidyr, data.table, ggplot2, shinyjs, etc.
- Experience with HTML, CSS, and JavaScript for front-end customization.
- Solid understanding of reactive programming in Shiny.
- Experience with version control tools like Git.
- Ability to work with large datasets and optimize data handling in R.
- Strong problem-solving skills and attention to detail.
- Good communication and documentation skills.

Preferred Skills (Nice to Have):
- Experience with Docker, AWS, or other cloud platforms.
- Familiarity with SQL, Python, or Tableau.
- Knowledge of deploying Shiny apps using Shiny Server or RStudio Connect.
- Understanding of RESTful APIs and integration techniques.

If interested, please submit your CV to Khushboo@Sourcebae.com or share it via WhatsApp at 8827565832. Stay updated with our latest job opportunities and company news by following us on LinkedIn: https://www.linkedin.com/company/sourcebae
Posted 2 weeks ago
0 years
6 - 9 Lacs
Gurgaon
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart, and accessible. Our technology and innovation, partnerships, and networks combine to deliver a unique set of products and services that help people, businesses, and governments realize their greatest potential.

Title and Summary
Associate Analyst, R Programmer-1

Overview
The Mastercard Economics Institute (MEI) is an economics lab powering scale at Mastercard by owning economic thought leadership in support of Mastercard's efforts to build a more inclusive and sustainable digital economy. The Economics Institute was launched in 2020 to analyze economic trends through the lens of the consumer and deliver tailored, actionable insights on economic issues for customers, partners, and policymakers. The Institute is composed of a team of economists and data scientists who utilize and synthesize anonymized and aggregated data from the Mastercard network together with public data to bring powerful insights to life, in the form of 1:1 presentations, global thought leadership, media participation, and commercial work through the company's product suites.

About the Role
We are looking for an R programmer to join Mastercard's Economics Institute, reporting to the team lead for Economics Technology. An individual who will:
- create clear, compelling data visualisations that communicate economic insights to diverse audiences
- develop reusable R functions and packages to support analysis and automation
- create and format analytical content using R Markdown and/or Quarto
- design and build scalable Shiny apps
- develop interactive visualisations using JavaScript charting libraries (e.g. Plotly, Highcharts, D3.js) or front-end frameworks (e.g. React, Angular, Vue.js)
- work with databases and data platforms (e.g. SQL, Hadoop)
- write clear, well-documented code that others can understand and maintain
- collaborate using Git for version control

All About You
- proficient in R and the RStudio IDE
- proficient in R packages like dplyr for data cleaning, transformation, and aggregation
- familiarity with dependency management and documentation in R (e.g. roxygen2)
- familiar with version control concepts and tools (e.g. Git, GitHub, Bitbucket) for collaborative development
- experience writing SQL and working with relational databases
- creative and passionate about data, coding, and technology
- strong collaborator who can also work independently
- organized and able to prioritise work across multiple projects
- comfortable working with engineers, product owners, data scientists, and economists

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard's security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.
Posted 2 weeks ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
E2M is not your regular digital marketing firm. We're an equal opportunity provider, founded on strong business ethics and driven by more than 300 experienced professionals. Our client base is made up of digital agencies that need help with solving their bandwidth problems, cutting overheads, and increasing profitability. We need diligent professionals like you to help us help them. If you're someone who dreams big and has the gumption to make those dreams come true, E2M has a place for you.

1. Program Overview
Title: Fractional AI Consulting Internship (Paid)
Duration: 3–6 months
Location: On-site (Ahmedabad)
Objective: Provide students with real-world AI project experience, mentorship, and an opportunity to join our team full-time upon successful completion.

Our internship program at E2M Solutions is designed to bridge the gap between academic knowledge and practical industry application in AI. Interns will work on real client projects, learn directly from our AI consultants, and gain exposure to how AI solutions are deployed in digital marketing and consulting services.

2. Selection Criteria
Education Background
- Currently pursuing a Bachelor's or Master's degree in Computer Science, Data Science, Artificial Intelligence, or related fields.
- Fundamental coursework in AI, machine learning, or data analytics (at least one foundational course completed or in progress).

Technical Skills
- Programming: Basic proficiency in Python (preferred), R, or another relevant language.
- Data Handling: Understanding of basic statistics, data cleaning, and data manipulation (e.g., familiarity with libraries like NumPy and Pandas in Python).
- Machine Learning Basics: Familiarity with concepts like regression, classification, model training, and evaluation metrics.

Soft Skills & Mindset
- Curiosity & Willingness to Learn: Students should be eager to explore new AI tools, techniques, and applications.
- Collaboration: Ability to work in teams, communicate effectively, and follow project guidelines.
- Problem-Solving: Demonstrate initiative in identifying challenges and brainstorming solutions.

Bonus Skills (Not Mandatory, but Nice to Have)
- Exposure to deep learning frameworks (TensorFlow or PyTorch).
- Experience with version control (GitHub).
- Familiarity with data visualization tools (Matplotlib, Plotly, etc.).
- Any prior project experience, even if it's a university capstone or personal project.

3. Why Students Should Join Our Internship Program
Industry-Ready Experience
- We go beyond academic exercises and provide real AI-driven project work.
- Interns learn how AI solutions are proposed, developed, and integrated for clients in diverse industries.
- Hands-on use of current AI tools and frameworks.

Mentorship from Experts
- Interns will be guided by our experienced AI consultants, who regularly work with digital agencies, helping them understand the nuances of client-focused AI services.
- Regular check-ins, one-on-one sessions, and skill-building workshops are part of the internship program.

Practical Exposure to Digital Agency Projects
- Interns get to see how AI is applied in digital marketing, content optimization, ad tech, customer analytics, and more.
- This exposure is excellent for anyone aiming to launch a career in AI or data science within a marketing/consulting environment.

Paid Internship & Employment Opportunity
- Students receive a stipend to support their efforts and contributions.
- Outstanding interns may be offered a full-time position at the end of the internship.

Networking & Professional Development
- Opportunity to interact with leaders in AI consulting, digital marketing, and tech.
- Workshops on resume building, interview preparation, and career growth within the tech sector.

4. Internship Structure & Responsibilities
Orientation & Training (Week 1)
- Overview of E2M Solutions' AI consulting approach.
- Tool and framework introduction (e.g., Git, n8n, Airtables, Jupyter Notebooks).
- Best practices in project and data management.

Hands-On Projects (Weeks 2–10)
- Interns are assigned to real projects under the guidance of a project lead or mentor.
- Tasks may include data collection & cleaning, exploratory data analysis, model building, performance tuning, and final reporting.

Progress Reviews (Ongoing)
- Weekly check-ins with mentors and monthly performance evaluations to ensure continuous learning and alignment with project goals.

Final Presentation & Assessment (Last Week)
- Interns present their project findings and demonstrate the solution they helped build.
- Feedback and assessment sessions determine potential offers for full-time roles.
Posted 2 weeks ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Data Scientist – Roles And Responsibilities We are seeking a skilled Data Scientist to join our team and leverage data to create actionable insights and innovative solutions. The ideal candidate will have strong analytical skills, expertise in statistical modeling, and proficiency in programming and machine learning techniques. You will work closely with cross-functional teams to identify business opportunities, optimize processes, and develop data-driven strategies. Key Responsibilities Data Collection & Preparation: Gather, clean, and preprocess large datasets from various sources to ensure data quality and usability. Exploratory Data Analysis: Perform in-depth analysis to identify trends, patterns, and correlations that inform business decisions. Model Development: Design, build, and deploy machine learning models and statistical algorithms to solve complex problems, such as predictive analytics, classification, or recommendation systems. Data Visualization: Create compelling visualizations and dashboards to communicate insights to stakeholders using tools like Tableau, Power BI, or Python libraries (e.g., Matplotlib, Seaborn). Collaboration: Work with team leads, engineers, and business leaders to understand requirements, define key metrics, and translate insights into actionable strategies. Experimentation: Design and analyze A/B tests or other experiments to evaluate the impact of business initiatives. Automation: Develop pipelines and scripts to automate data processing and model deployment. Keep up with advancements in data science, machine learning, and industry trends to implement cutting-edge techniques. Preferred Qualifications Experience with deep learning, natural language processing (NLP), or computer vision. Knowledge of software engineering practices, such as version control (Git) and CI/CD pipelines. Contributions to open-source projects or publications in data science.
Technical Skills Proficiency in programming languages like Python. Experience with SQL for querying and managing databases. Knowledge of machine learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn). Familiarity with big data tools (e.g., Hadoop, Spark) and cloud platforms (e.g., AWS, GCP, Azure) is a plus. Experience with data visualization tools (e.g., Tableau, Power BI, or Plotly). Strong understanding of statistics, probability, and experimental design.
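The model-development responsibility this posting describes often starts as a simple train-and-evaluate loop. A minimal sketch with scikit-learn on synthetic data (the dataset, model choice, and split are illustrative, not a prescribed stack):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a business classification problem.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Bundling scaling and the classifier keeps preprocessing and prediction
# deployable as a single object.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
```

The pipeline object can later be serialized and served behind an API, which is typically how "deploy machine learning models" plays out in practice.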
Posted 2 weeks ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Data Scientist – Roles And Responsibilities We are seeking a skilled Data Scientist to join our team and leverage data to create actionable insights and innovative solutions. The ideal candidate will have strong analytical skills, expertise in statistical modeling, and proficiency in programming and machine learning techniques. You will work closely with cross-functional teams to identify business opportunities, optimize processes, and develop data-driven strategies. Key Responsibilities Data Collection & Preparation: Gather, clean, and preprocess large datasets from various sources to ensure data quality and usability. Exploratory Data Analysis: Perform in-depth analysis to identify trends, patterns, and correlations that inform business decisions. Model Development: Design, build, and deploy machine learning models and statistical algorithms to solve complex problems, such as predictive analytics, classification, or recommendation systems. Data Visualization: Create compelling visualizations and dashboards to communicate insights to stakeholders using tools like Tableau, Power BI, or Python libraries (e.g., Matplotlib, Seaborn). Collaboration: Work with team leads, engineers, and business leaders to understand requirements, define key metrics, and translate insights into actionable strategies. Experimentation: Design and analyze A/B tests or other experiments to evaluate the impact of business initiatives. Automation: Develop pipelines and scripts to automate data processing and model deployment. Keep up with advancements in data science, machine learning, and industry trends to implement cutting-edge techniques. Preferred Qualifications Experience with deep learning, natural language processing (NLP), or computer vision. Knowledge of software engineering practices, such as version control (Git) and CI/CD pipelines. Contributions to open-source projects or publications in data science.
Technical Skills Proficiency in programming languages like Python. Experience with SQL for querying and managing databases. Knowledge of machine learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn). Familiarity with big data tools (e.g., Hadoop, Spark) and cloud platforms (e.g., AWS, GCP, Azure) is a plus. Experience with data visualization tools (e.g., Tableau, Power BI, or Plotly). Strong understanding of statistics, probability, and experimental design.
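The experimentation responsibility listed in this posting usually reduces to comparing conversion rates between an A and a B variant. A minimal two-sided two-proportion z-test using only the standard library's NormalDist (the traffic and conversion figures are hypothetical):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates (pooled SE)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical experiment: variant B converts 58/1000 vs. the control's 40/1000.
z, p = two_proportion_ztest(40, 1000, 58, 1000)
```

Here z is roughly 1.86, so the uplift is suggestive but not significant at the conventional 5% level; in practice one would also pre-register the sample size and minimum detectable effect.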
Posted 2 weeks ago
4.0 - 6.0 years
6 - 8 Lacs
Hyderabad
Work from Office
What you will do In this vital role you will create and develop data lake solutions for scientific data that drive business decisions for Research. You will build scalable and high-performance data engineering solutions for large scientific datasets and collaborate with Research collaborators. You will also provide technical leadership to junior team members. The ideal candidate possesses experience in the pharmaceutical or biotech industry, demonstrates deep technical skills, is proficient with big data technologies, and has a deep understanding of data architecture and ETL processes. Roles & Responsibilities: Lead, manage, and mentor a high-performing team of data engineers Design, develop, and implement data pipelines, ETL processes, and data integration solutions Take ownership of data pipeline projects from inception to deployment, manage scope, timelines, and risks Develop and maintain data models for biopharma scientific data, data dictionaries, and other documentation to ensure data accuracy and consistency Optimize large datasets for query performance Collaborate with global multi-functional teams including research scientists to understand data requirements and design solutions that meet business needs Implement data security and privacy measures to protect sensitive data Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions Collaborate with Data Architects, Business SMEs, Software Engineers and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions Identify and resolve data-related challenges Adhere to best practices for coding, testing, and designing reusable code/components Explore new tools and technologies that will help to improve ETL platform performance Participate in sprint planning meetings and provide estimations on technical implementation What we expect of you We are all different, yet we all use our unique contributions to serve patients.
The professional we seek has these qualifications. Basic Qualifications: Doctorate Degree OR Master's degree with 4 - 6 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field OR Bachelor's degree with 6 - 8 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field OR Diploma with 10 - 12 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field Preferred Qualifications: 3+ years of experience in implementing and supporting biopharma scientific research data analytics (software platforms) Functional Skills: Must-Have Skills: Proficiency in SQL and Python for data engineering, test automation frameworks (pytest), and scripting tasks Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), workflow orchestration, performance tuning on big data processing Excellent problem-solving skills and the ability to work with large, complex datasets Able to engage with business collaborators and mentor the team to develop data pipelines and data models Good-to-Have Skills: A passion for tackling complex challenges in drug discovery with technology and data Good understanding of data modeling, data warehousing, and data integration concepts Good experience using RDBMS (e.g. Oracle, MySQL, SQL Server, PostgreSQL) Knowledge of cloud data platforms (AWS preferred) Experience with data visualization tools (e.g.
Dash, Plotly, Spotfire) Experience with diagramming and collaboration tools such as Miro, Lucidchart or similar tools for process mapping and brainstorming Experience writing and maintaining technical documentation in Confluence Understanding of data governance frameworks, tools, and best practices Professional Certifications: Databricks Certified Data Engineer Professional preferred Soft Skills: Excellent critical-thinking and problem-solving skills Good communication and collaboration skills Demonstrated awareness of how to function in a team setting Demonstrated presentation skills
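The pipelines this role owns follow an extract-transform-load shape. The posting's stack is PySpark on Databricks; the sketch below mirrors the same steps with pandas on a tiny in-memory sample, and every table and column name is invented for illustration:

```python
import io
import pandas as pd

# Stand-in for a raw scientific-data extract.
RAW = io.StringIO(
    "sample_id,assay,reading\n"
    "S1,bind,0.82\n"
    "S1,bind,0.82\n"  # duplicate record, to be removed in the transform step
    "S2,bind,0.65\n"
    "S3,tox,1.10\n"
)

def run_pipeline(source):
    df = pd.read_csv(source)                      # extract
    df = df.drop_duplicates()                     # transform: deduplicate
    df["reading"] = df["reading"].astype(float)   # transform: enforce types
    return df.groupby("assay")["reading"].mean()  # aggregate, ready to load

result = run_pipeline(RAW)
```

At cluster scale the same three steps become `spark.read`, DataFrame transformations, and a write to a governed table, but the dedupe/typing/aggregate structure is unchanged.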
Posted 2 weeks ago
1.5 - 6.5 years
0 Lacs
Hyderabad, Telangana, India
On-site
POSITION SUMMARY Zoetis, Inc. is the world's largest producer of medicine and vaccinations for pets and livestock. The Zoetis Tech & Digital (ZTD) Global ERP organization is a key building block of ZTD comprising enterprise applications and systems platforms. Join us at Zoetis India Capability Center (ZICC) in Hyderabad, where innovation meets excellence. As part of the world's leading animal healthcare company, ZICC is at the forefront of driving transformative advancements and applying technology to solve the most complex problems. Our mission is to ensure sustainable growth and maintain a competitive edge for Zoetis globally by leveraging the exceptional talent in India. At ZICC, you'll be part of a dynamic team that partners with colleagues worldwide, embodying the true spirit of One Zoetis. Together, we ensure seamless integration and collaboration, fostering an environment where your contributions can make a real impact. Be a part of our journey to pioneer innovation and drive the future of animal healthcare. Responsibilities: * Data Analysis and Interpretation o Perform exploratory and advanced data analysis using Python, SQL, and relevant statistical techniques. o Identify trends, patterns, and actionable insights to support business decisions. o Cleanse, transform, and validate large datasets from diverse sources, including Azure-based platforms. * Data Visualization o Design and build clear, interactive dashboards and visual reports using Excel, Power BI, Tableau, or similar tools. o Translate complex datasets into easy-to-understand visual narratives for stakeholders. o Ensure visualizations effectively highlight key metrics and business drivers. * Problem-Solving and Attention to Detail o Apply strong analytical thinking to identify anomalies and resolve data inconsistencies. o Maintain accuracy and completeness in data reporting, adhering to defined SLA timelines. o Provide ongoing support and troubleshooting for business users and stakeholders.
* Deployment and Maintenance o Deploy reports and dashboards in secure and scalable environments, including Azure services (e.g., Azure Synapse, Azure Data Factory). o Monitor performance and data refresh processes, ensuring reliability and efficiency. o Implement feedback-based enhancements and maintain documentation for data products. * Collaboration o Collaborate cross-functionally with product teams, data engineers, and business users to align on data needs and outcomes. o Participate in data reviews and contribute to shared standards and best practices. o Communicate findings clearly and effectively, both verbally and in writing. * Continuous Learning and Innovation o Stay current with advancements in data analytics, cloud technologies, and BI tools. o Pursue ongoing learning and certifications to deepen technical expertise. o Explore and pilot new tools, methodologies, or frameworks to improve data processes. POSITION RESPONSIBILITIES Percent of Time Design, develop, deploy, and support Data solutions. 60% Code reviews 20% Cross-Team Collaboration and Learning New Technologies to stay up to date. 10% Global Manufacturing Supply process understanding like production planning, quality, inventory, and supply chain. MES (execution system) understanding, and SAP-ERP landscape. 10% ORGANIZATIONAL RELATIONSHIPS * Interacting with business stakeholders to gather integration requirements, understand business processes, and ensure that integration solutions align with organizational goals and objectives. * Work with implementation partners who may be responsible for deploying, configuring, or maintaining integrated solutions within Zoetis IT landscape. * Coordinate with developers and other members of the team to implement integration solutions, share knowledge, and address technical challenges. EDUCATION AND EXPERIENCE Education: Bachelor's/Master's degree in Computer Science/Applications.
Experience: * 1.5-6.5 years of overall experience in data analysis/science and business intelligence. * Solid knowledge of SQL and Python for data analysis, transformation, and automation. * Strong analytical mindset with excellent communication skills and a proactive, problem-solving attitude. * Familiarity with CI/CD processes for automating report deployments and data workflows. * Experience using Git for version control and collaborative development. * Understanding of API integration to extract, manipulate, or serve data from cloud platforms or databases like Azure Data Lake and PostgreSQL. * Knowledge of data visualization best practices and libraries (e.g., matplotlib, seaborn, Plotly) is a plus. Proficiency in Power BI is required; experience with Tableau is a strong advantage. TECHNICAL SKILLS REQUIREMENTS * Python, R, Ruby, SQL, CI/CD, Data Viz., Power-BI PHYSICAL POSITION REQUIREMENTS Regular working hours are from 11 AM to 8:00 PM IST. Sometimes, more overlap with the EST time zone is required during production go-live. This description indicates the general nature and level of work expected. It is not designed to cover or contain a comprehensive listing of activities or responsibilities required of the incumbent. Incumbent may be asked to perform other duties as required. Additional position specific requirements/responsibilities are contained in approved training curricula. About Zoetis At Zoetis, our purpose is to nurture the world and humankind by advancing care for animals. As a Fortune 500 company and the world leader in animal health, we discover, develop, manufacture and commercialize vaccines, medicines, diagnostics and other technologies for companion animals and livestock. We know our people drive our success. Our award-winning culture, built around our Core Beliefs, focuses on our colleagues' careers, connection and support.
We offer competitive healthcare and retirement savings benefits, along with an array of benefits, policies and programs to support employee well-being in every sense, from health and financial wellness to family and lifestyle resources. Global Job Applicant Privacy Notice
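The SQL-plus-Python analysis workflow this posting describes can be sketched compactly, with the standard library's in-memory SQLite standing in for the Azure/PostgreSQL sources it mentions (the table, columns, and figures are invented for illustration):

```python
import sqlite3

import pandas as pd

# In-memory database as a stand-in for a production warehouse.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE production (site TEXT, batch TEXT, units INTEGER);
    INSERT INTO production VALUES
        ('HYD', 'B1', 500), ('HYD', 'B2', 450), ('PUN', 'B3', 300);
""")

# Pull an aggregate straight into pandas for further analysis or a dashboard.
df = pd.read_sql_query(
    "SELECT site, SUM(units) AS total_units "
    "FROM production GROUP BY site ORDER BY site",
    conn,
)
```

From here the DataFrame would typically feed a Power BI dataset or a matplotlib/Plotly chart, which is the visualization half of the role.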
Posted 2 weeks ago
0.0 - 1.0 years
0 Lacs
Bengaluru, Karnataka
Remote
Job Title: AI/ML Developer – (Intern) Company: VASPP Technologies Pvt. Ltd. Location: Bengaluru, Karnataka, India Job Type: Full-Time Experience: Fresher (0–1 year) Department: Technology / Development About VASPP Technologies: VASPP Technologies Pvt. Ltd. is a fast-growing software company focused on delivering cutting-edge digital transformation solutions for global enterprises. Our innovative projects span across AI/ML, data analytics, enterprise solutions, and cloud computing. We foster a collaborative and dynamic environment that encourages learning and growth. Job Summary: We are seeking a motivated and enthusiastic AI/ML Developer – Fresher to join our growing technology team. The ideal candidate will have a foundational understanding of machine learning algorithms, data analysis, and model deployment. You will work closely with senior developers to contribute to real-world AI/ML projects and software applications. Responsibilities: Assist in the design, development, training, and deployment of AI and machine learning models. Collaborate with cross-functional teams including software engineers, data scientists, and product managers to build intelligent applications. Perform data collection, cleaning, transformation, and exploratory data analysis (EDA). Test various ML algorithms (e.g., classification, regression, clustering) and optimize them for performance. Implement model evaluation metrics and fine-tune hyperparameters. Contribute to integrating ML models into software applications using REST APIs or embedded services. Stay updated with the latest AI/ML frameworks, research papers, and industry trends. Document all work including model development, experiments, and deployment steps in a structured format. Required Skills: Proficiency in Python and libraries such as NumPy, Pandas, Scikit-learn, TensorFlow, or PyTorch. Solid understanding of machine learning principles: supervised/unsupervised learning, overfitting, cross-validation, etc.
Familiarity with data visualization tools: Matplotlib, Seaborn, Plotly. Basic knowledge of SQL and working with relational databases. Good understanding of software development basics, version control (Git), and collaborative tools. Strong problem-solving mindset, eagerness to learn, and ability to work in a team environment. Educational Qualification: Bachelor’s degree in Computer Science, Information Technology, Data Science, Artificial Intelligence, or related fields from a recognized institution. Preferred Qualifications (Optional): Internship or academic projects related to AI/ML. Participation in online competitions (e.g., Kaggle, DrivenData) or open-source contributions. Exposure to cloud platforms like AWS, Google Cloud (GCP), or Microsoft Azure. Familiarity with model deployment techniques using Flask/FastAPI, Docker, or Streamlit. Compensation: CTC/Stipend: ₹5,000 or ₹8,000 per month How to Apply: Send your updated resume and portfolio to: Email: piyush.vs@vaspp.com or aparna.bs@vaspp.com Job Type: Internship Contract length: 2 months Pay: ₹5,000.00 - ₹8,000.00 per month Benefits: Paid sick time Work from home Schedule: Monday to Friday Morning shift Application Question(s): This is a 2-month internship and the stipend will be based on performance and the interview process; is that okay for you? Education: Bachelor's (Preferred) Experience: AI: 1 year (Preferred) Language: English (Preferred) Location: Bangalore, Karnataka (Required) Work Location: In person Application Deadline: 14/06/2025
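The cross-validation and overfitting concepts in the required skills above can be demonstrated in a few lines of scikit-learn; the dataset and model are arbitrary stand-ins:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# 5-fold cross-validation: every sample is scored by a model that never
# saw it during training, which is the standard guard against overfitting.
X, y = load_iris(return_X_y=True)
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5)
mean_acc = scores.mean()
```

Comparing the cross-validated mean against training accuracy is usually the first diagnostic an intern would run before any hyperparameter tuning.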
Posted 2 weeks ago
0.0 - 6.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Role: Senior Analyst - Data Engineering Experience: 3 to 6 years Location: Bengaluru, Karnataka, India (BLR) Job Description: We are seeking a highly experienced and skilled Senior Data Engineer to join our dynamic team. This role requires hands-on experience with databases such as Snowflake and Teradata, as well as advanced knowledge in various data science and AI techniques. The successful candidate will play a pivotal role in driving data-driven decision-making and innovation within our organization. Job Responsibilities: Design, develop, and implement advanced machine learning models to solve complex business problems. Apply AI techniques and generative AI models to enhance data analysis and predictive capabilities. Utilize Tableau and other visualization tools to create insightful and actionable dashboards for stakeholders. Manage and optimize large datasets using Snowflake and Teradata databases. Collaborate with cross-functional teams to understand business needs and translate them into analytical solutions. Stay updated with the latest advancements in data science, machine learning, and AI technologies. Mentor and guide junior data scientists, fostering a culture of continuous learning and development. Communicate complex analytical concepts and results to non-technical stakeholders effectively. Key Technologies & Skills: Machine Learning Models: Supervised learning, unsupervised learning, reinforcement learning, deep learning, neural networks, decision trees, random forests, support vector machines (SVM), clustering algorithms, etc. AI Techniques: Natural language processing (NLP), computer vision, generative adversarial networks (GANs), transfer learning, etc. Visualization Tools: Tableau, Power BI, Matplotlib, Seaborn, Plotly, etc. Databases: Snowflake, Teradata, SQL, NoSQL databases. Programming Languages: Python (essential), R, SQL. Python Libraries: TensorFlow, PyTorch, scikit-learn, pandas, NumPy, Keras, SciPy, etc.
Data Processing: ETL processes, data warehousing, data lakes. Cloud Platforms: AWS, Azure, Google Cloud Platform. Big Data Technologies: Apache Spark, Hadoop. Job Snapshot Updated Date 11-06-2025 Job ID J_3679 Location Bengaluru, Karnataka, India Experience 3 - 6 Years Employee Type Permanent
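Of the techniques this posting lists, clustering is the quickest to sketch end to end. A small example with scikit-learn's KMeans on two synthetic blobs (the blob locations, sizes, and cluster count are arbitrary choices for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

# Two well-separated Gaussian blobs of 50 points each.
rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal(loc=0.0, scale=0.3, size=(50, 2)),
    rng.normal(loc=5.0, scale=0.3, size=(50, 2)),
])

# n_init=10 reruns the algorithm from several seeds and keeps the best fit.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
```

On real warehouse data the `points` array would come from a Snowflake or Teradata query, and choosing `n_clusters` would involve a diagnostic such as the elbow method or silhouette score.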
Posted 2 weeks ago
4.0 years
0 Lacs
Greater Kolkata Area
On-site
Company Description UsefulBI Corporation provides comprehensive solutions across Data Engineering, Data Science, AI/ML, and Business Intelligence. The company's mission is to empower astute business decisions through integrating data insights and cutting-edge AI. UsefulBI excels in data architecture, cloud strategies, Business Intelligence, and Generative AI to deliver outcomes that surpass individual capabilities. Role Description We are seeking a skilled R and Python Developer with hands-on experience developing and deploying applications using Posit (formerly RStudio) tools, including Shiny Server, Posit Connect, and R Markdown. The ideal candidate will have a strong background in data analysis, application development, and creating interactive dashboards for data-driven decision-making. Key Responsibilities Design, develop, and deploy interactive web applications using R Shiny and Posit Connect. Write clean, efficient, and modular code in R and Python for data processing and analysis. Build and maintain R Markdown reports and Python notebooks for business reporting. Integrate R and Python scripts for advanced analytics and automation workflows. Collaborate with data scientists, analysts, and business users to gather requirements and deliver scalable solutions. Troubleshoot application issues and optimize performance on Posit platform (RStudio Server, Posit Connect). Work with APIs, databases (SQL, NoSQL), and cloud platforms (e.g., AWS, Azure) as part of application development. Ensure version control using Git and CI/CD for application deployment. Required Qualifications 4+ years of development experience using R and Python. Strong experience with Shiny apps, R Markdown, and Posit Connect. Proficient in using packages like dplyr, ggplot2, plotly, reticulate, and shiny. Experience with Python data stack (pandas, numpy, matplotlib, etc.) Hands-on experience with deploying apps on Posit Server / Connect. Familiarity with Git, Docker, and CI/CD tools. 
Excellent problem-solving and communication skills. (ref:hirist.tech)
Posted 2 weeks ago
5.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role Description React JS Developer Lead II - Software Engineering Act creatively to develop applications by selecting appropriate technical options, optimizing application development, maintenance, and performance by employing design patterns and reusing proven solutions. Account for others' developmental activities, assisting the Project Manager in day-to-day project execution. Outcomes Interpret the application feature and component designs to develop them in accordance with specifications. Code, debug, test, document, and communicate product component and feature development stages. Validate results with user representatives, integrating and commissioning the overall solution. Select and create appropriate technical options for development, such as reusing, improving, or reconfiguring existing components, while creating own solutions for new contexts; optimize efficiency, cost, and quality. Influence and improve customer satisfaction Influence and improve employee engagement within the project teams Set FAST goals for self/team; provide feedback on FAST goals of team members Measures Of Outcomes Adherence to engineering process and standards (coding standards) Adherence to project schedule / timelines Number of technical issues uncovered during the execution of the project Number of defects in the code Number of defects post delivery Number of non-compliance issues Percent of voluntary attrition On-time completion of mandatory compliance trainings Code Outputs Expected: Code as per the design Define coding standards, templates, and checklists Review code – for team and peers Documentation Create/review templates, checklists, guidelines, and standards for design/process/development Create/review deliverable documents.
Design documentation Requirements test cases and results Configure Define and govern configuration management plan Ensure compliance from the team Test Review/Create unit test cases scenarios and execution Review test plan created by testing team Provide clarifications to the testing team Domain Relevance Advise software developers on design and development of features and components with deeper understanding of the business problem being addressed for the client Learn more about the customer domain and identify opportunities to provide value addition to customers Complete relevant domain certifications Manage Project Support Project Manager with inputs for the projects Manage delivery of modules Manage complex user stories Manage Defects Perform defect RCA and mitigation Identify defect trends and take proactive measures to improve quality Estimate Create and provide input for effort and size estimation and plan resources for projects Manage Knowledge Consume and contribute to project related documents share point libraries and client universities Review the reusable documents created by the team Release Execute and monitor release process Design Contribute to creation of design (HLD LLD SAD)/architecture for applications features business components and data models Interface With Customer Clarify requirements and provide guidance to Development Team Present design options to customers Conduct product demos Work closely with customer architects for finalizing design Manage Team Set FAST goals and provide feedback Understand aspirations of the team members and provide guidance opportunities etc Ensure team members are upskilled Ensure team is engaged in project Proactively identify attrition risks and work with BSE on retention measures Certifications Obtain relevant domain and technology certifications Skill Examples Explain and communicate the design / development to the customer Perform and evaluate test results against product specifications Break down complex 
problems into logical components Develop user interfaces business software components Use data models Estimate time and effort resources required for developing / debugging features / components Perform and evaluate test in the customer or target environments Make quick decisions on technical/project related challenges Manage a team mentor and handle people related issues in team Have the ability to maintain high motivation levels and positive dynamics within the team. Interface with other teams designers and other parallel practices Set goals for self and team. Provide feedback for team members Create and articulate impactful technical presentations Follow high level of business etiquette in emails and other business communication Drive conference calls with customers and answer customer questions Proactively ask for and offer help Ability to work under pressure determine dependencies risks facilitate planning handling multiple tasks. Build confidence with customers by meeting the deliverables timely with a quality product. Estimate time and effort of resources required for developing / debugging features / components Knowledge Examples Appropriate software programs / modules Functional & technical designing Programming languages – proficient in multiple skill clusters DBMS Operating Systems and software platforms Software Development Life Cycle Agile – Scrum or Kanban Methods Integrated development environment (IDE) Rapid application development (RAD) Modelling technology and languages Interface definition languages (IDL) Broad knowledge of customer domain and deep knowledge of sub domain where problem is solved Additional Comments Who we are: At UST, we help the world’s best organizations grow and succeed through transformation. Bringing together the right talent, tools, and ideas, we work with our client to co-create lasting change. 
Together, with over 30,000 employees in over 25 countries, we build for boundless impact—touching billions of lives in the process. Visit us at UST.com. Key Responsibilities Understand the key requirements to augment the system and application architecture as needed. Be a team player and interact with different stakeholders as required. Quickly learn new skills required to perform the job role effectively. Provide accurate estimates on the work items and effectively communicate any bottlenecks on time. Deliver the assigned work items on schedule. Follow coding standards and guidelines set by the team and write secure, reliable, testable & readable code. Participate in technical discussions with the software development team. Participate in planning, design, development, and implementation of multiple initiatives. Develop applications following agile software development methodologies and principles. Essential skills 5-8 Years of Professional Front-end development experience with minimum 3 years of recent hands-on experience on React JS. Good experience and knowledge on React component libraries (E.g.: Bootstrap, Material UI, etc.) Good experience in CSS toolkits like SASS, SCSS or Styled components and BEM Guidelines for CSS. Experience with React performance testing, performance optimization and debugging (React profiler, server-side rendering, code splitting/lazy loading) Strong experience in HTML, CSS, and JavaScript. Strong knowledge in Data structures and Algorithms Strong understanding on SQL and NoSQL Databases (E.g.: MongoDB, MS SQL Server, etc.) Proficient in Software development design patterns (E.g.: Singleton, Factory, etc.) Experience in Micro-frontend development using Module Federation plugin or similar (E.g.: single-spa) Experience in building dynamic visualizations using charting libraries like D3.js, Plotly.js or similar (E.g.: Highcharts, Chart.js, etc.) Strong Analytical and Problem-Solving skills.
Good in using IDEs like VS Code or JetBrains WebStorm/PyCharm/Rider Experience using version control systems (e.g., Git). Experience with Frontend dev tools like Webpack, Vite, Prettier, ESLint, Rollup, Babel, etc. Desired skills Experience in Other JavaScript Frameworks is an added advantage (e.g.: Vue.js, Angular, Node.js, etc.) Good understanding on Data Grids and other relevant component libraries (E.g.: AG Grid, Handsontable) Hands-on experience testing, debugging, and troubleshooting REST APIs implemented using Python FastAPI or .Net Core WebAPI. Familiarity with Data science and ML frameworks Data caching and related technologies (E.g.: Redis or Memcached) Understanding on Queues and Tasks (E.g.: RabbitMQ) Knowledge on SOLID design principles Experience in any one cloud platform (E.g.: AWS, Azure) Experience in building progressive web apps using React JS or Flutter (Dart) Knowledge on Containerization using Docker and/or Kubernetes and scaling. Knowledge on CI/CD pipeline and build tools like Jenkins, JFrog, OpenShift, etc. Educational Qualifications Engineering Degree, Preferably in CS, ECE What we believe: We’re proud to embrace the same values that have shaped UST since the beginning. Since day one, we’ve been building enduring relationships and a culture of integrity. And today, it's those same values that are inspiring us to encourage innovation from everyone, to champion diversity and inclusion and to place people at the centre of everything we do. Humility: We will listen, learn, be empathetic and help selflessly in our interactions with everyone. Humanity: Through business, we will better the lives of those less fortunate than ourselves. Integrity: We honour our commitments and act with responsibility in all our relationships. Equal Employment Opportunity Statement UST is an Equal Opportunity Employer.
We believe that no one should be discriminated against because of their differences, such as age, disability, ethnicity, gender, gender identity and expression, religion, or sexual orientation. All employment decisions shall be made without regard to age, race, creed, colour, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. UST reserves the right to periodically redefine your roles and responsibilities based on the requirements of the organization and/or your performance. To support and promote the values of UST, comply with all Company policies and procedures.

Skills: Design, CSS, HTML, JavaScript
Posted 2 weeks ago
2.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Growexx is seeking a talented and motivated Software Engineer to join our growing engineering team. You will play a key role in designing, developing, and maintaining scalable software solutions that power our analytics platform. This is an exciting opportunity to work on impactful projects in a collaborative, fast-paced environment.

Key Responsibilities
Design, develop, test, and deploy high-quality software solutions. Collaborate with product managers, designers, and data scientists to deliver new features and enhancements. Write clean, maintainable, and efficient code following best practices. Participate in code reviews and contribute to the continuous improvement of engineering processes. Troubleshoot and resolve technical issues across the stack. Stay current with emerging technologies and propose innovative solutions.

Key Skills
Proficiency in one or more programming languages (e.g., Python, JavaScript, TypeScript, Go, Java). Experience with modern web frameworks (e.g., React, Angular, Vue). Familiarity with RESTful APIs, microservices, and cloud platforms (e.g., AWS, Azure, GCP). Strong problem-solving skills and attention to detail.

Preferred
Experience with data visualization libraries (e.g., D3.js, Plotly). Knowledge of data pipelines, ETL processes, or big data technologies. Familiarity with containerization (Docker, Kubernetes). Exposure to machine learning or AI-driven applications.

Education and Experience
Bachelor’s degree in Computer Science, Engineering, or a related field. 2+ years of professional software development experience.

Analytical and Personal Skills
Must have good logical reasoning and analytical skills. Ability to break big goals into small incremental actions. Excellent communication skills in English, both written and verbal. Demonstrate ownership and accountability of their work. Great attention to detail. Self-critical. Positive and cheerful outlook in life.
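As a small illustration of the ETL-style work mentioned under the preferred skills, here is a minimal extract-and-aggregate sketch in plain Python; the column names and figures are invented for the example.

```python
import csv
import io
from collections import defaultdict

# Hypothetical raw export; the columns and values are invented for illustration.
RAW = """region,revenue
North, 1200
South,800
North,300
,500
"""

def extract(text):
    """Parse CSV text into dictionaries, skipping rows with a missing region."""
    rows = csv.DictReader(io.StringIO(text))
    return [r for r in rows if r["region"].strip()]

def transform(rows):
    """Aggregate revenue per region, coercing whitespace-padded numbers."""
    totals = defaultdict(int)
    for r in rows:
        totals[r["region"].strip()] += int(r["revenue"].strip())
    return dict(totals)

print(transform(extract(RAW)))  # {'North': 1500, 'South': 800}
```

In a production pipeline the load step would write these aggregates to a warehouse table rather than printing them.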
Posted 2 weeks ago
5.0 - 6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are looking for a Python Data Engineer with expertise in real-time data monitoring, extraction, transformation, and visualization. The ideal candidate will have experience working with Oracle SQL databases, multithreading, and AI/ML techniques, and should be proficient in deploying Python applications on IIS servers. The role involves developing a system to monitor live files and folders, extract data, transform it using various techniques, and display insights on a Plotly Dash-based dashboard.

Responsibilities
Backend & Frontend Development: Build end-to-end solutions using Python for both backend and frontend functionalities.
Data Extraction & Transformation: Implement data cleaning, regex, formatting, and data handling to process extracted information.
Database Management: Insert and update records in an Oracle SQL database, ensuring data integrity and efficiency.
Live File & Folder Monitoring: Develop Python scripts using Watchdog to monitor logs, detect new files/folders, and extract data in real time. Fetch live data from the database using multithreading for smooth real-time updates.
Data Visualization: Develop an interactive dashboard using Plotly Dash or React for real-time data representation.
Data Analytics & Pattern Finding: Perform exploratory data analysis (EDA) to identify trends, anomalies, and key insights.
Cloud & AI/ML Integration: Leverage AI/ML techniques for data processing.
Deployment & Maintenance: Deploy applications on an IIS server/cloud and ensure system scalability and security.

Qualifications
BE/BTech degree in Computer Science, EE, or a related field.

Essential Skills
Strong Python programming skills. Experience with Watchdog for real-time monitoring. Expertise in Oracle SQL (data insertion, updates, query optimization). Knowledge of AI/ML techniques and their practical applications. Hands-on experience with any UI framework, such as Plotly Dash, React, or Angular, for dashboard development. Familiarity with IIS deployment and troubleshooting.
Good understanding of data cleaning, ETL pipelines, and real-time data streaming. Strong debugging and problem-solving skills. Prior experience working on real-time monitoring systems.

Experience
Years of experience: 5-6 years
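The live file and folder monitoring described above is typically built on the watchdog package's event observers; the sketch below approximates the same idea with a stdlib polling snapshot so it stays self-contained (the directory and file names are invented).

```python
import os
import tempfile

def snapshot(folder):
    """Record the set of files currently present in a folder."""
    return set(os.listdir(folder))

def detect_new_files(folder, previous):
    """Return files that appeared since the last snapshot (watchdog would
    deliver these as filesystem events instead of requiring a poll)."""
    current = snapshot(folder)
    return sorted(current - previous), current

# Simulate one polling cycle in a scratch directory.
workdir = tempfile.mkdtemp()
before = snapshot(workdir)
with open(os.path.join(workdir, "batch_001.log"), "w") as f:
    f.write("ts=0 value=42\n")
new_files, _ = detect_new_files(workdir, before)
print(new_files)  # ['batch_001.log']
```

In the real system, each detected file would be parsed and its records inserted into the Oracle SQL database for the dashboard to pick up.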
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Step into the world of AI innovation with the Deccan AI Experts Community (By Soul AI), where you become a creator, not just a consumer. We are reaching out to the top 1% of Soul AI’s Data Visualization Engineers like YOU for a unique job opportunity to work with the industry leaders.

What’s in it for you?
Pay above market standards. The role is going to be contract based, with project timelines from 2-6 months, or freelancing. Be a part of an elite community of professionals who can solve complex AI challenges.

Work location could be:
Remote
Onsite at client location: US, UAE, UK, India, etc.
Deccan AI’s office: Hyderabad or Bangalore

Responsibilities:
Architect and implement enterprise-level BI solutions to support strategic decision-making, along with data democratization by enabling self-service analytics for non-technical users. Lead data governance and data quality initiatives to ensure consistency, and design data pipelines and automated reporting solutions using SQL and Python. Optimize big data queries and analytics workloads for cost efficiency, and implement real-time analytics dashboards and interactive reports. Mentor junior analysts and establish best practices for data visualization.

Required Skills:
Advanced SQL, Python (Pandas, NumPy), and BI tools (Tableau, Power BI, Looker). Expertise in AWS (Athena, Redshift), GCP (BigQuery), or Snowflake. Experience with data governance, lineage tracking, and big data tools (Spark, Kafka). Exposure to machine learning and AI-powered analytics.

Nice to Have:
Experience with graph analytics, geospatial data, and visualization libraries (D3.js, Plotly). Hands-on experience with BI automation and AI-driven analytics.

Who can be a part of the community?
We are looking for top-tier Data Visualization Engineers with expertise in analyzing and visualizing complex datasets. Proficiency in SQL, Tableau, Power BI, and Python (Pandas, NumPy, Matplotlib) is a plus.
If you have experience in this field, then this is your chance to collaborate with industry leaders.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: clear the assessments once you are shortlisted. As soon as you qualify all the screening rounds (assessments, interviews), you will be added to our Expert Community!
4. Profile matching: be patient while we align your skills and preferences with the available projects.
5. Project allocation: you’ll be deployed on your preferred project!

Skip the noise. Focus on opportunities built for you!
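For a flavour of the automated-reporting work the role describes, here is a minimal sketch using an in-memory SQLite table as a stand-in for a warehouse; the table and column names are invented.

```python
import sqlite3

# In-memory stand-in for a warehouse table; schema and data are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("EMEA", 120.0), ("EMEA", 80.0), ("APAC", 200.0)],
)

def regional_report(conn):
    """A self-service-style aggregate that a BI layer might expose."""
    cur = conn.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
    )
    return dict(cur.fetchall())

print(regional_report(conn))  # {'APAC': 200.0, 'EMEA': 200.0}
```

Against Redshift, BigQuery, or Snowflake the same GROUP BY pattern applies; only the connection layer changes.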
Posted 2 weeks ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title And Summary Associate Analyst, R Programmer-3 Overview The Mastercard Economics Institute (MEI) is an economics lab powering scale at Mastercard by owning economic thought leadership in support of Mastercard’s efforts to build a more inclusive and sustainable digital economy The Economics Institute was launched in 2020 to analyze economic trends through the lens of the consumer to deliver tailored and actionable insights on economic issues for customers, partners and policymakers The Institute is composed of a team of economists and data scientists that utilize & synthesize the anonymized and aggregated data from the Mastercard network together with public data to bring powerful insights to life, in the form of 1:1 presentation, global thought leadership, media participation, and commercial work through the company’s product suites About The Role We are looking for an R programmer to join Mastercard’s Economics Institute, reporting to the team lead for Economics Technology. An individual who will: create clear, compelling data visualisations that communicate economic insights to diverse audiences develop reusable R functions and packages to support analysis and automation create and format analytical content using R Markdown and/or Quarto design and build scalable Shiny apps develop interactive visualisations using JavaScript charting libraries (e.g. Plotly, Highcharts, D3.js) or front-end frameworks (e.g. 
React, Angular, Vue.js); work with databases and data platforms (e.g. SQL, Hadoop); write clear, well-documented code that others can understand and maintain; collaborate using Git for version control.

All About You
Proficient in R and the RStudio IDE. Proficient in R packages like dplyr for data cleaning, transformation, and aggregation. Familiarity with dependency management and documentation in R (e.g. roxygen2). Familiar with version control concepts and tools (e.g. Git, GitHub, Bitbucket) for collaborative development. Experience writing SQL and working with relational databases. Creative and passionate about data, coding, and technology. A strong collaborator who can also work independently. Organized and able to prioritise work across multiple projects. Comfortable working with engineers, product owners, data scientists, and economists.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: abide by Mastercard’s security policies and practices; ensure the confidentiality and integrity of the information being accessed; report any suspected information security violation or breach; and complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.

R-250450
Posted 2 weeks ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title And Summary Associate Analyst, R Programmer-2 Overview The Mastercard Economics Institute (MEI) is an economics lab powering scale at Mastercard by owning economic thought leadership in support of Mastercard’s efforts to build a more inclusive and sustainable digital economy The Economics Institute was launched in 2020 to analyze economic trends through the lens of the consumer to deliver tailored and actionable insights on economic issues for customers, partners and policymakers The Institute is composed of a team of economists and data scientists that utilize & synthesize the anonymized and aggregated data from the Mastercard network together with public data to bring powerful insights to life, in the form of 1:1 presentation, global thought leadership, media participation, and commercial work through the company’s product suites About The Role We are looking for an R programmer to join Mastercard’s Economics Institute, reporting to the team lead for Economics Technology. An individual who will: create clear, compelling data visualisations that communicate economic insights to diverse audiences develop reusable R functions and packages to support analysis and automation create and format analytical content using R Markdown and/or Quarto design and build scalable Shiny apps develop interactive visualisations using JavaScript charting libraries (e.g. Plotly, Highcharts, D3.js) or front-end frameworks (e.g. 
React, Angular, Vue.js); work with databases and data platforms (e.g. SQL, Hadoop); write clear, well-documented code that others can understand and maintain; collaborate using Git for version control.

All About You
Proficient in R and the RStudio IDE. Proficient in R packages like dplyr for data cleaning, transformation, and aggregation. Familiarity with dependency management and documentation in R (e.g. roxygen2). Familiar with version control concepts and tools (e.g. Git, GitHub, Bitbucket) for collaborative development. Experience writing SQL and working with relational databases. Creative and passionate about data, coding, and technology. A strong collaborator who can also work independently. Organized and able to prioritise work across multiple projects. Comfortable working with engineers, product owners, data scientists, and economists.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: abide by Mastercard’s security policies and practices; ensure the confidentiality and integrity of the information being accessed; report any suspected information security violation or breach; and complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.

R-250449
Posted 2 weeks ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title And Summary Associate Analyst, R Programmer-1 Overview The Mastercard Economics Institute (MEI) is an economics lab powering scale at Mastercard by owning economic thought leadership in support of Mastercard’s efforts to build a more inclusive and sustainable digital economy The Economics Institute was launched in 2020 to analyze economic trends through the lens of the consumer to deliver tailored and actionable insights on economic issues for customers, partners and policymakers The Institute is composed of a team of economists and data scientists that utilize & synthesize the anonymized and aggregated data from the Mastercard network together with public data to bring powerful insights to life, in the form of 1:1 presentation, global thought leadership, media participation, and commercial work through the company’s product suites About The Role We are looking for an R programmer to join Mastercard’s Economics Institute, reporting to the team lead for Economics Technology. An individual who will: create clear, compelling data visualisations that communicate economic insights to diverse audiences develop reusable R functions and packages to support analysis and automation create and format analytical content using R Markdown and/or Quarto design and build scalable Shiny apps develop interactive visualisations using JavaScript charting libraries (e.g. Plotly, Highcharts, D3.js) or front-end frameworks (e.g. 
React, Angular, Vue.js); work with databases and data platforms (e.g. SQL, Hadoop); write clear, well-documented code that others can understand and maintain; collaborate using Git for version control.

All About You
Proficient in R and the RStudio IDE. Proficient in R packages like dplyr for data cleaning, transformation, and aggregation. Familiarity with dependency management and documentation in R (e.g. roxygen2). Familiar with version control concepts and tools (e.g. Git, GitHub, Bitbucket) for collaborative development. Experience writing SQL and working with relational databases. Creative and passionate about data, coding, and technology. A strong collaborator who can also work independently. Organized and able to prioritise work across multiple projects. Comfortable working with engineers, product owners, data scientists, and economists.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: abide by Mastercard’s security policies and practices; ensure the confidentiality and integrity of the information being accessed; report any suspected information security violation or breach; and complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.

R-250448
Posted 2 weeks ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description
Data Engineer – Job Description
We are looking for a highly skilled Data Engineer to design, build, and maintain robust data pipelines and infrastructure. This role requires expertise in Python, PySpark, SQL, and modern cloud platforms such as Snowflake. The ideal candidate will collaborate with business stakeholders and analytics teams to ensure the efficient collection, transformation, and delivery of data to power insights and decision-making.

Responsibilities
Understand business requirements, system designs, and security standards. Collaborate with SMEs to analyze existing processes, gather functional requirements, and identify improvements. Build and streamline data pipelines using Python, PySpark, SQL, and Spark from various data sources. Support data cataloging and knowledge base development. Develop tools for analytics and data science teams to optimize data product consumption. Enhance data system functionality in collaboration with data and analytics experts. Communicate insights using statistical analysis, data visualization, and storytelling techniques. Manage technical and business documentation for all data engineering efforts. Participate in hands-on development and coordinate with onshore/offshore teams.

Requirements
5+ years of experience building data pipelines on on-premise and cloud platforms (e.g., Snowflake). Strong expertise in Python, PySpark, and SQL for data ingestion, transformation, and automation. Experience in developing Python-based applications with visualization libraries such as Plotly and Streamlit. Solid knowledge of data engineering concepts and practices, including metadata management and data governance. Proficient in using cloud-based data warehousing and data lake environments. Familiarity with ELT/ETL tools like DBT and Cribl. Experience with incremental data capture, stream ingestion, and real-time data processing.

Preferred Qualifications
Background in cybersecurity, IT infrastructure, or software systems.
3+ years of experience in cloud-based data warehouse and data lake architectures. Hands-on experience with data visualization tools (e.g., Tableau, Plotly, Streamlit). Strong communication skills and ability to translate complex data into actionable insights.

Technical Skills
Python; PySpark; SQL; Snowflake (or other cloud data platforms); Plotly, Streamlit, Flask, Dask; ELT/ETL tools (DBT, Cribl); data visualization (Tableau, Plotly); metadata management & data governance; stream processing & real-time data ingestion

Skills: Python, SQL, Cloud Platforms
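Incremental data capture, listed in the requirements, usually reduces to a high-water-mark pattern: pull only rows newer than the last watermark, then advance it. A minimal sketch with an in-memory SQLite table (the schema is invented for illustration):

```python
import sqlite3

# Source table with an updated_at column; schema and data invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, updated_at INTEGER)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(1, 100), (2, 150), (3, 200)])

def incremental_pull(conn, watermark):
    """Fetch only rows newer than the last high-water mark, then advance it."""
    rows = conn.execute(
        "SELECT id, updated_at FROM events WHERE updated_at > ? ORDER BY updated_at",
        (watermark,),
    ).fetchall()
    new_watermark = rows[-1][1] if rows else watermark
    return rows, new_watermark

rows, wm = incremental_pull(conn, 120)
print(rows, wm)  # [(2, 150), (3, 200)] 200
```

In Snowflake or a PySpark job the same idea applies; the watermark would be persisted between runs rather than held in a variable.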
Posted 2 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
What You'll Be Doing:
Dashboard Development: Design, develop, and maintain interactive and visually compelling dashboards using Power BI. Implement DAX queries and data models to support business intelligence needs. Optimize performance and usability of dashboards for various stakeholders.
Python & Streamlit Applications: Build and deploy lightweight data applications using Streamlit for internal and external users. Integrate Python libraries (e.g., Pandas, NumPy, Plotly, Matplotlib) for data processing and visualization.
Data Integration & Retrieval: Connect to and retrieve data from RESTful APIs, cloud storage (e.g., Azure Data Lake, Cognite Data Fusion), and SQL/NoSQL databases. Automate data ingestion pipelines and ensure data quality and consistency.
Collaboration & Reporting: Work closely with business analysts, data engineers, and stakeholders to gather requirements and deliver insights. Present findings and recommendations through reports, dashboards, and presentations.

Requirements:
Bachelor’s or master’s degree in Computer Science, Data Science, Information Systems, or a related field. 3+ years of experience in data analytics or business intelligence roles. Proficiency in Power BI, including DAX, Power Query, and data modeling. Strong Python programming skills, especially with Streamlit, Pandas, and API integration. Experience with REST APIs, JSON/XML parsing, and cloud data platforms (Azure, AWS, or GCP). Familiarity with version control systems like Git. Excellent problem-solving, communication, and analytical skills.

Preferred Qualifications:
Experience with CI/CD pipelines for data applications. Knowledge of DevOps practices and containerization (Docker). Exposure to machine learning or statistical modeling is a plus.
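As a small sketch of the API-retrieval and data-quality work described above, here is a parse of a canned REST-style JSON payload that drops incomplete records before they would reach a dashboard dataset; the field names are invented.

```python
import json

# Canned payload shaped like a typical REST response; fields are invented.
PAYLOAD = json.dumps({
    "items": [
        {"sensor": "t1", "reading": 21.5},
        {"sensor": "t2", "reading": None},
        {"sensor": "t1", "reading": 22.1},
    ]
})

def clean_readings(payload):
    """Parse the response and drop records with missing readings,
    as an ingestion step would before loading a dashboard dataset."""
    data = json.loads(payload)
    return [(i["sensor"], i["reading"])
            for i in data["items"] if i["reading"] is not None]

print(clean_readings(PAYLOAD))  # [('t1', 21.5), ('t1', 22.1)]
```

In practice the payload would come from an authenticated HTTP call and the cleaned rows would land in Pandas or a database, but the filtering step is the same.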
Posted 2 weeks ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Us:
Athena is India's largest institution in the "premium undergraduate study abroad" space. Founded 10 years ago by two Princeton graduates, Poshak Agrawal and Rahul Subramaniam, Athena is headquartered in Gurgaon, with offices in Mumbai and Bangalore, and caters to students from 26 countries. Athena’s vision is to help students become the best version of themselves. Athena’s transformative, holistic life coaching program embraces both depth and breadth, sciences and the humanities. Athena encourages students to deepen their theoretical knowledge and apply it to address practical issues confronting society, both locally and globally. Through our flagship program, our students have gotten into various universities, including Harvard University, Princeton University, Yale University, Stanford University, University of Cambridge, MIT, Brown, Cornell University, University of Pennsylvania, and University of Chicago, among others. Learn more about Athena: https://www.athenaeducation.co.in/article.aspx

Role Overview
We are looking for an AI/ML Engineer who can mentor high-potential scholars in creating impactful technology projects. This role requires a blend of strong engineering expertise, the ability to distill complex topics into digestible concepts, and a deep passion for student-driven innovation. You’ll help scholars explore the frontiers of AI—from machine learning models to generative AI systems—while coaching them in best practices and applied engineering.

Key Responsibilities:
Guide scholars through the full AI/ML development cycle—from problem definition, data exploration, and model selection to evaluation and deployment. Teach and assist in building: supervised and unsupervised machine learning models; deep learning networks (CNNs, RNNs, Transformers); NLP tasks such as classification, summarization, and Q&A systems. Provide mentorship in prompt engineering: craft optimized prompts for generative models like GPT-4 and Claude.
Teach the principles of few-shot, zero-shot, and chain-of-thought prompting. Experiment with fine-tuning and embeddings in LLM applications. Support scholars with real-world datasets (e.g., Kaggle, open data repositories) and help integrate APIs, automation tools, or MLOps workflows. Conduct internal training and code reviews, ensuring technical rigor in projects. Stay updated with the latest research, frameworks, and tools in the AI ecosystem.

Technical Requirements:
Proficiency in Python and ML libraries: scikit-learn, XGBoost, Pandas, NumPy. Experience with deep learning frameworks: TensorFlow, PyTorch, Keras. Strong command of machine learning theory, including: bias-variance tradeoff, regularization, and model tuning; cross-validation, hyperparameter optimization, and ensemble techniques. Solid understanding of data processing pipelines, data wrangling, and visualization (Matplotlib, Seaborn, Plotly).

Advanced AI & NLP
Experience with transformer architectures (e.g., BERT, GPT, T5, LLaMA). Hands-on with LLM APIs: OpenAI (ChatGPT), Anthropic, Cohere, Hugging Face. Understanding of embedding-based retrieval, vector databases (e.g., Pinecone, FAISS), and Retrieval-Augmented Generation (RAG). Familiarity with AutoML tools, MLflow, Weights & Biases, and cloud AI platforms (AWS SageMaker, Google Vertex AI).

Prompt Engineering & GenAI
Proficiency in crafting effective prompts using: instruction tuning; role-playing and system prompts; prompt chaining tools like LangChain or LlamaIndex. Understanding of AI safety, bias mitigation, and interpretability.

Required Qualifications:
Bachelor’s degree from a Tier-1 engineering college in Computer Science, Engineering, or a related field. 2-5 years of relevant experience in ML/AI roles. Portfolio of projects or publications in AI/ML (GitHub, blogs, competitions, etc.). Passion for education, mentoring, and working with high school scholars.
Excellent communication skills, with the ability to convey complex concepts to a diverse audience.

Preferred Qualifications:
Prior experience in student mentorship, teaching, or edtech. Exposure to Arduino, Raspberry Pi, or IoT for integrated AI/ML projects. Strong storytelling and documentation abilities to help scholars write compelling project reports and research summaries.
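To make the cross-validation concept listed in the technical requirements concrete, here is a plain-Python k-fold evaluation of a trivial mean predictor, with no scikit-learn dependency; the data are synthetic.

```python
# Plain-Python k-fold cross-validation of a mean predictor, to make the
# evaluation idea concrete without scikit-learn; data are made up.
data = [(x, 2.0 * x) for x in range(10)]  # (feature, target) pairs

def k_fold_mse(data, k=5):
    """Average held-out squared error across k folds."""
    fold_size = len(data) // k
    errors = []
    for i in range(k):
        test = data[i * fold_size:(i + 1) * fold_size]
        train = data[:i * fold_size] + data[(i + 1) * fold_size:]
        mean_target = sum(y for _, y in train) / len(train)  # "model" = train mean
        errors.append(sum((y - mean_target) ** 2 for _, y in test) / len(test))
    return sum(errors) / k

print(k_fold_mse(data))  # 51.0
```

A real model would replace the train-set mean with a fitted estimator, but the split-train-score loop is exactly what library cross-validation automates.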
Posted 2 weeks ago
3.0 years
0 Lacs
India
Remote
About BeGig
BeGig is the leading tech freelancing marketplace. We empower innovative, early-stage, non-tech founders to bring their visions to life by connecting them with top-tier freelance talent. By joining BeGig, you’re not just taking on one role—you’re signing up for a platform that will continuously match you with high-impact opportunities tailored to your expertise.

Your Opportunity
Join our network as a Data Scientist and help fast-growing startups transform data into actionable insights, predictive models, and intelligent decision-making tools. You’ll work on real-world data challenges across domains like marketing, finance, healthtech, and AI—with full flexibility to work remotely and choose the engagements that best fit your goals.

Role Overview
As a Data Scientist, you will:
Extract insights from data: analyze complex datasets to uncover trends, patterns, and opportunities.
Build predictive models: develop, validate, and deploy machine learning models that solve core business problems.
Communicate clearly: work with cross-functional teams to present findings and deliver data-driven recommendations.

What You’ll Do
Analytics & Modeling: Explore, clean, and analyze structured and unstructured data using statistical and ML techniques. Build predictive and classification models using tools like scikit-learn, XGBoost, TensorFlow, or PyTorch. Conduct A/B testing, customer segmentation, forecasting, and anomaly detection.
Data Storytelling & Collaboration: Present complex findings in a clear, actionable way using data visualizations (e.g., Tableau, Power BI, Matplotlib). Work with product, marketing, and engineering teams to integrate models into applications or workflows.

Technical Requirements & Skills
Experience: 3+ years in data science, analytics, or a related field.
Programming: Proficient in Python (preferred), R, and SQL.
ML Frameworks: Experience with scikit-learn, TensorFlow, PyTorch, or similar tools.
Data Handling: Strong understanding of data preprocessing, feature engineering, and model evaluation.
Visualization: Familiar with visualization tools like Matplotlib, Seaborn, Plotly, Tableau, or Power BI.
Bonus: Experience working with large datasets, cloud platforms (AWS/GCP), or MLOps practices.

What We’re Looking For
A data-driven thinker who can go beyond numbers to tell meaningful stories. A freelancer who enjoys solving real business problems using machine learning and advanced analytics. A strong communicator with the ability to simplify complex models for stakeholders.

Why Join Us?
Immediate impact: work on projects that directly influence product, growth, and strategy.
Remote & flexible: choose your working hours and project commitments.
Future opportunities: BeGig will continue matching you with data science roles aligned to your strengths.
Dynamic network: collaborate with startups building data-first, insight-driven products.

Ready to turn data into decisions? Apply now to become a key Data Scientist for our client and a valued member of the BeGig network!
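As a quick sketch of the A/B-testing work mentioned above, here is a pooled two-proportion z statistic in plain Python; the sample counts are invented for the example.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z statistic for comparing two conversion rates with pooled variance,
    the core computation of a simple A/B test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Invented experiment: 10% vs 13% conversion on 2000 users per arm.
z = two_proportion_z(conv_a=200, n_a=2000, conv_b=260, n_b=2000)
print(round(z, 2))  # positive z favours variant B; |z| > 1.96 ≈ 5% significance
```

Libraries such as statsmodels wrap this calculation, but writing it out makes the pooled-variance assumption explicit.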
Posted 2 weeks ago