3.0 - 6.0 years
3 - 4 Lacs
Chennai
On-site
Job Summary: We are looking for a skilled Python Developer with 3 to 6 years of experience to design, develop, and maintain high-quality back-end systems and applications. The ideal candidate will have expertise in Python and related frameworks, with a focus on building scalable, secure, and efficient software solutions. This role requires a strong problem-solving mindset, collaboration with cross-functional teams, and a commitment to delivering innovative solutions that meet business objectives.

Responsibilities

Application and Back-End Development:
- Design, implement, and maintain back-end systems and APIs using Python frameworks such as Django, Flask, or FastAPI, focusing on scalability, security, and efficiency.
- Build and integrate scalable RESTful APIs, ensuring seamless interaction between front-end systems and back-end services.
- Write modular, reusable, and testable code following Python's PEP 8 coding standards and industry best practices.
- Develop and optimize robust database schemas for relational and non-relational databases (e.g., PostgreSQL, MySQL, MongoDB), ensuring efficient data storage and retrieval.
- Leverage cloud platforms like AWS, Azure, or Google Cloud for deploying scalable back-end solutions.
- Implement caching mechanisms using tools like Redis or Memcached to optimize performance and reduce latency.

AI/ML Development:
- Build, train, and deploy machine learning (ML) models for real-world applications, such as predictive analytics, anomaly detection, natural language processing (NLP), recommendation systems, and computer vision.
- Work with popular machine learning and AI libraries/frameworks, including TensorFlow, PyTorch, Keras, and scikit-learn, to design custom models tailored to business needs.
- Process, clean, and analyze large datasets using Python tools such as Pandas, NumPy, and PySpark to enable efficient data preparation and feature engineering.
- Develop and maintain pipelines for data preprocessing, model training, validation, and deployment using tools like MLflow, Apache Airflow, or Kubeflow.
- Deploy AI/ML models into production environments and expose them as RESTful or GraphQL APIs for integration with other services.
- Optimize machine learning models to reduce computational costs and ensure smooth operation in production systems.
- Collaborate with data scientists and analysts to validate models, assess their performance, and ensure their alignment with business objectives.
- Implement model monitoring and lifecycle management to maintain accuracy over time, addressing data drift and retraining models as necessary.
- Experiment with cutting-edge AI techniques such as deep learning, reinforcement learning, and generative models to identify innovative solutions for complex challenges.
- Ensure ethical AI practices, including transparency, bias mitigation, and fairness in deployed models.

Performance Optimization and Debugging:
- Identify and resolve performance bottlenecks in applications and APIs to enhance efficiency.
- Use profiling tools to debug and optimize code for memory and speed improvements.
- Implement caching mechanisms to reduce latency and improve application responsiveness.

Testing, Deployment, and Maintenance:
- Write and maintain unit tests, integration tests, and end-to-end tests using Pytest, Unittest, or Nose.
- Collaborate on setting up CI/CD pipelines to automate testing, building, and deployment processes.
- Deploy and manage applications in production environments with a focus on security, monitoring, and reliability.
- Monitor and troubleshoot live systems, ensuring uptime and responsiveness.

Collaboration and Teamwork:
- Work closely with front-end developers, designers, and product managers to implement new features and resolve issues.
- Participate in Agile ceremonies, including sprint planning, stand-ups, and retrospectives, to ensure smooth project delivery.
- Provide mentorship and technical guidance to junior developers, promoting best practices and continuous improvement.

Required Skills and Qualifications

Technical Expertise:
- Strong proficiency in Python and its core libraries, with hands-on experience in frameworks such as Django, Flask, or FastAPI.
- Solid understanding of RESTful API development, integration, and optimization.
- Experience working with relational and non-relational databases (e.g., PostgreSQL, MySQL, MongoDB).
- Familiarity with containerization tools like Docker and orchestration platforms like Kubernetes.
- Expertise in using Git for version control and collaborating in distributed teams.
- Knowledge of CI/CD pipelines and tools like Jenkins, GitHub Actions, or CircleCI.
- Strong understanding of software development principles, including OOP, design patterns, and MVC architecture.

Preferred Skills:
- Experience with asynchronous programming using libraries like asyncio, Celery, or RabbitMQ.
- Knowledge of data visualization tools (e.g., Matplotlib, Seaborn, Plotly) for generating insights.
- Exposure to machine learning frameworks (e.g., TensorFlow, PyTorch, scikit-learn) is a plus.
- Familiarity with big data frameworks like Apache Spark or Hadoop.
- Experience with serverless architecture using AWS Lambda, Azure Functions, or Google Cloud Run.

Soft Skills:
- Strong problem-solving abilities with a keen eye for detail and quality.
- Excellent communication skills to effectively collaborate with cross-functional teams.
- Adaptability to changing project requirements and emerging technologies.
- Self-motivated with a passion for continuous learning and innovation.

Education: Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.

Job Features: Job Category - Software Division
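The caching responsibility above usually starts smaller than Redis: an in-process cache over an expensive call. A minimal sketch using only the standard library; `fetch_report` and its simulated delay are hypothetical stand-ins for a slow database or API query:

```python
from functools import lru_cache
import time

@lru_cache(maxsize=256)
def fetch_report(customer_id: int) -> dict:
    # Stand-in for an expensive database or API call.
    time.sleep(0.01)
    return {"customer_id": customer_id, "status": "ok"}

first = fetch_report(42)    # computed, cached
cached = fetch_report(42)   # served from cache, no recomputation
print(fetch_report.cache_info().hits)  # → 1
```

The same idea scales out to Redis or Memcached when the cache must be shared across processes; the key-by-arguments pattern is identical.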
Posted 1 month ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Role Overview
We are looking for a confident candidate for a Security Engineer/Researcher position with experience in IT security for our Core Research labs in India. McAfee believes that no one person, product, or organization can fight cybercrime alone. It's why we rebuilt McAfee around the idea of working together. Life at McAfee is full of possibility. You'll have the freedom to explore challenges, take smart risks, and reach your potential in one of the fastest-growing industries in the world. You'll be part of a team that supports and inspires you. This is a hybrid position based in Bangalore. You must be within a commutable distance from the location. You will be required to be onsite on an as-needed basis; when not working onsite, you will work remotely from your home location.

About The Role
- Understand threat telemetry trends and identify patterns to reduce time to detect.
- Develop automation to harvest malware threat intelligence from various sources such as product telemetry, OSINT, Dark Web monitoring, spam monitoring, etc.
- Develop early identification and alert systems for threats based on various online platforms and product telemetry.
- Utilize various data mining tools that analyze data inline based on intelligence inputs.
- Analyze malware communication and techniques to find Indicators of Compromise (IOC) or Indicators of Attack (IOA).
- Author descriptions for malware via the McAfee Virus Information Library, Threat Advisories, Whitepapers, or Blogs.

About You
- You should have 7+ years of experience as a security/threat/malware analyst.
- Programming skills: knowledge of programming languages like Python and its packages like NumPy, Matplotlib, and Seaborn is desirable.
- Data source access via Spark and SQL is desirable.
- Machine Learning knowledge is an added advantage.
- Familiarity with UI and dashboard tools like Jupyter and Databricks is an added advantage.
- Excellent communication skills: it is incredibly important to describe findings to both technical and non-technical audiences.

Company Overview
McAfee is a leader in personal security for consumers. Focused on protecting people, not just devices, McAfee consumer solutions adapt to users' needs in an always online world, empowering them to live securely through integrated, intuitive solutions that protect their families and communities with the right security at the right moment.

Company Benefits And Perks
We work hard to embrace diversity and inclusion and encourage everyone at McAfee to bring their authentic selves to work every day. We offer a variety of social programs, flexible work hours and family-friendly benefits to all of our employees.
- Bonus Program
- Pension and Retirement Plans
- Medical, Dental and Vision Coverage
- Paid Time Off
- Paid Parental Leave
- Support for Community Involvement
We're serious about our commitment to diversity, which is why McAfee prohibits discrimination based on race, color, religion, gender, national origin, age, disability, veteran status, marital status, pregnancy, gender expression or identity, sexual orientation or any other legally protected status.
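Harvesting IOCs from raw telemetry, as the role describes, often begins with simple pattern extraction. A sketch under stated assumptions: the log line and regexes are illustrative only (production tooling also handles defanged URLs, IPv6, hash families beyond MD5, and validation against allowlists):

```python
import re

# Hypothetical log excerpt; patterns are deliberately simplified.
log = ("Beacon to 198.51.100.23 then GET hxxp://bad.example.com/drop.exe "
      "md5=44d88612fea8a8f36de82e1278abb02f")

ipv4 = re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", log)   # candidate IPv4 IOCs
md5s = re.findall(r"\b[a-fA-F0-9]{32}\b", log)           # candidate MD5 hashes
print(ipv4, md5s)
```

Extracted candidates would then be deduplicated and enriched against threat-intelligence feeds before alerting.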
Posted 1 month ago
0 years
0 Lacs
India
Remote
Data Science Intern
📍 Location: Remote (100% Virtual)
📅 Duration: 3 Months
💸 Stipend for Top Interns: ₹15,000
🎁 Perks: Certificate | Letter of Recommendation | Full-Time Offer (Based on Performance)

About INLIGHN TECH
INLIGHN TECH is a fast-growing edtech startup offering hands-on, project-based virtual internships designed to prepare students and fresh graduates for today's tech-driven industry. The Data Science Internship focuses on real-world applications of machine learning, statistics, and data engineering to solve meaningful problems.

🚀 Internship Overview
As a Data Science Intern, you'll explore large datasets, build models, and deliver predictive insights. You'll work with machine learning algorithms, perform data wrangling, and communicate your results with visualizations and reports.

🔧 Key Responsibilities
- Collect, clean, and preprocess structured and unstructured data
- Apply machine learning models for regression, classification, clustering, and NLP
- Work with tools like Python, Jupyter Notebook, Scikit-learn, TensorFlow, and Pandas
- Conduct exploratory data analysis (EDA) to discover trends and insights
- Visualize data using Matplotlib, Seaborn, or Power BI/Tableau
- Collaborate with other interns and mentors in regular review and feedback sessions
- Document your work clearly and present findings to the team

✅ Qualifications
- Pursuing or recently completed a degree in Data Science, Computer Science, Statistics, or a related field
- Proficiency in Python and understanding of libraries such as Pandas, NumPy, Scikit-learn
- Basic knowledge of machine learning algorithms and statistical concepts
- Familiarity with data visualization tools and SQL
- Problem-solving mindset and keen attention to detail
- Enthusiastic about learning and applying data science to real-world problems

🎓 What You'll Gain
- Hands-on experience working with real datasets and ML models
- A portfolio of projects that demonstrate your data science capabilities
- Internship Certificate upon successful completion
- Letter of Recommendation for top-performing interns
- Opportunity for a Full-Time Offer based on performance
- Exposure to industry-standard tools, workflows, and best practices
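The EDA step above is mostly about summary statistics before any modeling. A minimal sketch with the standard library (the `session_minutes` values are made up for illustration; in practice this would be a Pandas column):

```python
import statistics as st

# Toy dataset standing in for a cleaned numeric column.
session_minutes = [12, 15, 14, 13, 90, 16, 15, 14]

mean = st.mean(session_minutes)      # 23.625
median = st.median(session_minutes)  # 14.5
stdev = st.stdev(session_minutes)
# A large mean/median gap hints at outliers worth inspecting before modeling.
print(round(mean, 2), median, round(stdev, 2))
```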
Posted 1 month ago
8.0 - 10.0 years
0 Lacs
Thane, Maharashtra, India
On-site
Job Title: Senior AI/ML Developer
Experience: 8-10 Years
Location: Mumbai
Job Type: Full-Time

Key Responsibilities:
- Lead the design and development of machine learning models using Python, TensorFlow, and other AI/ML frameworks.
- Build, train, and optimize machine learning models to improve business processes and outcomes.
- Work with large datasets in distributed environments using PySpark, Hadoop, and Hive.
- Analyze and preprocess data, clean datasets, and implement feature engineering techniques.
- Collaborate with data scientists, engineers, and product teams to deliver AI-powered solutions.
- Conduct model analysis and performance evaluations, ensuring the accuracy and effectiveness of ML models.
- Maintain and document machine learning workflows and processes in a collaborative environment.
- Utilize Git for version control and JIRA for task management and project tracking.
- Continuously monitor and improve model performance, ensuring scalability and efficiency.
- Stay updated with the latest trends and advancements in AI/ML to enhance development capabilities.

Skills and Qualifications:
- 8-10 years of experience in AI/ML development with a strong focus on model building and analysis.
- Strong proficiency in Python for developing machine learning algorithms and solutions.
- Experience with PySpark, Hadoop, and Hive for working with large datasets in a distributed environment.
- Hands-on experience with TensorFlow for deep learning model development and training.
- Solid understanding of machine learning algorithms, techniques, and frameworks.
- Experience with notebooks (e.g., Jupyter) for model development, analysis, and experimentation.
- Strong skills in model evaluation, performance tuning, and optimization.
- Proficiency with Git for version control and JIRA for project management.
- Excellent problem-solving skills and attention to detail.
- Strong communication skills and ability to collaborate with cross-functional teams.

Preferred Qualifications:
- Experience with cloud platforms (AWS, Azure, GCP) for machine learning deployment.
- Knowledge of additional AI/ML frameworks such as Keras, Scikit-learn, or PyTorch.
- Familiarity with CI/CD pipelines and DevOps practices for machine learning.
- Experience with data visualization tools and libraries (e.g., Matplotlib, Seaborn).
- Background in statistical analysis and data mining.
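The model-evaluation skill named above reduces to a few standard metrics. As a worked sketch, precision, recall, and F1 computed from hypothetical binary confusion-matrix counts (libraries like scikit-learn provide the same via `classification_report`):

```python
# Toy confusion-matrix counts for a binary classifier (hypothetical values).
tp, fp, fn, tn = 80, 10, 20, 90

precision = tp / (tp + fp)   # of predicted positives, how many were right: 80/90
recall = tp / (tp + fn)      # of actual positives, how many were found: 80/100
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
print(round(precision, 3), round(recall, 3), round(f1, 3))
```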
Posted 1 month ago
6.0 years
0 Lacs
India
Remote
About Firstsource
Firstsource Solutions Limited, an RP-Sanjiv Goenka Group company (NSE: FSL, BSE: 532809, Reuters: FISO.BO, Bloomberg: FSOL:IN), is a specialized global business process services partner, providing transformational solutions and services spanning the customer lifecycle across Healthcare, Banking and Financial Services, Communications, Media and Technology, Retail, and other diverse industries. With an established presence in the US, the UK, India, Mexico, Australia, South Africa, and the Philippines, we make it happen for our clients, solving their biggest challenges with hyper-focused, domain-centered teams and cutting-edge tech, data, and analytics. Our real-world practitioners work collaboratively to deliver future-focused outcomes.

Job Title: Lead Data Scientist
Mode of work: Remote

Responsibilities
- Design and implement data-driven solutions to optimize customer experience metrics, reduce churn, and enhance customer satisfaction using statistical analysis, machine learning, and predictive modeling.
- Collaborate with CX teams, contact center operations, customer success, and product teams to gather requirements, understand customer journey objectives, and translate them into actionable analytical solutions.
- Perform exploratory data analysis (EDA) on customer interaction data, contact center metrics, survey responses, and behavioral data to identify pain points and opportunities for CX improvement.
- Build, validate, and deploy machine learning models for customer sentiment analysis, churn prediction, next-best-action recommendations, contact center forecasting, and customer lifetime value optimization.
- Develop CX dashboards and reports using BI tools to track key metrics like NPS, CSAT, FCR, AHT, and customer journey analytics to support strategic decision-making.
- Optimize model performance for real-time customer experience applications through hyperparameter tuning, A/B testing, and continuous performance monitoring.
- Contribute to customer data architecture and pipeline development to ensure scalable and reliable customer data flows across touchpoints (voice, chat, email, social, web).
- Document CX analytics methodologies, customer segmentation strategies, and model outcomes to ensure reproducibility and enable knowledge sharing across CX transformation initiatives.
- Mentor junior data scientists and analysts on CX-specific use cases, and participate in code reviews to maintain high-quality standards for customer-facing analytics.

Skill Requirements
- Proven experience (at least 6+ years) in data science, analytics, and statistical modeling with a specific focus on customer experience, contact center analytics, or customer behavior analysis, including strong understanding of CX metrics, customer journey mapping, and voice-of-customer analytics.
- Proficiency in Python and/or R for customer data analysis, sentiment analysis, and CX modeling applications.
- Experience with data analytics libraries such as pandas, NumPy, scikit-learn, and visualization tools like matplotlib, seaborn, or Plotly for customer insights and CX reporting.
- Experience with machine learning frameworks such as Scikit-learn, XGBoost, LightGBM, and familiarity with deep learning libraries (TensorFlow, PyTorch) for NLP applications in customer feedback analysis and chatbot optimization.
- Solid understanding of SQL and experience working with customer databases, contact center data warehouses, and CRM systems (e.g., PostgreSQL, MySQL, SQL Server, Salesforce, ServiceNow).
- Familiarity with data engineering tools and frameworks (e.g., Apache Airflow, dbt, Spark, or similar) for building and orchestrating customer data ETL pipelines and real-time streaming analytics.
- (Good to have) Knowledge of data governance, data quality frameworks, and data lake architectures.
- (Good to have) Exposure to business intelligence (BI) tools such as Power BI, Tableau, or Looker for CX dashboarding, customer journey visualization, and executive reporting on customer experience metrics.
- Working knowledge of version control systems (e.g., Git) and collaborative development workflows for customer analytics projects.
- Strong problem-solving skills with customer-centric analytical thinking, and the ability to work independently and as part of cross-functional CX transformation teams.
- Excellent communication and presentation skills, with the ability to explain complex customer analytics concepts to non-technical stakeholders including CX executives, contact center managers, and customer success teams.

Disclaimer: Firstsource follows a fair, transparent, and merit-based hiring process. We never ask for money at any stage. Beware of fraudulent offers and always verify through our official channels or @firstsource.com email addresses.
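Of the CX metrics listed above, NPS has the simplest definition: the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6). A worked sketch with made-up survey scores:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# 3 promoters, 2 passives (7-8), 2 detractors → (3 - 2) / 7 ≈ +14.3
print(round(nps([10, 9, 8, 7, 6, 3, 10]), 1))
```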
Posted 1 month ago
1.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Red & White Education Pvt Ltd, founded in 2008, is Gujarat's leading educational institute. Accredited by NSDC and ISO, we focus on Integrity, Student-Centricity, Innovation, and Unity. Our goal is to equip students with industry-relevant skills and ensure they are employable globally. Join us for a successful career path.

Salary: 30K to 35K CTC

Job Description: Faculties guide students, deliver course materials, conduct lectures, assess performance, and provide mentorship. Strong communication skills and a commitment to supporting students are essential.

Key Responsibilities:
- Deliver high-quality lectures on AI, Machine Learning, and Data Science.
- Design and update course materials, assignments, and projects.
- Guide students on hands-on projects, real-world applications, and research work.
- Provide mentorship and support for student learning and career development.
- Stay updated with the latest trends and advancements in AI/ML and Data Science.
- Conduct assessments, evaluate student progress, and provide feedback.
- Participate in curriculum development and improvements.

Skills & Tools:
- Core Skills: ML, Deep Learning, NLP, Computer Vision, Business Intelligence, AI Model Development, Business Analysis.
- Programming: Python, SQL (Must), Pandas, NumPy, Excel.
- ML & AI Tools: Scikit-learn (Must), XGBoost, LightGBM, TensorFlow, PyTorch (Must), Keras, Hugging Face.
- Data Visualization: Tableau, Power BI (Must), Matplotlib, Seaborn, Plotly.
- NLP & CV: Transformers, BERT, GPT, OpenCV, YOLO, Detectron2.
- Advanced AI: Transfer Learning, Generative AI, Business Case Studies.

Education & Experience Requirements:
- Bachelor's/Master's/Ph.D. in Computer Science, AI, Data Science, or a related field.
- Minimum 1+ years of teaching or industry experience in AI/ML and Data Science.
- Hands-on experience with Python, SQL, TensorFlow, PyTorch, and other AI/ML tools.
- Practical exposure to real-world AI applications, model deployment, and business analytics.
Posted 1 month ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Greetings!

One of our esteemed clients is a Japanese multinational information technology (IT) service and consulting company headquartered in Tokyo, Japan. The company acquired Italy-based Value Team S.p.A. and launched Global One Teams. Join this dynamic, high-impact firm where innovation meets opportunity, and take your career to new heights!

🔍 We Are Hiring: Python with Gen AI
Only Chennai-local candidates will be considered; a face-to-face interview is mandatory.

Skillset:
- Python: 4+ years of experience
- Gen AI: 2 years of experience
- Experience in RAG, vector stores, and Azure OpenAI

Open positions: 4
Location: Chennai (Hybrid)
Interview: F2F, Chennai DLF

Technical skills:
- 5 to 6 years' experience in developing with Python frameworks such as DL, ML, FastAPI, Flask
- At least 3 years of experience in developing generative AI models using Python and relevant frameworks.
- Strong knowledge of machine learning, deep learning, and generative AI concepts and algorithms.
- Proficient in Python and common libraries such as NumPy, Pandas, Matplotlib, and scikit-learn.
- Familiar with version control, testing, debugging, and deployment tools.
- Excellent communication and problem-solving skills.
- Curious and eager to learn new technologies and domains.

Interested candidates, please share your updated resume along with the following details:
- Total Experience:
- Relevant Experience in Python with Gen AI:
- Current Location:
- Current CTC:
- Expected CTC:
- Notice Period:

🔒 We assure you that your profile will be handled with strict confidentiality.
📩 Apply now and be part of this incredible journey.

Thanks,
Syed Mohammad!!
syed.m@anlage.co.in
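The RAG and vector-store skills listed above boil down to one core operation: retrieve the stored document whose embedding is closest to the query embedding, then feed it to the LLM as context. A dependency-free sketch; the three-dimensional "embeddings" and document names are invented for illustration (real systems use model-generated vectors and a proper vector database):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical pre-computed embeddings for three documents.
store = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "api reference": [0.0, 0.2, 0.9],
}
query_vec = [0.85, 0.15, 0.05]  # embedding of e.g. "how do refunds work?"

# Retrieval step: nearest document becomes the context for the LLM prompt.
best = max(store, key=lambda doc: cosine(store[doc], query_vec))
print(best)
```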
Posted 1 month ago
7.0 years
0 Lacs
Delhi, India
On-site
Role Expectations:

Data Collection and Cleaning:
- Collect, organize, and clean large datasets from various sources (internal databases, external APIs, spreadsheets, etc.).
- Ensure data accuracy, completeness, and consistency by cleaning and transforming raw data into usable formats.

Data Analysis:
- Perform exploratory data analysis (EDA) to identify trends, patterns, and anomalies.
- Conduct statistical analysis to support decision-making and uncover insights.
- Use analytical methods to identify opportunities for process improvements, cost reductions, and efficiency enhancements.

Reporting and Visualization:
- Create and maintain clear, actionable, and accurate reports and dashboards for both technical and non-technical stakeholders.
- Design data visualizations (charts, graphs, and tables) that communicate findings effectively to decision-makers.
- Experience with Power BI, Tableau, and Python data visualization libraries such as Matplotlib, Seaborn, Plotly, pyplot, and Pandas.
- Experience in generating descriptive, predictive, and prescriptive insights with Gen AI using MS Copilot in Power BI.
- Experience in prompt engineering and RAG architectures.
- Prepare reports for upper management and other departments, presenting key findings and recommendations.

Collaboration:
- Work closely with cross-functional teams (marketing, finance, operations, etc.) to understand their data needs and provide actionable insights.
- Collaborate with IT and database administrators to ensure data is accessible and well-structured.
- Provide support and guidance to other teams regarding data-related questions or issues.

Data Integrity and Security:
- Ensure compliance with data privacy and security policies and practices.
- Maintain data integrity and assist with implementing best practices for data storage and access.

Continuous Improvement:
- Stay current with emerging data analysis techniques, tools, and industry trends.
- Recommend improvements to data collection, processing, and analysis procedures to enhance operational efficiency.

Qualifications:

Education:
- Bachelor's degree in Data Science, Statistics, Computer Science, Mathematics, or a related field.
- A Master's degree or relevant certifications (e.g., in data analysis, business intelligence) is a plus.

Experience:
- Proven experience as a Data Analyst or in a similar analytical role (typically 7+ years).
- Experience with data visualization tools (e.g., Tableau, Power BI, Looker).
- Strong knowledge of SQL and experience with relational databases.
- Familiarity with data manipulation and analysis tools (e.g., Python, R, Excel, SPSS).
- Experience with Power BI, Tableau, and Python data visualization libraries such as Matplotlib, Seaborn, Plotly, pyplot, and Pandas.
- Experience with big data technologies (e.g., Hadoop, Spark) is a plus.

Technical Skills:
- Proficiency in SQL and data query languages.
- Knowledge of statistical analysis and methodologies.
- Experience with data visualization and reporting tools.
- Knowledge of data cleaning and transformation techniques.
- Familiarity with machine learning and AI concepts is an advantage (for more advanced roles).

Soft Skills:
- Strong analytical and problem-solving abilities.
- Excellent attention to detail and ability to identify trends in complex data sets.
- Good communication skills to present data insights clearly to both technical and non-technical audiences.
- Ability to work independently and as part of a team.
- Strong time management and organizational skills, with the ability to prioritize tasks effectively.
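Anomaly detection during EDA, mentioned in the expectations above, is often just a z-score cutoff: flag values more than a couple of standard deviations from the mean. A sketch with invented daily volumes (in practice this would run over a Pandas series, and the 2-sigma threshold is a judgment call):

```python
import statistics as st

# Hypothetical daily ticket volumes; one spike to surface during EDA.
volumes = [100, 104, 98, 101, 99, 250, 102, 97]

mean, stdev = st.mean(volumes), st.stdev(volumes)
# Flag points more than 2 standard deviations from the mean.
outliers = [v for v in volumes if abs(v - mean) / stdev > 2]
print(outliers)  # → [250]
```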
Posted 1 month ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Summary
We are seeking a highly skilled and motivated Data Analyst with strong Python programming skills to join our growing team. The ideal candidate will be passionate about uncovering insights from data and using those insights to drive business decisions. You will be responsible for collecting, analyzing, and interpreting complex datasets, developing data-driven solutions, and communicating findings to stakeholders.

Responsibilities:
- Collect data from various sources, including databases, APIs, and other data repositories.
- Perform data cleaning, transformation, and manipulation using Python libraries such as Pandas and NumPy.
- Conduct exploratory data analysis (EDA) to identify trends, patterns, and anomalies in the data.
- Develop and implement statistical models and machine learning algorithms using Python libraries like Scikit-learn to solve business problems.
- Create data visualizations using Python libraries such as Matplotlib and Seaborn to communicate insights effectively.
- Build and maintain data pipelines to automate data extraction, transformation, and loading (ETL) processes.
- Collaborate with cross-functional teams, including product, engineering, and marketing, to understand their data needs and provide actionable insights.
- Develop and maintain documentation of data analysis processes and results.
- Stay up-to-date with the latest trends and technologies in data analysis and Python programming.
- Present data findings and recommendations to stakeholders in a clear and concise manner.

Skills And Qualifications
- Bachelor's degree in a quantitative field such as Statistics, Mathematics, Economics, Computer Science, or a related field.
- Proven experience (5+ years) as a Data Analyst.
- Strong proficiency in Python programming, including experience with data analysis libraries such as Pandas, NumPy, and Scikit-learn.
- Experience with data visualization libraries such as Matplotlib and Seaborn.
- Solid understanding of SQL and relational databases.
- Experience with data warehousing and ETL processes is a plus.
- Strong analytical and problem-solving skills.
- Excellent communication and presentation skills.
- Ability to work independently and as part of a team.
- Strong business acumen and the ability to translate data insights into business value.

Preferred Qualifications:
- Master's degree in a relevant field.
- Experience with cloud-based data platforms such as AWS, Azure, or GCP.
- Experience with big data technologies such as Spark or Hadoop.
- Knowledge of statistical modeling techniques.
- Experience with machine learning algorithms.
(ref:hirist.tech)
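The ETL pipeline responsibility above follows a fixed shape: extract raw rows, transform them into clean typed records, load them into a store. A minimal end-to-end sketch using only the standard library's `sqlite3`; the raw rows and table name are invented for illustration:

```python
import sqlite3

# Extract: messy raw rows as they might arrive from a CSV export or API.
raw = [("  Alice ", "2024-01-02", "129.50"),
       ("BOB", "2024-01-03", "80")]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, day TEXT, amount REAL)")

# Transform: normalize names, parse amounts into numbers.
clean = [(name.strip().title(), day, float(amount)) for name, day, amount in raw]

# Load: bulk-insert the cleaned records.
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", clean)

total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)  # → 209.5
```

Production pipelines swap in Pandas for the transform step and a scheduler like Airflow around the whole thing, but the extract-transform-load structure is the same.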
Posted 1 month ago
6.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Data
Qualification: Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, or a related field
Experience: 6 to 10 Year(s)

Skill set:
- Artificial Intelligence / Machine Learning domain knowledge
- Strong working knowledge of the Google Cloud Platform (GCP) and its AI/ML services
- Proven experience in chatbot creation and development using relevant frameworks
- Proven experience in developing and implementing machine learning models.
- Strong programming skills in Python, with expertise in Pandas, NumPy, Scikit-learn, TensorFlow, PyTorch, Keras, Matplotlib, and Seaborn.
- Proficiency in SQL querying and database management.
- Experience with front-end frameworks such as React or Angular, and CSS.
- Experience with back-end frameworks such as Django, Flask, or FastAPI.
- Experience in prompt engineering for large language models (LLMs), including prompt design, optimization, and evaluation.
- Strong problem-solving skills and the ability to translate business requirements into technical solutions.
- Excellent communication and presentation skills, with the ability to explain complex concepts to non-technical audiences.

Job Description
- Deploy and manage AI/ML applications on the Google Cloud Platform (GCP).
- Design, develop, and implement conversational AI solutions using various chatbot frameworks and platforms.
- Design, develop, and optimize prompts for large language models (LLMs) to achieve desired outputs.
- Develop and implement machine learning models using supervised, unsupervised, and reinforcement learning algorithms.
- Utilize Python with Pandas, NumPy, Scikit-learn, TensorFlow, PyTorch, and Keras to build and deploy machine learning solutions.
- Create visualizations using Matplotlib and Seaborn to communicate insights.
- Write and optimize SQL queries to extract and manipulate data from various databases.
- Develop and maintain web applications and APIs using Python frameworks such as Django, Flask, or FastAPI.
- Build user interfaces using JavaScript frameworks such as React or Angular, along with CSS.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Communicate complex technical concepts and findings to non-technical stakeholders through presentations and reports.
(ref:hirist.tech)
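Prompt design and optimization, as described above, usually starts with a parameterized template so the instruction, context, and question can be varied and evaluated independently. A minimal sketch; the template wording and placeholder names are illustrative, not a prescribed format:

```python
# Hypothetical prompt template; the structure matters more than the wording.
TEMPLATE = (
    "You are a support assistant.\n"
    "Answer ONLY from the context below; if it is not there, say 'I don't know'.\n"
    "Context:\n{context}\n"
    "Question: {question}\n"
)

prompt = TEMPLATE.format(
    context="Refunds take 5-7 business days.",
    question="How long do refunds take?",
)
print(prompt)
```

Keeping templates as data rather than inline strings makes A/B evaluation of prompt variants straightforward.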
Posted 1 month ago
2.0 - 4.0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
Skills: Python, PyTorch, AWS, Data Visualization, Machine Learning, ETL
Experience: 2-4 Years
Location: Bangalore (In-office)
Employment Type: Full-Time

About The Role
We are hiring a Junior Data Scientist to join our growing data team in Bangalore. You'll work alongside experienced data professionals to build models, generate insights, and support analytical solutions that solve real business problems.

Responsibilities
- Assist in data cleaning, transformation, and exploratory data analysis (EDA).
- Develop and test predictive models under guidance from senior team members.
- Build dashboards and reports to communicate insights to stakeholders.
- Work with cross-functional teams to implement data-driven initiatives.
- Stay updated with modern data tools, algorithms, and techniques.

Requirements
- 2-4 years of experience in a data science or analytics role.
- Proficiency in Python or R, SQL, and key data libraries (Pandas, NumPy, Scikit-learn).
- Experience with data visualization tools (Matplotlib, Seaborn, Tableau, Power BI).
- Basic understanding of machine learning algorithms and model evaluation.
- Strong problem-solving ability and eagerness to learn.
- Good communication and teamwork skills.
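Model evaluation, listed in the requirements above, depends on holding out test data the model never saw. A dependency-free sketch of a seeded train/test split (`sklearn.model_selection.train_test_split` does the same with more options):

```python
import random

def train_test_split(rows, test_ratio=0.25, seed=0):
    """Shuffle deterministically, then cut into train and test partitions."""
    rows = rows[:]                       # avoid mutating the caller's list
    random.Random(seed).shuffle(rows)    # fixed seed → reproducible split
    cut = int(len(rows) * (1 - test_ratio))
    return rows[:cut], rows[cut:]

train, test = train_test_split(list(range(100)))
print(len(train), len(test))  # → 75 25
```

Fixing the seed keeps experiments comparable across runs; the split must happen before any fitting or scaling to avoid leakage.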
Posted 1 month ago
7.0 - 12.0 years
20 - 35 Lacs
Chennai
Work from Office
Data Engineer Experience Range: 05 - 12 years Location of Requirement: Chennai Desired Candidate Profile: Languages: Python, R, SQL, T-SQL Visualisation: Tableau, Power BI, Matplotlib, Looker Big Data: Hadoop, Spark Skills: Database Performance, Query Tuning, Schema Designing, Dataset Aggregation, Query Optimization
Posted 1 month ago
5.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Data Engineer Experience Range: 05 - 12 years Location of Requirement: Chennai Desired Candidate Profile: Languages: Python, R, SQL, T-SQL Visualisation: Tableau, Power BI, Matplotlib, Looker Big Data: Hadoop, Spark Skills: Database Performance, Query Tuning, Schema Designing, Dataset Aggregation, Query Optimization
Posted 1 month ago
1.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Title: Data Analyst Location: Marol, Mumbai (Work from Office) Company: India On Track Industry: Sports Job Type: Full-time Experience Level: 1-2 Years Joining Date: Within a month About India on Track: India on Track (IOT) has been set up to inculcate a culture of sport amongst the youth and create a platform to learn and participate in various disciplines. Working in a sector that is driven by passion, we are unwavering in our commitment to help India recognize and celebrate the power of sports, be it recreational or professional. IOT wants to equip India's next generation with the tools to help build a healthier, stronger nation that would have its roots as much in sports as other disciplines. We make this simple idea come to life through our grassroots initiatives executed in a secure environment using world-class training and conditioning techniques. To ensure this, IOT partners with top international sporting entities (like NBA Basketball Schools & LaLiga Academy School, amongst others) to bring best-in-class sports thinking and philosophy to India. Each partnership focuses on the amalgamation of the technical expertise of these leaders in world sport and IOT's management experience and vision for India. In India, IOT runs 85+ Centres across 14 Cities, has over 80 Coaches and trains over 20K kids. IOT also runs Residential International Development Programs in Portugal & Spain, and is fast expanding into other regions. About The Role: The Data Analyst is responsible for analyzing and interpreting complex sports data to provide actionable insights. This includes working with athlete performance data, coach analytics, and business metrics to support decision-making across various sports-related operations. The role involves using tools such as SQL, Excel, Tableau, and Power BI to query databases, create dashboards, and generate reports that provide insights into player performance, team dynamics, and business outcomes.
Python serves as an additional tool for automating data processes and performing advanced statistical analyses to uncover deeper insights from the data. Responsibilities 1. Data Collection & Querying (SQL) · Write efficient SQL queries to extract data from relational databases. · Create and optimize complex queries, including joins, subqueries, CTEs, and aggregations. · Ensure accurate data extraction and troubleshoot any issues in data queries. · Work with large datasets and manage database connections to maintain data integrity. 2. Data Cleaning & Transformation (Excel) · Use Excel for data preparation, including cleaning, transforming, and organizing data. · Apply advanced Excel functions (e.g., VLOOKUP, INDEX-MATCH, IF Statements) to manipulate and analyze data. · Build and maintain PivotTables and PivotCharts to summarize and visualize data trends. · Use Power Query and Power Pivot for more advanced data manipulation tasks in Excel. 3. Reporting & Visualization (Tableau & Power BI) · Develop interactive dashboards, reports, and visualizations using Tableau and Power BI. · Design reports that highlight key business metrics, trends, and insights. · Use advanced visualization features like calculated fields (Tableau), DAX measures (Power BI), and dynamic filters. · Automate report refresh cycles to ensure real-time data access in Tableau and Power BI. 4. Automation & Data Integration (Python - Optional but Valuable) · Utilize Python (primarily pandas and matplotlib) for data automation and visualization. · Write scripts to automate routine data processing tasks and data extraction from APIs. · Enhance data transformation processes that are not feasible directly in Excel or SQL. · Implement Python to clean and prepare data prior to creating reports and visualizations. 5. Collaboration & Communication · Partner with business teams to understand reporting needs and translate them into actionable data insights. 
· Present findings through clear visualizations and data-driven recommendations. · Provide ad-hoc data analysis and reporting to support business decisions. Compensation Subject to experience and performance, the expected compensation will fall in the range of 5-8 Lacs Per Annum.
Posted 1 month ago
7.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description In This Role, Your Responsibilities Will Be: Analyze large, complex data sets using statistical methods and machine learning techniques to extract meaningful insights. Develop and implement predictive models and algorithms to solve business problems and improve processes. Create visualizations and dashboards to effectively communicate findings and insights to stakeholders. Work with data engineers, product managers, and other team members to understand business requirements and deliver solutions. Clean and preprocess data to ensure accuracy and completeness for analysis. Prepare and present reports on data analysis, model performance, and key metrics to stakeholders and management. Participate in regular Scrum events such as Sprint Planning, Sprint Review, and Sprint Retrospective. Stay updated with the latest industry trends and advancements in data science and machine learning techniques. Who You Are: Being committed to self-development means looking for ways to build the skills you will need in the future. You must learn and grow from experience. Opportunities will be available, and you must be able to stretch yourself to execute better and be flexible enough to take up new activities. For This Role, You Will Need: Bachelor's degree in Computer Science, Data Science, Statistics, or a related field; a master's degree or higher is preferred. Total 7-10 years of industry experience. More than 5 years of experience in a data science or analytics role, with a strong track record of building and deploying models. Excellent understanding of machine learning techniques and algorithms, such as GPTs, CNN, RNN, k-NN, Naive Bayes, SVM, Decision Forests, etc. Experience with NLP, NLG, and Large Language Models such as GPT, BERT, LLaMA, LaMDA, BLOOM, PaLM, DALL-E, etc. Proficiency in programming languages such as Python or R, and experience with data manipulation libraries (e.g., pandas, NumPy).
Experience with machine learning frameworks and libraries such as TensorFlow and PyTorch. Familiarity with data visualization tools (e.g., Tableau, Power BI, Matplotlib, Seaborn). Experience with SQL and NoSQL databases such as MongoDB, Cassandra, and vector databases. Strong analytical and problem-solving skills, with the ability to work with complex data sets and extract actionable insights. Excellent verbal and written communication skills, with the ability to present complex technical information to non-technical stakeholders. Preferred Qualifications that Set You Apart: Prior experience in the engineering domain would be nice to have. Prior experience working with teams in the Scaled Agile Framework (SAFe) is nice to have. Possession of relevant certifications in data science from reputed universities specializing in AI. Familiarity with cloud platforms; Microsoft Azure is preferred. Ability to work in a fast-paced environment and manage multiple projects simultaneously. Strong analytical and troubleshooting skills, with the ability to resolve issues related to model performance and infrastructure. Our Commitment to Diversity, Equity & Inclusion At Emerson, we are committed to fostering a culture where every employee is valued and respected for their unique experiences and perspectives. We believe a diverse and inclusive work environment contributes to the rich exchange of ideas and diversity of thought that inspires innovation and brings the best solutions to our customers. This philosophy is fundamental to living our company's values and our responsibility to leave the world in a better place. Learn more about our Culture & Values and about Diversity, Equity & Inclusion at Emerson. If you have a disability and are having difficulty accessing or using this website to apply for a position, please contact: idisability.administrator@emerson.com.
Posted 1 month ago
3.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Position: Data Scientist - II Experience Level: 3-6 Years Industry: E-commerce Location: Vikhroli, Mumbai Employment Type: Full-time Advanced Data Analysis: Conduct in-depth analyses of large datasets from multiple sources, such as clickstream data, sales transactions, and user behavior, to uncover actionable insights. Machine Learning: Develop, implement, and maintain sophisticated machine learning models for use cases including recommendations, personalization, customer segmentation, demand forecasting, and price optimization. A/B Testing: Design and analyze experiments to evaluate the impact of new product features, marketing campaigns, and user experiences on business metrics. Data Engineering Collaboration: Work closely with data engineers to ensure robust, accurate, and scalable data pipelines for analysis and model deployment. Cross-functional Collaboration: Partner with product, marketing, and engineering teams to identify data needs, define analytical approaches, and deliver impactful insights. Dashboard Development: Create and maintain dashboards using modern visualization tools to present findings and track key performance metrics. Exploratory Data Analysis: Investigate trends, anomalies, and patterns in data to guide strategy and optimize performance across various business units. Optimization Strategies: Apply statistical and machine learning methods to optimize critical areas such as supply chain operations, customer acquisition, retention strategies, and pricing models. Required Skills Programming: Proficiency in Python (preferred) or R for data analysis and machine learning. SQL Expertise: Advanced skills in querying and managing large datasets. Machine Learning Frameworks: Hands-on experience with tools like Scikit-learn, TensorFlow, or PyTorch. Data Processing: Strong expertise in data wrangling and transformation for model readiness. A/B Testing: Deep understanding of experimental design and statistical inference. 
Visualization: Experience with tools such as Tableau, Power BI, Matplotlib, or Seaborn to create insightful visualizations. Statistics: Strong foundation in probability, hypothesis testing, and predictive modeling techniques. Communication: Exceptional ability to translate technical findings into actionable business insights. Preferred Qualifications Domain Knowledge: Prior experience with e-commerce datasets, including user behavior, transaction data, and inventory management. Big Data: Familiarity with Hadoop, Spark, or BigQuery for managing and analyzing massive datasets. Cloud Platforms: Proficiency with cloud platforms like AWS, Google Cloud Platform (GCP), or Azure for data storage, computation, and model deployment. Business Acumen: Understanding of critical e-commerce metrics such as conversion rates, customer lifetime value (LTV), and customer acquisition costs (CAC). Educational Qualifications Bachelor’s or Master’s degree in Data Science, Computer Science, Mathematics, Statistics, or a related quantitative field. Advanced certifications in machine learning or data science are a plus. About Company Founded in 2012, Purplle has emerged as one of India’s premier omnichannel beauty destinations, redefining the way millions shop for beauty. With 1,000+ brands, 60,000+ products, and over 7 million monthly active users, Purplle has built a powerhouse platform that seamlessly blends online and offline experiences. Expanding its footprint in 2022, Purplle introduced 6,000+ offline touchpoints and launched 8 exclusive stores, strengthening its presence beyond digital. Beyond hosting third-party brands, Purplle has successfully scaled its own D2C powerhouses—FACES CANADA, Good Vibes, Carmesi, Purplle, and NY Bae—offering trend-driven, high-quality beauty essentials. What sets Purplle apart is its technology driven hyper-personalized shopping experience. 
By curating detailed user personas, enabling virtual makeup trials, and delivering tailored product recommendations based on personality, search intent, and purchase behavior, Purplle ensures a unique, customer-first approach. In 2022, Purplle achieved unicorn status, becoming India’s 102nd unicorn, backed by an esteemed group of investors including ADIA, Kedaara, Premji Invest, Sequoia Capital India, JSW Ventures, Goldman Sachs, Verlinvest, Blume Ventures, and Paramark Ventures. With a 3,000+ strong team and an unstoppable vision, Purplle is set to lead the charge in India’s booming beauty landscape, revolutionizing the way the nation experiences beauty.
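The A/B testing duty in the posting above centers on statistical inference over conversion metrics. A common approach is the two-proportion z-test under the normal approximation; this standard-library sketch (function name and the sample counts are illustrative) computes the test statistic and a two-sided p-value:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)       # pooled rate under H0: p_a == p_b
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Standard normal CDF expressed via erf; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: variant B converts 26% vs. control's 20% on 1000 users each.
z, p = two_proportion_z_test(conv_a=200, n_a=1000, conv_b=260, n_b=1000)
```

In practice one would also pre-register the minimum detectable effect and sample size before running the test, rather than peeking at p-values mid-experiment.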
Posted 1 month ago
2.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Organization and Background Established in 1996, Esri India Technologies Pvt. Ltd. (Esri India), the market leader in geographic information system (GIS) software, location intelligence, and mapping solutions in India, helps customers unlock the maximum potential of their data to improve operational and business decisions. It has delivered pioneering enterprise GIS technology, powered by ArcGIS, to more than 6,500 organizations in government, private sector, academia, and non-profit sectors. The company has also introduced ‘Indo ArcGIS’, a unique GIS solution & data offering suited for government organizations. Esri India collaborates with a rich ecosystem of partner organizations to deliver GIS and location intelligence-based solutions. Headquartered in Noida (Delhi NCR), the company has 1 million users in the country and has got Great Place to Work Certified® in 2021, 2022, and 2023. Website: www.esri.in Role overview This position will work closely with customers to understand their needs in order to develop and deliver models for India-specific GeoAI use cases. He/she will be responsible for conceptualizing and developing solutions using ESRI products. Additionally, the role demands representing the organization at conferences and forums, showcasing expertise and promoting ESRI solutions. Should be capable of working independently, exhibiting strong problem-solving skills, and effectively communicating complex geospatial concepts to diverse audiences. Roles & Responsibilities Consult closely with customers to understand their needs. Develop and pitch data science solutions by mapping business problems to machine learning or advanced analytics approaches. Build high-quality analytics systems that solve our customers' business problems using techniques from data mining, statistics and machine learning. Write clean, collaborative and version-controlled code to process big data and streaming data from a variety of sources and types.
Perform feature engineering, model selection and hyperparameter optimization to yield high predictive accuracy and deploy the model to production in a cloud, on-premises or hybrid environment. Implement best practices and patterns for geospatial machine learning and develop reusable technical components for demonstrations and rapid prototyping. Integrate ArcGIS with popular deep learning libraries such as PyTorch. Keep up to date with the latest technology trends in machine and deep learning and incorporate them in project delivery. Support in estimation and feasibility for various RFPs. Desired skillset 2+ years of practical machine learning experience or applicable academic/lab work Proven data science and AI skills with Python, PyTorch and Jupyter Notebooks Experience in building and optimizing supervised and unsupervised machine learning models including deep learning and various other modern data science techniques Expertise in one or more of the following areas: Traditional and deep learning-based computer vision techniques with the ability to develop deep learning models for computer vision tasks (image classification, object detection, semantic and instance segmentation, GANs, super-resolution, image inpainting, and more) Convolutional neural networks such as VGG, ResNet, Faster R-CNN, Mask R-CNN, and others Transformer models applied to computer vision Expertise in 3D deep learning with Point Clouds, meshes, or Voxels with the ability to develop 3D geospatial deep learning models, such as PointCNN, MeshCNN, and more A fundamental understanding of mathematical and machine learning concepts such as calculus, backpropagation, ReLU, Bayes' theorem, Random Forests, time series analysis, etc. Experience with applied statistics concepts.
Ability to perform data extraction, transformation, loading from multiple sources and sinks Experience in data visualization in Jupyter Notebooks using matplotlib and other libraries Experience with hyperparameter-tuning and training models to a high level of accuracy Experience in LLMs is preferred. Self-motivated, life-long learner. Non-negotiable skills Master's degree in RS, GIS, Geoinformatics, or a related field with knowledge of RS & GIS Knowledge and experience of ESRI products like ArcGIS Pro, ArcGIS Online, ArcGIS Enterprise, etc Experience with Image analysis and Image processing techniques in SAR, Multispectral and Hyperspectral imagery. Strong communication skills, including to non-technical audiences Should have strong Python coding skills. Should be open to travel and ready to work at client side (India).
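The hyperparameter-tuning duty above boils down to searching a grid of candidate settings and keeping the one that scores best on held-out validation data. A toy standard-library sketch of that loop, tuning a single decision threshold (all names and the validation set are invented for illustration; real work would use e.g. scikit-learn's `GridSearchCV` over model hyperparameters):

```python
def accuracy(threshold, data):
    """Fraction of (score, label) pairs a threshold classifier gets right."""
    return sum((score >= threshold) == label for score, label in data) / len(data)

def grid_search_threshold(val_data, grid):
    """Exhaustive grid search: return the threshold with the best validation accuracy."""
    return max(grid, key=lambda t: accuracy(t, val_data))

# Toy validation set: scores above ~0.5 correspond to the positive class.
val = [(0.9, True), (0.8, True), (0.6, True), (0.4, False), (0.3, False), (0.1, False)]
best = grid_search_threshold(val, [0.1, 0.3, 0.5, 0.7, 0.9])
```

The same pattern scales up directly: replace the threshold with a hyperparameter tuple, the accuracy function with cross-validated model scoring, and the grid with a parameter lattice or random samples.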
Posted 1 month ago
0 years
0 Lacs
Nilambūr
On-site
Position: AI/ML Trainer in Python Type: Part-Time Location: Nilambur, Malappuram About the Role: We are seeking an experienced AI/ML Trainer proficient in Python to join our team. This role is ideal for a candidate passionate about teaching, guiding learners, and helping them gain practical, hands-on skills in AI and machine learning. As a trainer, you will deliver engaging sessions, create learning materials, and provide mentorship to learners. Key Responsibilities: Deliver Training Sessions: Conduct engaging and effective training sessions covering AI/ML fundamentals, Python programming, and advanced AI/ML concepts. Curriculum Development: Collaborate on designing course content, tutorials, and exercises for a comprehensive curriculum, including machine learning, deep learning, and AI applications. Hands-On Guidance: Guide learners through hands-on projects and coding exercises, providing support and insights to help them achieve practical skills. Mentorship: Provide personalized feedback, conduct one-on-one sessions as needed, and support learners throughout their educational journey. Assessment & Evaluation: Prepare assessments, quizzes, and other evaluation materials to track and assess student progress. Stay Updated: Keep up-to-date with advancements in AI/ML and Python, incorporating new trends and tools into the curriculum. Required Skills and Qualifications: Technical Expertise: Strong proficiency in Python, with experience in machine learning libraries such as TensorFlow, PyTorch, scikit-learn, and Keras. Experience in AI/ML: Solid knowledge of AI and machine learning concepts, including supervised/unsupervised learning, neural networks, deep learning, natural language processing, and data pre-processing. Teaching Experience: Prior experience in teaching, tutoring, or training in a technical field is highly desirable. Ability to explain complex concepts in an easy-to-understand manner. Project-Based Learning Approach: Familiarity with project-based learning and the ability to guide students through real-world projects. Strong Communication Skills: Excellent verbal and written communication skills, with a focus on clarity and student engagement. Adaptability: Ability to adapt teaching methods to suit diverse learning styles and skill levels. Preferred Qualifications: Bachelor's degree in Computer Science, Data Science, Artificial Intelligence, or a related field. Industry experience in AI/ML projects or applications. Experience with data visualization tools like Matplotlib or Seaborn. Knowledge of deployment and production workflows for machine learning models (e.g., Flask, Docker, etc.). Job Types: Part-time, Contractual / Temporary, Freelance Contract length: 3 months Pay: ₹500.00 per day Expected hours: No more than 2 per week Schedule: Day shift / Night shift Work Location: In person Application Deadline: 06/07/2025 Expected Start Date: 07/07/2025
Posted 1 month ago
35.0 years
0 Lacs
Chennai
On-site
About us One team. Global challenges. Infinite opportunities. At Viasat, we're on a mission to deliver connections with the capacity to change the world. For more than 35 years, Viasat has helped shape how consumers, businesses, governments and militaries around the globe communicate. We're looking for people who think big, act fearlessly, and create an inclusive environment that drives positive impact to join our team. What you'll do Parse and manipulate raw data leveraging tools including R, Python, Tableau, with a strong preference for Python Ingest, understand, and fully synthesize large amounts of data from multiple sources to build a full comprehension of the story Analyze large data sets, while finding the truth in data, and develop efficient processes for data analysis and simple, elegant visualization Develop and automate daily, monthly, quarterly reporting for multiple business areas within Viasat Identify data gaps, research methods to fill them, and provide recommendations Gather and analyze facts and devise solutions to administrative problems Monitor big data with Business Intelligence tools, simulation, modeling, and statistics Experience building intuitive and actionable dashboards and data visualizations that drive business decisions (Tableau/Power BI/Grafana) The day-to-day Develop and automate daily, monthly, quarterly reporting for multiple business areas within Viasat Identify data gaps, research methods to fill them, and provide recommendations Gather and analyze facts and devise solutions to administrative problems Monitor big data with Business Intelligence tools, simulation, modeling, and statistics What you'll need 3-4 years SQL experience 3-4 years data analysis experience with emphasis in reporting 3-4 years Python experience in data cleansing, statistics, and data visualization packages (e.g., pandas, scikit-learn, matplotlib, seaborn, plotly, etc.) 6-8 years dashboarding experience.
Tableau/Power BI/Grafana experience or equivalent with data visualization tools Excellent judgment, critical-thinking, and decision-making skills; can balance attention to detail with swift execution Able to identify stakeholders, build relationships, and influence others to drive progress Excellent analytical and problem solving skills Strong oral and written communication skills Strong statistical background What will help you on the job Strong preference for personal projects and work in Python Data Visualization experience Data Science experience EEO Statement Viasat is proud to be an equal opportunity employer, seeking to create a welcoming and diverse environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, ancestry, physical or mental disability, medical condition, marital status, genetics, age, or veteran status or any other applicable legally protected status or characteristic. If you would like to request an accommodation on the basis of disability for completing this on-line application, please click here.
Posted 1 month ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Experience: 15 to 20 years Primary skills: Gen AI Architect, building Gen AI solutions, coding, AI/ML background, data engineering, Azure or AWS cloud. Job Description: The Generative Solutions Architect will be responsible for designing and implementing cutting-edge generative AI models and systems. He/she will collaborate with data scientists, engineers, product managers, and other stakeholders to develop innovative AI solutions for various applications including natural language processing (NLP), computer vision, and multimodal learning. This role requires a deep understanding of AI/ML theory, architecture design, and hands-on expertise with the latest generative models. Key Responsibilities: GenAI application conceptualization and design: Understand the use cases under consideration, conceptualize the application flow, understand the constraints, and design accordingly to get the most optimized results. Deep knowledge of developing and implementing applications using Retrieval-Augmented Generation (RAG)-based models, which combine the power of large language models (LLMs) with information retrieval techniques. Prompt Engineering: Be adept at prompt engineering and its various nuances, such as one-shot, few-shot, and chain-of-thought prompting; have hands-on knowledge of implementing agentic workflows and be aware of agentic AI concepts. NLP and Language Model Integration - Apply advanced NLP techniques to preprocess, analyze, and extract meaningful information from large textual datasets. Integrate and leverage large language models such as LLaMA 2/3, Mistral, or similar offline LLM models to address project-specific goals. Small LLMs / Tiny LLMs: Familiarity with small language models (SLMs) such as Phi-3 and OpenELM, their performance characteristics and resource requirements, and the nuances of how use-case applications can consume them.
Collaboration with Interdisciplinary Teams - Collaborate with cross-functional teams, including linguists, developers, and subject matter experts, to ensure seamless integration of language models into the project workflow. Text / Code Generation and Creative Applications - Explore creative applications of large language models, including text / code generation, summarization, and context-aware responses. Skills & Tools Programming Languages - Proficiency in Python for data analysis, statistical modeling, and machine learning. Machine Learning Libraries - Hands-on experience with machine learning libraries such as scikit-learn, Huggingface, TensorFlow, and PyTorch. Statistical Analysis - Strong understanding of statistical techniques and their application in data analysis. Data Manipulation and Analysis - Expertise in data manipulation and analysis using Pandas and NumPy. Database Technologies - Familiarity with vector databases such as ChromaDB and Pinecone, and experience working with relational (SQL) and non-relational (NoSQL) databases. Data Visualization Tools - Proficient in data visualization tools such as Tableau, Matplotlib, or Seaborn. Familiarity with cloud platforms (AWS, Google Cloud, Azure) for model deployment and scaling. Communication Skills - Excellent communication skills with the ability to convey technical concepts to non-technical audiences.
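The RAG responsibility described in this posting has a simple core loop: embed documents, retrieve the ones most similar to the query, and prepend them as context for the LLM. This standard-library sketch substitutes bag-of-words cosine similarity for real embeddings and skips the vector database entirely; the function names, documents, and prompt layout are all illustrative assumptions:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two Counter term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    """Rank documents by similarity to the query and keep the top k."""
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())), reverse=True)
    return ranked[:k]

def build_rag_prompt(query, docs):
    """Prepend retrieved context so the LLM can ground its answer in it."""
    context = "\n".join(retrieve(query, docs, k=1))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "ArcGIS Pro supports training deep learning models on imagery.",
    "The cafeteria menu changes every Monday.",
]
prompt = build_rag_prompt("How do I train deep learning models in ArcGIS Pro?", docs)
```

A production RAG stack replaces the Counter vectors with dense embeddings from a model, the sort with an approximate-nearest-neighbor index (e.g. in ChromaDB or Pinecone), and sends the assembled prompt to the LLM.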
Posted 1 month ago
5.0 - 8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are currently seeking a Senior Python Developer to join our team for an exciting project that involves designing and building RESTful APIs for seamless communication between different components. In this role, you will be responsible for developing and maintaining microservices architecture using containerization tools such as Docker, AWS ECS, and ECR. Additionally, you will be required to demonstrate solutions to cross-functional teams and take ownership of the scope of work for successful project delivery. Responsibilities Develop and maintain microservices architecture using containerization tools such as Docker, AWS ECS, and ECR Design and build RESTful APIs for seamless communication between different components Present and organize demo sessions to demonstrate solutions to cross-functional teams Collaborate with cross-functional teams for successful project delivery Take ownership of the scope of work for successful project delivery Ensure consistency and scalability of applications and dependencies into containers Requirements 5-8 years of experience in software development using Python Proficient in AWS services such as Lambda, DynamoDB, CloudFormation, and IAM Strong experience in designing and building RESTful APIs Expertise in microservices architecture and containerization using Docker, AWS ECS, and ECR Ability to present and organize demo sessions to demonstrate solutions Excellent communication skills and ability to collaborate with cross-functional teams Strong sense of responsibility and ownership over the scope of work Nice to have Experience in DevOps tools such as Jenkins and GitLab for continuous integration and deployment Familiarity with NoSQL databases such as MongoDB and Cassandra Experience in data analysis and visualization using Python libraries such as Pandas and Matplotlib
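The REST duties in this posting would normally be implemented with a framework plus Docker, as the listing says; purely as a dependency-free sketch of the request/response shape, here is a JSON health-check endpoint built on Python's standard library (route, field names, and the `demo` service name are invented for illustration):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Minimal REST-style JSON response for a health-check route.
        body = json.dumps({"status": "ok", "service": "demo"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep request logging quiet

server = HTTPServer(("127.0.0.1", 0), HealthHandler)  # port 0 picks a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_address[1]}/health"
payload = json.loads(urllib.request.urlopen(url).read())
server.shutdown()
```

The framework version of this is a few lines in FastAPI or Flask; what carries over is the contract: a GET route returning a JSON body with an explicit content type and status code, which is exactly what container orchestrators probe for liveness.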
Posted 1 month ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are currently seeking a Senior Python Developer to join our team for an exciting project that involves designing and building RESTful APIs for seamless communication between different components. In this role, you will be responsible for developing and maintaining microservices architecture using containerization tools such as Docker, AWS ECS, and ECR. Additionally, you will be required to demonstrate solutions to cross-functional teams and take ownership of the scope of work for successful project delivery. Responsibilities Develop and maintain microservices architecture using containerization tools such as Docker, AWS ECS, and ECR Design and build RESTful APIs for seamless communication between different components Present and organize demo sessions to demonstrate solutions to cross-functional teams Collaborate with cross-functional teams for successful project delivery Take ownership of the scope of work for successful project delivery Ensure consistency and scalability of applications and dependencies into containers Requirements 5-8 years of experience in software development using Python Proficient in AWS services such as Lambda, DynamoDB, CloudFormation, and IAM Strong experience in designing and building RESTful APIs Expertise in microservices architecture and containerization using Docker, AWS ECS, and ECR Ability to present and organize demo sessions to demonstrate solutions Excellent communication skills and ability to collaborate with cross-functional teams Strong sense of responsibility and ownership over the scope of work Nice to have Experience in DevOps tools such as Jenkins and GitLab for continuous integration and deployment Familiarity with NoSQL databases such as MongoDB and Cassandra Experience in data analysis and visualization using Python libraries such as Pandas and Matplotlib
Posted 1 month ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Senior ML Engineer Minimum 4 to 8+ years of experience in ML development in a product-based company Location: Bangalore (Onsite) Why should you choose us? Rakuten Symphony is a Rakuten Group company that provides global B2B services for the mobile telco industry and enables next-generation, cloud-based, international mobile services. Building on the technology Rakuten used to launch Japan's newest mobile network, we are taking our mobile offering global. To support our ambitions to provide an innovative cloud-native telco platform for our customers, Rakuten Symphony is looking to recruit and develop top talent from around the globe. We are looking for individuals to join our team across all functional areas of our business – from sales to engineering, support functions to product development. Let's build the future of mobile telecommunications together! Required Skills and Expertise: Candidates must have experience working in a product-based company. Build, train, and optimize deep learning models with TensorFlow, Keras, PyTorch, and Transformers. Manipulate and analyse large-scale datasets using Python, Pandas, NumPy, and Dask. Apply advanced fine-tuning techniques (Full Fine-Tuning, PEFT) and strategies to large language and vision models. Implement and evaluate classical machine learning algorithms using scikit-learn, statsmodels, XGBoost, etc. Develop and deploy scalable APIs for ML models using FastAPI. Perform data visualization and exploratory data analysis with Matplotlib, Seaborn, Plotly, and Bokeh. Collaborate with cross-functional teams to deliver end-to-end ML solutions. Deploy machine learning models for diverse business applications, both cloud-native and on-premises. Hands-on experience with Docker for containerization and Kubernetes for orchestration and scalable deployment of ML models. Familiarity with CI/CD pipelines and best practices for deploying and monitoring ML models in production.
- Stay current with the latest advancements in machine learning, deep learning, and AI.
Our commitment to you:
Rakuten Group's mission is to contribute to society by creating value through innovation and entrepreneurship. By providing high-quality services that help our users and partners grow, we aim to advance and enrich society. To fulfill our role as a Global Innovation Company, we are committed to maximizing both corporate and shareholder value.
RAKUTEN SHUGI PRINCIPLES:
Our worldwide practices describe specific behaviours that make Rakuten unique and united across the world. We expect Rakuten employees to model these 5 Shugi Principles of Success.
- Always improve, always advance. Only be satisfied with complete success - Kaizen.
- Be passionately professional. Take an uncompromising approach to your work and be determined to be the best.
- Hypothesize - Practice - Validate - Shikumika. Use the Rakuten Cycle to succeed in unknown territory.
- Maximize Customer Satisfaction. The greatest satisfaction for workers in a service industry is to see their customers smile.
- Speed!! Speed!! Speed!! Always be conscious of time. Take charge, set clear goals, and engage your team.
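As a rough illustration of the classical-ML side of the role above, a scikit-learn train-and-evaluate loop might look like the sketch below. The synthetic dataset and model choice are assumptions for illustration only, standing in for whatever the real business problem requires:

```python
# Illustrative sketch: train and evaluate a classical ML model with
# scikit-learn. The synthetic dataset is a placeholder for real data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data stands in for production data.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit a simple baseline model and measure held-out accuracy.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"test accuracy: {accuracy:.2f}")
```

The same fitted `model` object is what a FastAPI endpoint would wrap for serving, as the posting's deployment requirement suggests.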
Posted 1 month ago
0.0 - 3.0 years
0 Lacs
BTM Layout, Bengaluru, Karnataka
On-site
Job Title: Python Developer – Machine Learning & AI (2–3 Years Experience)
Job Summary: We are seeking a skilled and motivated Python Developer with 2 to 3 years of experience in Machine Learning and Artificial Intelligence. The ideal candidate will have hands-on experience in developing, training, and deploying machine learning models, and should be proficient in Python and associated data science libraries. You will work with our data science and engineering teams to build intelligent solutions that solve real-world problems.
Key Responsibilities:
- Develop and maintain machine learning models using Python.
- Work on AI-driven applications, including predictive modeling, natural language processing, and computer vision (based on project requirements).
- Collaborate with cross-functional teams to understand business requirements and translate them into ML solutions.
- Preprocess, clean, and transform data for training and evaluation.
- Perform model training, tuning, evaluation, and deployment using tools like scikit-learn, TensorFlow, or PyTorch.
- Write modular, efficient, and testable code.
- Document processes, models, and experiments clearly for team use and future reference.
- Stay updated with the latest trends and advancements in AI and machine learning.
Required Skills:
- 2–3 years of hands-on experience with Python programming.
- Solid understanding of machine learning algorithms (supervised, unsupervised, and reinforcement learning).
- Experience with libraries such as scikit-learn, pandas, NumPy, Matplotlib, and Seaborn.
- Exposure to deep learning frameworks like TensorFlow, Keras, or PyTorch.
- Good understanding of data structures and algorithms.
- Experience with model evaluation techniques and performance metrics.
- Familiarity with Jupyter Notebooks, version control (Git), and cloud platforms (AWS, GCP, or Azure) is a plus.
- Strong analytical and problem-solving skills.
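The "preprocess, clean, and transform data" responsibility above can be sketched with pandas. The column names and imputation rules here are hypothetical, chosen only to show the pattern:

```python
# Illustrative preprocessing sketch with pandas: impute missing values
# and one-hot encode a categorical column before model training.
# Column names and imputation choices are hypothetical.
import pandas as pd

raw = pd.DataFrame({
    "age": [25, None, 47, 31],
    "city": ["Bengaluru", "Chennai", None, "Bengaluru"],
    "label": [0, 1, 1, 0],
})

# Impute: median for the numeric column, an explicit "unknown"
# category for the categorical one.
clean = raw.assign(
    age=raw["age"].fillna(raw["age"].median()),
    city=raw["city"].fillna("unknown"),
)

# One-hot encode the categorical feature for model consumption.
features = pd.get_dummies(clean, columns=["city"])
print(features.columns.tolist())
```

A real pipeline would wrap these steps (e.g. in a scikit-learn `Pipeline`) so the identical transforms apply at training and inference time.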
Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Mathematics, or a related field.
- Experience with deploying ML models using Flask, FastAPI, or Docker.
- Knowledge of MLOps and model lifecycle management is an advantage.
- Understanding of NLP or Computer Vision is a plus.
Job Type: Full-time
Pay: Up to ₹700,000.00 per year
Benefits: Health insurance
Schedule: Day shift, Monday to Friday
Ability to commute/relocate: BTM Layout, Bengaluru, Karnataka: Reliably commute or planning to relocate before starting work (Required)
Application Question(s):
- Solid understanding of machine learning algorithms (supervised, unsupervised, and reinforcement learning).
- Experience with libraries such as scikit-learn, pandas, NumPy, Matplotlib, and Seaborn.
- Exposure to deep learning frameworks like TensorFlow, Keras, or PyTorch.
- Familiarity with Jupyter Notebooks, version control (Git), and cloud platforms (AWS, GCP, or Azure) is a plus.
- Experience with deploying ML models using Flask, FastAPI, or Docker.
- What is your current CTC (in LPA)?
- What is your expected CTC (in LPA)?
- What is your notice period?
Location: BTM Layout, Bengaluru, Karnataka (Required)
Work Location: In person
Application Deadline: 06/07/2025
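On the model-lifecycle point mentioned above, a trained model is commonly persisted with joblib so a separate Flask or FastAPI process can load and serve it. A minimal sketch, with the artifact path and model choice as illustrative assumptions:

```python
# Illustrative model-lifecycle sketch: train a model, persist it with
# joblib, reload it, and confirm the reloaded copy predicts identically.
# This is the handoff point to a Flask/FastAPI serving layer.
import tempfile
from pathlib import Path

import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=8, random_state=0)
model = RandomForestClassifier(n_estimators=20, random_state=0).fit(X, y)

# Persist to disk; a real pipeline would version this artifact.
path = Path(tempfile.mkdtemp()) / "model.joblib"
joblib.dump(model, path)

# Reload as a serving process would, then check behavioural parity.
reloaded = joblib.load(path)
same = bool((model.predict(X) == reloaded.predict(X)).all())
print("predictions identical:", same)
```

The serving layer would then call `reloaded.predict(...)` inside a request handler rather than retraining on every request.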
Posted 1 month ago
5.0 years
0 Lacs
India
Remote
About the Company
Papigen is a fast-growing global technology services company, delivering innovative digital solutions through deep industry experience and cutting-edge expertise. We specialize in technology transformation, enterprise modernization, and dynamic areas like Cloud, Big Data, Java, React, DevOps, and more. Our client-centric approach combines consulting, engineering, and data science to help businesses evolve and scale efficiently.
About the Role
We are looking for a Python Full Stack Developer with strong Azure DevOps and AI integration expertise to support the automation of Kanban workflows and real-time analytics in a scaled agile environment. You will design end-to-end automation for case management, build performance dashboards, and integrate AI-powered solutions using Azure OpenAI, Dataverse, and Power BI. The role requires a deep understanding of Python development, experience with Azure services, and the ability to collaborate with cross-functional teams to deliver high-quality solutions.
Key Responsibilities:
- Develop Python applications to automate Kanban case management integrated with Azure DevOps (ADO)
- Build and maintain REST APIs with access control for project and workload metrics
- Integrate Azure OpenAI services to automate delay analysis and generate custom summaries
- Design interactive dashboards using Python libraries (Pandas, Plotly, Dash) and Power BI
- Store, manage, and query data using Dataverse for workflow reporting and updates
- Leverage Microsoft Graph API and Azure SDKs for system integration and access control
- Collaborate with IT security, PMs, and engineering teams to gather requirements and deliver automation solutions
- Continuously improve security workflows, report generation, and system insights using AI and data modeling
Required Skills & Experience:
- 5+ years of Python development experience with FastAPI or Flask
- Hands-on experience with Azure DevOps, including its REST APIs
- Proficiency in Azure OpenAI, Azure SDKs, and Microsoft Graph API
- Strong understanding of RBAC (Role-Based Access Control) and permissions management
- Experience with Power BI, Dataverse, and Python data visualization libraries (Matplotlib, Plotly, Dash)
- Prior experience in Agile teams and familiarity with Scrum/Kanban workflows
- Excellent communication and documentation skills; able to explain technical concepts to stakeholders
Benefits and Perks:
- Opportunity to work with leading global clients
- Flexible work arrangements with remote options
- Exposure to modern technology stacks and tools
- Supportive and collaborative team environment
- Continuous learning and career development opportunities
Skills: pandas, azure sdks, microsoft graph api, rest apis, ai integration, microsoft power bi, power bi, dash, python, azure devops, fastapi, dataverse, rbac, azure openai, plotly, flask, matplotlib
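The RBAC requirement in this role can be sketched in plain Python. The role names and permission strings below are illustrative assumptions, not from the posting; a real system would load them from Dataverse, Azure AD groups, or a policy store:

```python
# Minimal role-based access control (RBAC) sketch.
# Roles and permissions here are hypothetical placeholders.

ROLE_PERMISSIONS = {
    "viewer": {"read_metrics"},
    "pm": {"read_metrics", "edit_board"},
    "admin": {"read_metrics", "edit_board", "manage_users"},
}

def has_permission(role: str, permission: str) -> bool:
    """Return True if the given role grants the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

def require(role: str, permission: str) -> None:
    """Raise PermissionError when the role lacks the permission."""
    if not has_permission(role, permission):
        raise PermissionError(f"role {role!r} lacks {permission!r}")
```

In an API context, `require(...)` would typically run inside a request dependency or decorator so every metrics endpoint enforces the caller's role before returning data.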
Posted 1 month ago