5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Fusemachines
Fusemachines is a leading AI strategy, talent, and education services provider. Founded by Sameer Maskey, Ph.D., Adjunct Associate Professor at Columbia University, Fusemachines has a core mission of democratizing AI. With a presence in four countries (Nepal, the United States, Canada, and the Dominican Republic) and more than 350 full-time employees, Fusemachines has made world-class AI education available, accessible, and affordable to students around the world through its AI educational program. Fusemachines seeks to bring its global expertise in AI to transform companies around the world.

About the role: This is a full-time position.

Job Description: We are seeking a talented and experienced Data Analyst with expertise in Power BI and the Azure public cloud to join our team. The Data Analyst will be responsible for gathering, interpreting, analyzing, and visualizing large and complex datasets to provide insights and support data-driven decision-making within the organization.

Responsibilities:
- Data Collection and Modeling: Gather data from various sources such as databases, spreadsheets, APIs, and other relevant sources to support business requirements.
- Data Cleaning and Preprocessing: Review and organize data to ensure accuracy, consistency, and completeness. This may involve handling missing values, removing outliers, and transforming data into a suitable format for analysis.
- Data Analysis: Apply statistical techniques and analytical methods to examine data and identify patterns, trends, relationships, and insights that inform business decisions. This will involve using tools like SQL, Python, or specialized data analysis software.
- Data Visualization: Design, build, and maintain visual representations of data through charts, graphs, and dashboards to communicate insights effectively to stakeholders, using data visualization tools like Tableau, Power BI, and Python libraries (e.g., Matplotlib, Seaborn). (An illustrative sketch follows this posting.)
- Reporting: Summarize and present findings from data analysis in a clear and concise manner. This includes creating reports, slide decks, or presentations to communicate insights and recommendations to non-technical stakeholders.
- Data Governance, including Quality Assurance: Ensure the accuracy, consistency, and integrity of data by performing quality checks and validation procedures. This involves identifying and resolving data discrepancies or errors.
- Data Mining: Identify patterns, trends, and correlations in large datasets to extract meaningful information and support business objectives. This may involve using techniques like clustering, classification, regression, or association analysis.
- Statistical Analysis: Apply statistical methods and hypothesis testing to draw meaningful conclusions from data and make data-driven recommendations.
- Identify and implement best practices for data visualization, reporting, and analysis.
- Collaborating with Teams: Work closely with cross-functional teams, such as business analysts, data engineers, and decision-makers, to understand their requirements, provide analytical support, identify key metrics, and contribute to data-driven initiatives that solve business challenges.
- Continuous Learning: Stay updated with industry trends, new analytical techniques, and tools to enhance data analysis capabilities and improve efficiency.

Requirements:
- Bachelor's or master's degree in a quantitative field such as statistics, mathematics, or computer science.
- At least 5 years of experience in data analytics, with a focus on business intelligence and data visualization.
- At least 3 years of experience using Power BI to design, build, and maintain dashboards and reports.
- Experience with Tableau is nice to have.
- Strong SQL skills and experience working with complex data sets and an enterprise data warehouse.
- Experience with data modeling and schema design.
- Strong analytical and problem-solving skills with the ability to translate complex data into actionable insights.
- Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams and convey complex technical concepts and insights to non-technical stakeholders.
- Demonstrated leadership experience, with the ability to mentor and develop junior analysts.
- Experience with data governance, data quality, and data integrity efforts.
- Attention to detail: being meticulous is critical in data analysis. Small errors or inaccuracies can lead to misleading results, so data analysts should have a keen eye for detail and double-check their work.
- Strong project management skills, with the ability to manage multiple projects and priorities simultaneously.

If you are a data-driven individual with strong leadership skills and experience with Power BI and Tableau, we encourage you to apply for this exciting opportunity.

Equal Opportunity Employer: race, color, religion, sex, sexual orientation, gender identity, national origin, age, genetic information, disability, protected veteran status, or any other legally protected group status. Powered by JazzHR.
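The analysis and visualization duties above mention SQL/Python and libraries like Matplotlib and Seaborn. Purely as an illustration (not part of the posting), a minimal pandas/Matplotlib sketch of that kind of workflow might look like the following; the file name and column names are hypothetical.

```python
# Illustrative only: the file name and column names below are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

# Load and clean a dataset: parse dates, drop duplicates, fill missing revenue.
df = pd.read_csv("monthly_sales.csv", parse_dates=["order_date"])
df = df.drop_duplicates().fillna({"revenue": 0})

# Aggregate revenue by month to surface a trend for stakeholders.
monthly = df.set_index("order_date")["revenue"].resample("M").sum()

# Visualize the trend as a simple line chart and save it for a report.
ax = monthly.plot(kind="line", title="Monthly Revenue Trend")
ax.set_xlabel("Month")
ax.set_ylabel("Revenue")
plt.tight_layout()
plt.savefig("monthly_revenue_trend.png")
```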
Posted 2 days ago
175.0 years
0 Lacs
Gurugram, Haryana, India
On-site
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

The AIM (Analytics, Investment & Marketing Enablement) team – a part of GCS Marketing – is the analytical engine that enables the Global Commercial business portfolio of American Express. Accelerating growth momentum, increasing profitability, and powering up our value proposition are key objectives for this organization. The team enables the GCS Marketing business by providing actionable insights to drive business strategy and growth.

This Analyst (Band 30) role would be based in Gurgaon, India, and would be focused on driving sentinel analytics spanning channels and product offerings from Amex. The incumbent will be responsible for driving innovative analytical solutions and strategies that help in gaming prevention, thereby driving profitable acquisitions for the commercial business. S/he will be challenged with designing and creating world-class prospect marketing analytics by leveraging machine learning and advanced methodologies. A very important focus for the role shall be leveraging data science, quantitatively determining the value, deriving insights, and then assuring the insights are leveraged to create positive impact that causes a meaningful difference to the business.

Key Responsibilities include:
· Effectively engage and deliver results in a cross-functional collaborative environment, i.e., in partnership with key stakeholders including the functional channel owners, the business partners/end-users of data science solutions, and technology partners.
· Explore usage and implementation of data mining techniques, including regression analysis, clustering, and decision trees (a brief illustrative sketch follows this posting).
· Exceptional execution skills – be able to resolve issues, identify opportunities, define success metrics, and make things happen.
· Prioritize efforts to help the team focus on the most impactful opportunities.
· High degree of organization, individual initiative, and personal accountability.

Qualifications:
· Hands-on exposure to data science tools and techniques such as Big Data, PySpark, Hive, Scala, and Python is required.
· Master's degree in a quantitative field (e.g., Statistics, Engineering, Physics, Mathematics, Economics) or a PhD is required.
· Ability to learn and quickly adapt to the ever-evolving analytics landscape is preferred.
· Proficiency and experience in applying cutting-edge statistical and machine learning techniques to business problems, leveraging external thinking (from academia and/or other industries) to develop best-in-class data science solutions.
· Strong communication and interpersonal skills, and the ability to build and retain strong working relationships.
· Strong analytical/conceptual thinking acumen to solve business problems and articulate key findings to senior leaders/stakeholders in a succinct and concise manner.
· Ability to project-manage effectively and manage several concurrent projects through collaboration across teams/geographies.

We back you with benefits that support your holistic well-being so you can be and deliver your best.
This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
· Competitive base salaries
· Bonus incentives
· Support for financial well-being and retirement
· Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
· Flexible working model with hybrid, onsite, or virtual arrangements depending on role and business need
· Generous paid parental leave policies (depending on your location)
· Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
· Free and confidential counseling support through our Healthy Minds program
· Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
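The qualifications above ask for hands-on exposure to PySpark and data mining techniques such as clustering. As an illustration only (not part of the posting), a minimal PySpark clustering sketch might look like the following; the feature names, values, and choice of k are hypothetical.

```python
# Illustrative only: feature names, values, and k are hypothetical; in practice
# the data might come from Hive tables rather than an in-memory list.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("prospect-segmentation").getOrCreate()

# A toy prospect-level dataset.
df = spark.createDataFrame(
    [(0.12, 4500.0, 3), (0.80, 120.0, 1), (0.45, 2300.0, 2), (0.05, 9800.0, 5)],
    ["response_rate", "avg_spend", "products_held"],
)

# Spark ML expects a single vector column of features.
assembler = VectorAssembler(
    inputCols=["response_rate", "avg_spend", "products_held"], outputCol="features"
)
features = assembler.transform(df)

# Cluster prospects into segments; k=2 is arbitrary for the sketch.
model = KMeans(k=2, seed=42, featuresCol="features").fit(features)
model.transform(features).select("response_rate", "avg_spend", "prediction").show()

spark.stop()
```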
Posted 2 days ago
0.0 - 4.0 years
0 Lacs
Karnataka
On-site
Ecity - Pt Park, Karnataka, India
Section: D&AI - CX COE, TVSM Industrial, SST
Job posted on: Aug 06, 2025
Employee Type: White Collar
Experience range (Years): 0 - 0
Group Company: TVS Motor
Designation: Divisional Head - Data Science & Analytics
Office Location: E-City, Bangalore

Position description:

About TVS / Who are we?
TVS Motor Company is a reputed two- and three-wheeler manufacturer globally, championing progress through Sustainable Mobility with four state-of-the-art manufacturing facilities in Hosur, Mysuru, and Nalagarh in India and Karawang in Indonesia. Rooted in our 100-year legacy of Trust, Value, Passion for Customers, and Exactness, we take pride in making internationally aspirational products of the highest quality through innovative and sustainable processes. We are the only two-wheeler company to have received the prestigious Deming Prize. Our products lead in their respective categories in the J.D. Power IQS and APEAL surveys. We have been ranked the No. 1 company in the J.D. Power Customer Service Satisfaction Survey for four consecutive years. Our group company Norton Motorcycles, based in the United Kingdom, is one of the most emotive motorcycle brands in the world. Our subsidiaries in the personal e-mobility space, Swiss E-Mobility Group (SEMG) and EGO Movement, have a leading position in the e-bike market in Switzerland. TVS Motor Company endeavours to deliver the most superior customer experience across the 80 countries in which we operate. For more information, please visit www.tvsmotor.com.

Group Company: Norton Motorcycles
Designation: Divisional Head - Data Science and Analytics

About Norton:
Norton is a premium motorcycle business operating out of the UK. This global legacy brand has been revived after its acquisition by TVS Motor and is set for a global launch with a wide product range. It will offer premium, concierge-like service to its customers across global geographies and multiple distribution models.

About the Role:
The role cuts across data science and analytics projects spanning all customer journey stages of Discover, Shop, Buy, and Own, covering both business growth workstreams and customer experience workstreams. It involves building and managing a team of data scientists, analysts, and BI developers, and participating in end-to-end ML project development and deployments that require feasibility analysis, design, development, validation, and application of state-of-the-art data science solutions. You will provide leadership to establish world-class ML lifecycle management processes and push the state of the art in the application of NLP, text information retrieval, recommender systems, social media analytics, object detection and tracking, and voice AI solutions across customer experience projects.

- Leverage and enhance applications utilizing deep learning neural networks for use cases including text mining, computer vision, speech, and voice-to-text AI.
- Partner with data/ML engineers and vendor partners for input data pipeline development and ML model automation.
- Deploy real-time ML models, expose ML outputs through APIs, and deliver solutions to production through the software development lifecycle (an illustrative model-building sketch follows this posting).
- Develop and implement a comprehensive analytics strategy and problem-solving framework to support the organization's business objectives.
- Guide the team to analyse data from various sources, including internal systems, third-party tools, and market research (as necessary), to identify trends, patterns, and opportunities.
- Collaborate with business stakeholders to define key performance indicators (KPIs) and develop dashboards and reports to track and measure performance against these metrics.
- Provide actionable insights and recommendations to stakeholders based on data analysis, helping them make informed decisions and drive business growth and customer satisfaction.
- Effectively communicate complex analytical findings and insights to both technical and non-technical stakeholders through presentations, reports, and visualizations.
- Recommend business improvement areas, product/feature development, or further analysis requirements based on insights from various analyses. Drive Design of Experiments methodology for all improvements.
- Lead the organization towards self-serve BI and Gen AI-based auto-insight generation.
- Work with the Data COE and Digital Defense COE to drive data governance initiatives and data quality standards, ensuring data accuracy and implementing data security measures.
- Collaborate with the Tech COE to ensure data infrastructure, systems, and tools are optimized to support efficient data collection, storage, and analysis.
- Foster a culture of data-driven decision making within the organization, promoting the use of analytics to drive continuous improvement and innovation.
- Demonstrate the ability to collaborate with internal and external stakeholders.
- Mentor the team to write research papers and patents, and to participate in open-source contributions.

Primary qualification criteria:
- 10+ years of applied machine learning experience in fields such as text information retrieval, natural language processing (NLP), recommender systems, deep learning, and optimization; experience in the domains of computer vision and voice AI is good to have. 14 years in the analytics field.
- Postgraduate/graduate in Maths, Computer Science, Statistics, Operations Research, or a related field with a minimum of 4 years of relevant experience, or a Master's in Maths, Computer Science, Statistics, Operations Research, or a related field.
- Solid understanding of classification, clustering, association, regression, and forecasting algorithms, and deep learning (DL) techniques such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), long short-term memory (LSTM), and artificial neural networks (ANNs).
- Hands-on experience in model building, validation, and productionizing.
- Proven track record of delivering impactful insights and recommendations based on data analysis.
- Demonstrated experience in developing and implementing analytics strategies in a corporate environment.
- Expert Python programmer with a strong hold on SQL and extreme proficiency with the SciPy stack (e.g., NumPy, pandas, scikit-learn, Matplotlib).
- Experienced in ML lifecycle management and MLOps tools and frameworks.
- Proficient in cloud technologies and services (Azure Databricks, ADF, Databricks MLflow).
- Proficient in communicating technical findings to non-technical stakeholders.
- Experienced in publishing research papers/filing patents in the domain of AI.
- Good to have: experience using existing platforms such as Dialogflow, Rasa, etc.

Functional competency: Strategic Thinking, Detail Oriented, Process Improvement
Behavioural competency: Business Acumen, People Management, Interpersonal Relationships
Educational qualifications preferred – Category: Master's Degree; Degree: Bachelor of Engineering (BE/B.Tech)
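The criteria above call for hands-on model building and validation with the SciPy stack. For illustration only, a minimal scikit-learn sketch of that build-and-validate loop is shown below; the synthetic features and labels are stand-ins, not TVS data.

```python
# Illustrative only: synthetic features stand in for real customer-journey data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 5))                  # e.g., engagement features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # e.g., a purchase-intent label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit a baseline classifier and validate on held-out data before any
# productionizing step (API exposure, monitoring, retraining).
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```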
Posted 3 days ago
11.0 years
0 Lacs
Hyderābād
On-site
Principal Software Engineer – Protocols

About Nasuni
Nasuni is a profitable, growing SaaS data infrastructure company reinventing enterprise file storage and data management in an AI-driven world. We power the data infrastructure of the world's most innovative enterprises. Backed by Vista Equity Partners, our engineers aren't working behind the scenes — they're building what's next with AI. Our platform lets businesses seamlessly store, access, protect, and unlock AI-driven insights from exploding volumes of unstructured file data. As an engineer here, you'll help build AI-powered infrastructure trusted by 900+ global customers, including Dow, Mattel, and Autodesk. Nasuni is headquartered in Boston, USA, with offices in Cork, Ireland and London, UK, and we are starting an India Innovation Center in Hyderabad, India to leverage the IT talent available in India. The company's recent annual revenue is $160M, growing at 25% CAGR. We have a hybrid work culture: three days a week working from the Hyderabad office during core working hours and two days working from home.

As a Principal Software Engineer – Protocols at Nasuni, you will play a key role in enhancing our cloud-scale NAS platform. Your responsibilities will include:
- Participating in and leading requirements analysis, architecture design, design reviews, and other work related to expanding Nasuni's platform, protocols, and operating system.
- Developing and maintaining software and services that power our NAS appliance, delivering high performance and reliability to customers.
- Building and enhancing High Availability (HA) and upgrade mechanisms to ensure seamless, non-disruptive customer experiences.
- Investigating and resolving bugs and defects reported by QA, customer support, and the development team.

Required Skills and Experience
- 11+ years of experience building and operating large-scale, highly available distributed systems or cloud-based services.
- Proven expertise in C and C++ programming, with a strong focus on performance and reliability.
- Solid understanding of Linux clustering technologies such as Pacemaker, Corosync, etc.
- Proficient in object-oriented design and SDK development in both on-premises and cloud environments.
- Deep knowledge of data structures, algorithms, multi-threaded systems, I/O subsystems, and Linux internals including XFS/EXT filesystems.
- Strong grasp of operating systems, distributed systems architecture, and cloud service fundamentals.
- Experience working with hypervisor platforms such as ESX, Hyper-V, KVM, or OpenStack.
- Ability to work with technical partners to translate ambiguous requirements into well-defined, actionable designs and component-level specifications.
- Excellent written and verbal communication skills with the ability to clearly present complex technical topics to diverse audiences.
- Ability to lead technical implementation efforts, including rapid prototyping and delivery of proof-of-concept solutions.
- Demonstrated ability to collaborate with and support team members, contributing to team knowledge around tools, technologies, and development best practices.

The storage network protocols we use include: NFS, SMB, CIFS, and Samba.

It's an added bonus if you have...
- A Computer Science degree or similar experience that includes system design, design principles, and code architecture.
- API experience: our cloud-native platform connects to enterprise applications and public storage via software APIs, so prior API creation and utilization is essential.
- Experience with and contributions to open-source communities (a plus).
- Prior Postgres experience: PostgreSQL is used as the backbone of our system, so it is helpful.
- Exposure to cloud storage backend integration with AWS or Azure.
- Knowledge of containerization with Docker and Kubernetes.
- Other high-level languages, including Golang, Java, or Perl.

Why Work at Nasuni – Hyderabad?
As part of our commitment to your well-being and growth, Nasuni offers competitive benefits designed to support every stage of your life and career:
- Competitive compensation programs
- Flexible time off and leave policies
- Comprehensive health and wellness coverage
- Hybrid and flexible work arrangements
- Employee referral and recognition programs
- Professional development and learning support
- Inclusive, collaborative team culture
- Modern office spaces with team events and perks
- Retirement and statutory benefits as per Indian regulations

To all recruitment agencies: Nasuni does not accept agency resumes. Please do not forward resumes to our job boards, Nasuni employees, or any other company location. Nasuni is not responsible for any fees related to unsolicited resumes.

Nasuni is proud to be an equal opportunity employer. We are committed to fostering a diverse, inclusive, and respectful workplace where every team member can thrive. All qualified applicants will receive consideration for employment without regard to race, religion, caste, color, sex, gender identity or expression, sexual orientation, disability, age, national origin, or any other status protected by applicable laws in India or the country of employment. We celebrate individuality and are committed to building a workplace that reflects the diversity of the communities we serve. If you require accommodation during the recruitment process, please let us know.

This privacy notice relates to information collected (whether online or offline) by Nasuni Corporation and our corporate affiliates (collectively, "Nasuni") from or about you in your capacity as a Nasuni employee, independent contractor/service provider, or as an applicant for an employment or contractor relationship with Nasuni.
Posted 4 days ago
3.0 years
0 Lacs
Hyderābād
On-site
Software Engineer – Protocols

About Nasuni
Nasuni is a profitable, growing SaaS data infrastructure company reinventing enterprise file storage and data management in an AI-driven world. We power the data infrastructure of the world's most innovative enterprises. Backed by Vista Equity Partners, our engineers aren't working behind the scenes — they're building what's next with AI. Our platform lets businesses seamlessly store, access, protect, and unlock AI-driven insights from exploding volumes of unstructured file data. As an engineer here, you'll help build AI-powered infrastructure trusted by 900+ global customers, including Dow, Mattel, and Autodesk. Nasuni is headquartered in Boston, USA, with offices in Cork, Ireland and London, UK, and we are starting an India Innovation Center in Hyderabad, India to leverage the IT talent available in India. The company's recent annual revenue is $160M, growing at 25% CAGR. We have a hybrid work culture: three days a week working from the Hyderabad office during core working hours and two days working from home.

The Position
Nasuni is growing our Storage Network Protocols team and is seeking a Software Engineer with strong expertise in Linux/CentOS environments. This role involves designing and owning core technologies focused on high availability and non-disruptive upgrade mechanisms in distributed systems. The ideal candidate is passionate about building scalable, resilient storage solutions and thrives in a hands-on engineering environment. You'll contribute directly to critical system components and help shape the evolution of Nasuni's platform as it scales.

As a Software Engineer at Nasuni, you will play a key role in enhancing our cloud-scale NAS platform. Your responsibilities will include:
- Collaborating on requirements analysis and design reviews to evolve Nasuni's core platform and operating system.
- Developing and maintaining software and services that power our NAS appliance, delivering high performance and reliability to customers.
- Building and enhancing High Availability (HA) and upgrade mechanisms to ensure seamless, non-disruptive customer experiences.
- Investigating and resolving bugs and defects reported by QA, customer support, and the development team.

Required Skills and Experience
- 3+ years of experience building and operating large-scale, highly available distributed systems or cloud-based services.
- Proven expertise in C and C++ programming, with a strong focus on performance and reliability.
- Solid understanding of Linux clustering technologies such as Pacemaker, Corosync, etc.
- Proficient in object-oriented design and SDK development in both on-premises and cloud environments.
- Deep knowledge of data structures, algorithms, multi-threaded systems, I/O subsystems, and Linux internals including XFS/EXT filesystems.
- Experience working with hypervisor platforms such as ESX, Hyper-V, KVM, or OpenStack.
- Excellent written and verbal communication skills with the ability to clearly present complex technical topics to diverse audiences.
- Demonstrated ability to collaborate with and support team members, contributing to team knowledge around tools, technologies, and development best practices.

It's an added bonus if you have...
- A Computer Science degree or similar experience that includes system design, design principles, and code architecture.
- API experience: our cloud-native platform connects to enterprise applications and public storage via software APIs, so prior API creation and utilization is essential.
- Experience with and contributions to open-source communities (a plus).
- Prior Postgres experience: PostgreSQL is used as the backbone of our system, so it is helpful.
- Exposure to cloud storage backend integration with AWS or Azure.
- Knowledge of containerization with Docker and Kubernetes.
- Other high-level languages, including Golang, Java, or Perl.

Experience:
- BE/B.Tech or ME/M.Tech in Computer Science or Electronics and Communications, or MCA.
- 3 to 5 years' previous experience in the industry.
- We are looking for an experienced C++/Linux software engineer to expand our distributed file system team.

To all recruitment agencies: Nasuni does not accept agency resumes. Please do not forward resumes to our job boards, Nasuni employees, or any other company location. Nasuni is not responsible for any fees related to unsolicited resumes.

Nasuni is proud to be an equal opportunity employer. We are committed to fostering a diverse, inclusive, and respectful workplace where every team member can thrive. All qualified applicants will receive consideration for employment without regard to race, religion, caste, color, sex, gender identity or expression, sexual orientation, disability, age, national origin, or any other status protected by applicable laws in India or the country of employment. We celebrate individuality and are committed to building a workplace that reflects the diversity of the communities we serve. If you require accommodation during the recruitment process, please let us know.

This privacy notice relates to information collected (whether online or offline) by Nasuni Corporation and our corporate affiliates (collectively, "Nasuni") from or about you in your capacity as a Nasuni employee, independent contractor/service provider, or as an applicant for an employment or contractor relationship with Nasuni.
Posted 4 days ago
0 years
6 - 9 Lacs
Hyderābād
On-site
Solenis is a leading global producer of specialty chemicals, delivering sustainable solutions for water-intensive industries, including consumer, industrial, institutional, food and beverage, and pool and spa water markets. Owned by Platinum Equity, our innovative portfolio includes advanced water treatment chemistries, process aids, functional additives, and state-of-the-art monitoring and control systems. These technologies enable our customers to optimize operations, enhance product quality, protect critical assets, and achieve their sustainability goals.

At our Global Excellence Center (GEC) in Hyderabad, we support Solenis' global operations by driving excellence in IT, analytics, finance, and other critical business functions. Located in the heart of the IT hub, the GEC offers a dynamic work environment with strong career development opportunities in a rapidly growing yet stable organization. Employees benefit from world-class infrastructure, including an on-campus gym, recreation facilities, creche services, and convenient access to public transport.

Headquartered in Wilmington, Delaware, Solenis operates 69 manufacturing facilities worldwide and employs over 16,100 professionals across 130 countries. Recognized as a 2025 US Best Managed Company for the third consecutive year, Solenis is committed to fostering a culture of safety, diversity, and professional growth. For more information about Solenis, please visit www.solenis.com.

We're Hiring: Business Support Administrator-1
Location: Hyderabad, India – Hybrid
Full-Time | Permanent Position

What you need to be successful:

Data Warehousing Development
- Design and develop data warehouse schemas (star/snowflake schema).
- Build and manage Snowflake objects: databases, schemas, tables, views, stages, file formats, and sequences (see the illustrative sketch after this list).
- Implement ELT/ETL pipelines for structured and semi-structured data (e.g., JSON, Avro, Parquet).

Data Integration
- Integrate data from various sources (e.g., on-premise, cloud, third-party APIs).
- Use Snowpipe for real-time/continuous data ingestion.
- Work with tools like SQL Server, Coalesce, Informatica, Talend, dbt, Matillion, or Apache Airflow.

Performance Optimization
- Optimize SQL queries and Snowflake virtual warehouses for performance and cost.
- Implement clustering keys, materialized views, result caching, and query profiling.
- Monitor and fine-tune auto-scaling and auto-suspend settings.

Security and Governance
- Implement role-based access control (RBAC) and data masking policies.
- Ensure data encryption, privacy, and compliance with security policies.
- Set up audit logging and monitor user activity.

Collaboration and Reporting
- Work closely with data analysts, engineers, and business teams to define data requirements.
- Provide data marts and data models to support dashboards and reporting tools (e.g., Tableau, Power BI, Looker).

Automation and CI/CD
- Use Terraform, CloudFormation, or Snowflake CLI for infrastructure as code.
- Integrate Snowflake pipelines with GitHub, Jenkins, or other CI/CD tools.
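As an illustration of the Snowflake tasks listed above (creating objects and loading staged files), here is a minimal sketch using the snowflake-connector-python package; the account, credentials, and object names are placeholders, and a real pipeline would typically add Snowpipe or an orchestration tool on top.

```python
# Illustrative only: account, credentials, and object names are placeholders,
# not Solenis' actual environment.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ANALYTICS_WH", database="ANALYTICS_DB", schema="RAW",
)
cur = conn.cursor()
try:
    # A CSV file format and stage; Parquet/JSON loads would add
    # MATCH_BY_COLUMN_NAME or a transform in the COPY statement.
    cur.execute("CREATE OR REPLACE FILE FORMAT csv_fmt TYPE = 'CSV' SKIP_HEADER = 1")
    cur.execute("CREATE OR REPLACE STAGE raw_stage FILE_FORMAT = csv_fmt")

    # Target table in a simple fact-table layout.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS fact_orders (
            order_id NUMBER, customer_id NUMBER,
            order_date DATE, amount NUMBER(12, 2)
        )
    """)

    # Bulk-load staged files; Snowpipe would automate this for continuous ingestion.
    cur.execute("COPY INTO fact_orders FROM @raw_stage")
finally:
    cur.close()
    conn.close()
```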
Some benefits of working with us:
- Access to a huge array of internal and external training courses on our learning system (free)
- Access to self-paced language training (free)
- Birthday or wedding anniversary gift of INR 1500
- Charity work once a year, to give back to the community
- Company car and phone, if required for the role
- Competitive health and wellness benefit plan
- Continuous professional development with numerous opportunities for growth
- Creche facility
- Employee Business Resource Groups (EBRGs)
- Electric car charging stations
- Hybrid work arrangement (e.g., 3 days in office)
- Internet allowance
- No-meeting Fridays
- Parking on site (free)
- Relocation assistance available
- Staff hangout spaces, with games like carrom and chess
- Transport by cab if working the midnight – 7am shift
- Well connected to public transport, only a 10-minute walk to the office

We understand that candidates will not meet every single desired job requirement. If your experience looks a little different from what we've identified and you think you can bring value to the role, we'd love to learn more about you. Solenis is constantly growing. Come and grow your career with us.

At Solenis, we understand that our greatest asset is our people. That is why we offer competitive compensation and numerous opportunities for professional growth and development. So, if you are interested in working for a world-class company and enjoy solving complex challenges, consider joining our team.
Posted 4 days ago
12.0 years
7 - 11 Lacs
Bengaluru
Remote
AVEVA is creating software trusted by over 90% of leading industrial companies.

Job Title: Principal Technologist, Dev - AI Core Services Development
Location: Bangalore
Employment Type: Regular, Full-Time
Reports To: R&D Senior Manager

We are looking for passionate and skilled Software Developers to join our Core AI Services team. In this role, you will help design, develop, and scale AI-enabling platform services and public APIs that are secure, reliable, and cloud-native. These services will act as foundational building blocks for AI adoption across AVEVA's product portfolio and partner ecosystem. You will be part of a Scrum team building innovative, standards-compliant, secure, and production-grade AI capabilities, with a builder mindset: rapid prototyping and continuous improvement with the agility of a start-up.

Key Responsibilities:
- Build scalable, fault-tolerant cloud-native services on Microsoft Azure, ensuring high performance and reliability.
- Develop secure, well-documented public APIs and SDKs for consumption by internal and external developers.
- Collaborate with cross-functional teams to deliver end-to-end solutions across data pipelines, orchestration, and service APIs.
- Embed robust security controls to protect sensitive data and ensure secure access to AI services.
- Contribute to design reviews, code reviews, and architectural discussions to ensure engineering excellence.
- Mentor junior developers, encourage continuous learning, and contribute to a culture of innovation.
- Work with multiple teams to create AI solutions, including AI model deployment, training, and AI tooling development.

AI & Cloud Expertise:
- Experience working with Large Language Models (LLMs) and understanding of trade-offs between performance, cost, and capability.
- Understanding of Retrieval-Augmented Generation (RAG), agent orchestration, prompt engineering, and tool calling.
- Familiarity with AI standards such as Model Context Protocol (MCP) and Agent2Agent (A2A).
- Strong knowledge of or experience working with various ML algorithms (regression, classification, clustering, deep learning).
- Knowledge of AI ethics and regulations (e.g., NIST AI RMF, EU AI Act), and commitment to responsible AI development.
- Fluent in developing code using AI tools such as GitHub Copilot; must be able to use prompt engineering to carry out multiple development tasks.
- Familiar with AI orchestration, including tools like AI Foundry and/or Semantic Kernel.
- Experience with tools for automated testing and evaluation of AI outputs is a plus.
- Experience in Python and AI frameworks/tools such as PyTorch and TensorFlow.

Core Skills and Qualifications:
- 12+ years of experience in software engineering, preferably in platform or cloud-native service development using Microsoft and .NET technologies.
- Hands-on experience with Microsoft Azure and associated PaaS services (e.g., Azure Functions, AKS, API Management).
- Strong expertise in RESTful API design, versioning, testing, and lifecycle management.
- Proficient in securing APIs, managing authentication/authorization, and data privacy practices.
- Excellent problem-solving skills, with the ability to analyse complex technical challenges and propose scalable solutions.
- Experience working in Agile teams and collaborating across global R&D locations.
- Demonstrated ability to mentor junior team members, fostering a culture of continuous learning and innovation.
- Demonstrated experience with AI frameworks, tools, and Python.

R&D at AVEVA
Our global team of 2000+ developers works on an incredibly diverse portfolio of over 75 industrial automation and engineering products, which cover everything from data management to 3D design. AI and cloud are at the centre of our strategy, and we have over 150 patents to our name. Our track record of innovation is no fluke – it's the result of a structured and deliberate focus on learning, collaboration and inclusivity. If you want to build applications that solve big problems, join us. Find out more: aveva.com/en/about/careers/r-and-d-careers/

Why Join AVEVA?
At AVEVA, we are unlocking the power of industrial intelligence to create a more sustainable and efficient world. The AVEVA Connect platform is at the heart of that transformation. As a leader in Core AI Services, you will help shape how AI is delivered at scale across industries. Join us in driving the next wave of industrial innovation.

India benefits include: gratuity, medical and accidental insurance, very attractive leave entitlement, emergency leave days, childcare support, maternity, paternity and adoption leaves, an education assistance program, home office set-up support (for hybrid roles), and well-being support.

It's possible we're hiring for this position in multiple countries, in which case the above benefits apply to the primary location. Specific benefits vary by country, but our packages are similarly comprehensive. Find out more: aveva.com/en/about/careers/benefits/

Hybrid working
By default, employees are expected to be in their local AVEVA office three days a week, but some positions are fully office-based. Roles supporting particular customers or markets are sometimes remote.

Hiring process
Interested? Great! Get started by submitting your cover letter and CV through our application portal. AVEVA is committed to recruiting and retaining people with disabilities. Please let us know in advance if you need reasonable support during your application process. Find out more: aveva.com/en/about/careers/hiring-process

About AVEVA
AVEVA is a global leader in industrial software with more than 6,500 employees in over 40 countries. Our cutting-edge solutions are used by thousands of enterprises to deliver the essentials of life – such as energy, infrastructure, chemicals, and minerals – safely, efficiently, and more sustainably. We are committed to embedding sustainability and inclusion into our operations, our culture, and our core business strategy. Learn more about how we are progressing against our ambitious 2030 targets: sustainability-report.aveva.com/ Find out more: aveva.com/en/about/careers/

AVEVA requires all successful applicants to undergo and pass a drug screening and comprehensive background check before they start employment. Background checks will be conducted in accordance with local laws and may, subject to those laws, include proof of educational attainment, employment history verification, proof of work authorization, criminal records, identity verification, and a credit check. Certain positions dealing with sensitive and/or third-party personal data may involve additional background check criteria. AVEVA is an Equal Opportunity Employer.
We are committed to being an exemplary employer with an inclusive culture, developing a workplace environment where all our employees are treated with dignity and respect. We value diversity and the expertise that people from different backgrounds bring to our business. AVEVA provides reasonable accommodation to applicants with disabilities where appropriate. If you need reasonable accommodation for any part of the application and hiring process, please notify your recruiter. Determinations on requests for reasonable accommodation will be made on a case-by-case basis.
Posted 4 days ago
3.0 years
6 - 7 Lacs
Chennai
On-site
Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast.

Job Summary
Responsible for contributing to the development and deployment of machine learning algorithms. Evaluates accuracy and functionality of machine learning algorithms as a part of a larger team. Contributes to translating application requirements into machine learning problem statements. Analyzes and evaluates solutions both internally generated as well as third party supplied. Contributes to developing ways to use machine learning to solve problems and discover new products, working on a portion of the problem and collaborating with more senior researchers as needed. Works with moderate guidance in own area of knowledge.

Job Description

Core Responsibilities

About the Role:
We are seeking an experienced Data Scientist to join our growing Operational Intelligence team. You will play a key role in building intelligent systems that help reduce alert noise, detect anomalies, correlate events, and proactively surface operational insights across our large-scale streaming infrastructure. You'll work at the intersection of machine learning, observability, and IT operations, collaborating closely with Platform Engineers, SREs, Incident Managers, Operators and Developers to integrate smart detection and decision logic directly into our operational workflows. This role offers a unique opportunity to push the boundaries of AI/ML in large-scale operations. We welcome curious minds who want to stay ahead of the curve, bring innovative ideas to life, and improve the reliability of streaming infrastructure that powers millions of users globally.
What You'll Do:
- Design and tune machine learning models for event correlation, anomaly detection, alert scoring, and root cause inference (a minimal illustrative anomaly-detection sketch follows this posting).
- Engineer features to enrich alerts using service relationships, business context, change history, and topological data.
- Apply NLP and ML techniques to classify and structure logs and unstructured alert messages.
- Develop and maintain real-time and batch data pipelines to process alerts, metrics, traces, and logs.
- Use Python, SQL, and time-series query languages (e.g., PromQL) to manipulate and analyze operational data.
- Collaborate with engineering teams to deploy models via API integrations, automate workflows, and ensure production readiness.
- Contribute to the development of self-healing automation, diagnostics, and ML-powered decision triggers.
- Design and validate entropy-based prioritization models to reduce alert fatigue and elevate critical signals.
- Conduct A/B testing, offline validation, and live performance monitoring of ML models.
- Build and share clear dashboards, visualizations, and reporting views to support SREs, engineers, and leadership.
- Participate in incident postmortems, providing ML-driven insights and recommendations for platform improvements.
- Collaborate on the design of hybrid ML + rule-based systems to support dynamic correlation and intelligent alert grouping.
- Lead and support innovation efforts, including POCs, POVs, and exploration of emerging AI/ML tools and strategies.
- Demonstrate a proactive, solution-oriented mindset with the ability to navigate ambiguity and learn quickly.
- Participate in on-call rotations and provide operational support as needed.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Statistics, or a related field.
- 3+ years of experience building and deploying ML solutions in production environments.
- 2+ years working with AIOps, observability, or real-time operations data.
- Strong coding skills in Python (including pandas, NumPy, scikit-learn, PyTorch, or TensorFlow).
- Experience working with SQL, time-series query languages (e.g., PromQL), and data transformation in pandas or Spark.
- Familiarity with LLMs, prompt engineering fundamentals, or embedding-based retrieval (e.g., sentence-transformers, vector DBs).
- Strong grasp of modern ML techniques, including gradient boosting (XGBoost/LightGBM), autoencoders, clustering (e.g., HDBSCAN), and anomaly detection.
- Experience managing structured and unstructured data, and building features from logs, alerts, metrics, and traces.
- Familiarity with real-time event processing using tools like Kafka, Kinesis, or Flink.
- Strong understanding of model evaluation techniques, including precision/recall trade-offs, ROC, AUC, and calibration.
- Comfortable working with relational (PostgreSQL), NoSQL (MongoDB), and time-series (InfluxDB, Prometheus) databases.
- Ability to collaborate effectively with SREs and platform teams, and to participate in Agile/DevOps workflows.
- Clear written and verbal communication skills to present findings to technical and non-technical stakeholders.
- Comfortable working across Git, Confluence, JIRA, and collaborative agile environments.

Nice to Have:
- Experience building or contributing to an AIOps platform (e.g., Moogsoft, BigPanda, Datadog, Aisera, Dynatrace, BMC, etc.).
- Experience working in streaming media, OTT platforms, or large-scale consumer services.
- Exposure to Infrastructure as Code (Terraform, Pulumi) and modern cloud-native tooling.
- Working experience with Conviva, Touchstream, Harmonic, New Relic, Prometheus, and event-based alerting tools.
- Hands-on experience with LLMs in operational contexts (e.g., classification of alert text, log summarization, retrieval-augmented generation).
- Familiarity with vector databases (e.g., FAISS, Pinecone, Weaviate) and embeddings-based search for observability data.
- Experience using MLflow, SageMaker, or Airflow for ML workflow orchestration.
- Knowledge of LangChain, Haystack, RAG pipelines, or prompt templating libraries.
- Exposure to MLOps practices (e.g., model monitoring, drift detection, explainability tools like SHAP or LIME).
- Experience with containerized model deployment using Docker or Kubernetes.
- Use of JAX, Hugging Face Transformers, or LLaMA/Claude/Command-R models in experimentation.
- Experience designing APIs in Python or Go to expose models as services.
- Cloud proficiency in AWS/GCP, especially for distributed training, storage, or batch inferencing.
- Contributions to open-source ML or DevOps communities, or participation in AIOps research/benchmarking efforts.
- Certifications in cloud architecture, ML engineering, or data science specialization.

Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law.

Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees. We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That's why we provide an array of options, expert guidance, and always-on tools that are personalized to meet the needs of your reality – to help support you physically, financially, and emotionally through the big milestones and in your everyday life. Please visit the compensation and benefits summary on our careers site for more details.

Education
Bachelor's Degree. While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience.

Relevant Work Experience
2-5 Years
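The role above centers on anomaly detection over operational metrics. For illustration only, a minimal scikit-learn sketch of unsupervised anomaly detection is shown below; the synthetic error-rate and latency values are invented stand-ins for real telemetry.

```python
# Illustrative only: synthetic error-rate/latency values stand in for telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[0.01, 120.0], scale=[0.005, 15.0], size=(500, 2))
spikes = rng.normal(loc=[0.20, 900.0], scale=[0.05, 80.0], size=(5, 2))
metrics = np.vstack([normal, spikes])  # columns: error rate, latency (ms)

# Unsupervised detector; contamination is a rough guess at the anomaly rate.
detector = IsolationForest(contamination=0.01, random_state=0).fit(metrics)
labels = detector.predict(metrics)        # -1 = anomalous, 1 = normal
scores = detector.score_samples(metrics)  # lower = more anomalous

print("flagged points:", int((labels == -1).sum()))
```

In a production AIOps setting, the anomaly scores would typically feed an alert-scoring or correlation layer rather than being consumed directly.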
Posted 4 days ago
2.0 years
2 - 5 Lacs
India
On-site
Job description
We are seeking a talented and motivated AI/ML Engineer to design, develop, and deploy machine learning models and AI-driven solutions that address complex challenges. The ideal candidate will collaborate with cross-functional teams to innovate, optimize processes, and deliver intelligent solutions that align with business goals.

Key Responsibilities:
- Model Development: Design and implement machine learning models and algorithms for classification, regression, clustering, and recommendation systems. Build and train deep learning models for tasks like image recognition, NLP, and time-series forecasting.
- Data Processing: Gather, preprocess, and analyze structured and unstructured data. Ensure data quality, integrity, and scalability for machine learning pipelines.
- Deployment and Optimization: Deploy ML models to production using frameworks like TensorFlow, PyTorch, or Scikit-learn. Monitor, optimize, and fine-tune model performance. (A minimal illustrative sketch follows this posting.)
- AI Integration: Integrate AI/ML models into applications, APIs, or cloud-based platforms. Collaborate with software developers and data engineers to implement solutions.
- Research and Innovation: Stay updated on the latest AI/ML trends, tools, and technologies. Conduct experiments to prototype innovative solutions.
- Collaboration: Work with product teams to understand business requirements and translate them into AI/ML solutions. Present findings, insights, and recommendations to stakeholders.

Qualifications:
- Educational Background: Bachelor's or Master's in Computer Science, Data Science, Engineering, Mathematics, or a related field.
- Technical Skills: Proficiency in programming languages like Python, R, or Java. Experience with ML frameworks (e.g., TensorFlow, PyTorch, Scikit-learn). Knowledge of cloud platforms (AWS, Azure, Google Cloud) for AI/ML deployment. Familiarity with big data technologies (e.g., Hadoop, Spark).
- Experience: 2+ years of experience in AI/ML development or a related role. Strong understanding of algorithms, statistics, and data structures.

Job Type: Full-time
Pay: ₹20,000.00 - ₹45,000.00 per month
Benefits: Paid sick time
Work Location: In person
Expected Start Date: 08/08/2025
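The deployment responsibility above involves training a model and shipping it to production. As an illustration only, a minimal scikit-learn sketch of training a pipeline and persisting it for serving is shown below; the file name and the generated data are hypothetical.

```python
# Illustrative only: synthetic data and a hypothetical file name; a real system
# would train on business data and serve the artifact behind an API or batch job.
import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=10, random_state=7)

# Bundle preprocessing and the model so both ship to production together.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X, y)

# Persist the trained artifact; a serving layer can later reload it.
joblib.dump(pipeline, "model.joblib")
model = joblib.load("model.joblib")
print(model.predict(X[:3]))
```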
Posted 4 days ago
1.0 years
1 - 1 Lacs
India
On-site
Our Culture & Values:
We'd describe our culture as human, friendly, engaging, supportive, agile, and super collaborative. At Kainskep Solutions, our five values underpin everything we do, from how we work to how we delight and deliver to our customers. Our values are #TeamMember, #Ownership, #Innovation, #Challenge, and #Collaboration.

What makes a great team? A diverse team! Don't be put off if you don't tick all the boxes; we know from research that candidates may not apply if they don't feel they are 100% there yet. The essential experience we need is the ability to engage clients and build strong, effective relationships; if you don't tick the rest, we would still love to talk. We're committed to creating a diverse and inclusive team.

What you'll bring:
- Use programming languages like Python, R, and SQL for data manipulation, statistical analysis, and machine learning tasks.
- Apply fundamental statistical concepts such as mean, median, variance, probability distributions, and hypothesis testing to analyze data.
- Develop supervised and unsupervised machine learning models, including classification, regression, clustering, and dimensionality reduction techniques.
- Evaluate model performance using metrics such as accuracy, precision, recall, and F1-score, implementing cross-validation techniques to ensure reliability (a minimal illustrative sketch follows this posting).
- Conduct data manipulation and visualization using libraries such as Pandas, Matplotlib, Seaborn, and ggplot2, implementing data cleaning techniques to handle missing values and outliers.
- Perform exploratory data analysis, feature engineering, and data mining tasks including text mining, natural language processing (NLP), and web scraping.
- Familiarize yourself with big data technologies such as Apache Spark and Hadoop, understanding distributed computing concepts to handle large-scale datasets effectively.
- Manage relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra) for data storage and retrieval.
- Use version control systems like Git and GitHub/GitLab for collaborative development, understanding branching, merging, and versioning workflows.
- Demonstrate basic knowledge of the software development lifecycle, Agile methodologies, algorithms, and data structures.

Requirements:
- Bachelor's degree or higher in Computer Science, Statistics, Mathematics, or a related field.
- Proficiency in programming languages such as Python, R, and SQL.
- Strong analytical skills and a passion for working with data.
- Ability to learn quickly and adapt to new technologies and methodologies.
- Prior experience with data analysis, machine learning, or related fields is a plus.

Good To Have:
- Experience in Computer Vision, including image processing and video processing.
- Familiarity with Generative AI techniques, such as Generative Adversarial Networks (GANs), and their applications in image, text, and other data generation tasks.
- Knowledge of Large Language Models (LLMs) is a plus.
- Experience with Microsoft AI technologies, including Azure AI Studio and Azure Copilot Studio.

Job Type: Fresher
Pay: ₹10,000.00 - ₹16,000.00 per month
Benefits: Flexible schedule
Ability to commute/relocate: Vaishali Nagar, Jaipur, Rajasthan: Reliably commute or planning to relocate before starting work (Preferred)
Experience: Data science: 1 year (Required)
Work Location: In person
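The evaluation bullet above mentions cross-validation and metrics such as F1-score. For illustration only, a minimal scikit-learn sketch of that evaluation pattern follows; the synthetic dataset and model choice are arbitrary.

```python
# Illustrative only: a synthetic dataset and an arbitrary model choice; the
# point is the cross-validated evaluation pattern.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=8, random_state=1)
clf = DecisionTreeClassifier(max_depth=4, random_state=1)

# 5-fold cross-validation with F1 gives a more reliable estimate than a
# single train/test split.
f1_scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
print("mean F1:", round(f1_scores.mean(), 3), "per fold:", f1_scores.round(3))
```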
Posted 4 days ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About The Role
We are seeking a highly skilled and experienced Machine Learning Engineer to join our dynamic team. As a Machine Learning Engineer, you will be responsible for the design, development, deployment, and maintenance of machine learning models and systems that drive our [mention specific business area or product, e.g., recommendation engine, fraud detection system, autonomous vehicles]. You will work closely with data scientists, software engineers, and product managers to translate business needs into scalable and reliable machine learning solutions. This is a key role in shaping our future and requires a strong technical foundation combined with a passion for innovation and problem-solving.
Responsibilities
Model Development & Deployment:
* Design, develop, and deploy machine learning models using various algorithms (e.g., regression, classification, clustering, deep learning) to solve complex business problems.
* Select appropriate datasets and features for model training, ensuring data quality and integrity.
* Implement and optimize model training pipelines, including data preprocessing, feature engineering, model selection, and hyperparameter tuning.
* Deploy models to production environments using containerization technologies (e.g., Docker, Kubernetes) and cloud platforms (e.g., AWS, GCP, Azure).
* Monitor model performance in production, identify and troubleshoot issues, and implement model retraining and updates as needed.
Infrastructure & Engineering:
* Develop and maintain APIs for model serving and integration with other systems.
* Write clean, well-documented, and testable code.
* Collaborate with software engineers to integrate models into existing products and services.
Research & Innovation:
* Stay up-to-date with the latest advancements in machine learning and related technologies.
* Research and evaluate new algorithms, tools, and techniques to improve model performance and efficiency.
* Contribute to the development of new machine learning solutions and features.
* Proactively identify opportunities to leverage machine learning to solve business challenges.
Collaboration & Communication:
* Collaborate effectively with data scientists, software engineers, product managers, and other stakeholders.
* Communicate technical concepts and findings clearly and concisely to both technical and non-technical audiences.
* Participate in code reviews and contribute to the team's knowledge sharing.
Qualifications
Experience: 7+ years of experience in machine learning engineering or a related field.
Technical Skills:
* Programming Languages: Proficient in Python; experience with other languages (e.g., Java, Scala, R) is a plus.
* Machine Learning Libraries: Strong experience with machine learning libraries and frameworks such as scikit-learn, TensorFlow, PyTorch, Keras, etc.
* Data Processing: Experience with data manipulation and processing using libraries like Pandas, NumPy, and Spark.
* Model Deployment: Experience with model deployment frameworks and platforms (e.g., TensorFlow Serving, TorchServe, Seldon, AWS SageMaker, Google AI Platform, Azure Machine Learning).
* Databases: Experience with relational and NoSQL databases (e.g., SQL, MongoDB, Cassandra).
* Version Control: Experience with Git and other version control systems.
* DevOps: Familiarity with DevOps practices and tools.
* Strong understanding of machine learning concepts and algorithms: regression, classification, clustering, deep learning, etc.
Soft Skills:
* Excellent problem-solving and analytical skills.
* Strong communication and collaboration skills.
* Ability
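For illustration only, a minimal sketch of the kind of training pipeline this posting describes (preprocessing, model selection, hyperparameter tuning), using scikit-learn; the dataset, estimator, and parameter grid are hypothetical stand-ins, not the employer's actual workflow.

```python
# Minimal sketch of a model training pipeline: preprocessing, model selection,
# and hyperparameter tuning with scikit-learn. Dataset and grid are hypothetical.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

pipeline = Pipeline([
    ("scaler", StandardScaler()),                 # feature scaling step
    ("clf", LogisticRegression(max_iter=1_000)),  # candidate model
])

param_grid = {"clf__C": [0.1, 1.0, 10.0]}  # hyperparameters to search over
search = GridSearchCV(pipeline, param_grid, cv=5, scoring="roc_auc")
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("held-out ROC-AUC:", search.score(X_test, y_test))
```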
Posted 4 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Name: Tableau Administrator
Years of Experience: 5
Job Description: We are looking for an experienced Tableau Administrator to join our team. The ideal candidate will be responsible for managing, maintaining, and optimizing Tableau Server and Tableau Cloud environments to support business intelligence (BI) and data visualization initiatives. You will work closely with stakeholders to ensure secure and efficient data access, server performance, and user management. This role requires a deep understanding of Tableau Server administration, user security, and performance optimization within Tableau environments.
Primary Skills: Tableau Admin
Secondary Skills: SQL / PL SQL, Data & Analytics Concepts
Role Description: The Tableau Administrator will be responsible for managing the Tableau infrastructure to ensure smooth performance, data accessibility, and user management. You will play a key role in the installation, configuration, and maintenance of Tableau Server, while supporting data visualization teams and ensuring scalability and security. In this role, you will work closely with data analysts, business intelligence (BI) developers, and IT teams to ensure that the Tableau environment is optimized and aligned with business needs. You will also assist in performance tuning and troubleshoot issues related to the Tableau server, licenses, and permissions.
Role Responsibility:
Tableau Server Management: Install, configure, and maintain Tableau Server in multi-environment setups (Dev, QA, Prod).
Security & User Management: Administer user roles, permissions, and groups, and manage security settings, including SSO and LDAP configurations.
Performance Optimization: Monitor and optimize server performance to ensure efficient data visualization and fast report load times.
Data Integration & Management: Connect Tableau to various data sources, manage extract and refresh schedules, and troubleshoot connectivity issues.
Backup, Recovery, and Maintenance: Manage server backups, disaster recovery plans, and perform regular maintenance tasks, including patching and upgrades.
End-User Support: Provide technical support to Tableau users, including assistance with report publishing and dashboard troubleshooting.
Documentation: Create and maintain documentation for server configurations, user management policies, and best practices.
Training & Development: Conduct user training sessions and keep teams updated on Tableau Server features and best practices.
Role Requirement: Bachelor’s degree in Computer Science, Information Technology, or a related field. 5+ years of experience in Tableau administration, including hands-on experience with Tableau Server and Tableau Cloud. Strong expertise in Tableau Server configuration, maintenance, patching, and upgrades. Proven experience in performance tuning, load balancing, and security management. Proficiency in SQL, scripting (Python, Shell, PowerShell), and database management. Excellent troubleshooting skills and ability to monitor server health and performance metrics. Strong communication skills and experience working with cross-functional teams.
Additional Requirement: Experience with multi-tenant environments, server clustering, and disaster recovery planning. Familiarity with advanced security protocols such as row-level security and role-based access. Tableau certification is a plus.
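As a hedged illustration of the scripted administration the posting mentions (it asks for Python/Shell/PowerShell scripting, without naming a specific library), here is a minimal sketch using the open-source tableauserverclient package; the server URL, credentials, and site name are placeholders.

```python
# Sketch of a scripted Tableau Server admin task with the tableauserverclient
# library: sign in and list published workbooks. URL and credentials are placeholders.
import tableauserverclient as TSC

tableau_auth = TSC.TableauAuth("admin_user", "admin_password", site_id="analytics")
server = TSC.Server("https://tableau.example.com", use_server_version=True)

with server.auth.sign_in(tableau_auth):
    all_workbooks, pagination_item = server.workbooks.get()
    for wb in all_workbooks:
        # Print basic inventory details an administrator might audit
        print(wb.name, wb.project_name, wb.owner_id)
```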
Posted 4 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You. Job Description Job Title: Junior Data Scientist Location: Bangalore Reporting to: Senior Manager – Analytics 1. Purpose of the role The Global GenAI Team at Anheuser-Busch InBev (AB InBev) is tasked with constructing competitive solutions utilizing GenAI techniques. These solutions aim to extract contextual insights and meaningful information from our enterprise data assets. The derived data-driven insights play a pivotal role in empowering our business users to make well-informed decisions regarding their respective products. In the role of a Machine Learning Engineer (MLE), you will operate at the intersection of: LLM-based frameworks, tools, and technologies Cloud-native technologies and solutions Microservices-based software architecture and design patterns As an additional responsibility, you will be involved in the complete development cycle of new product features, encompassing tasks such as the development and deployment of new models integrated into production systems. Furthermore, you will have the opportunity to critically assess and influence the product engineering, design, architecture, and technology stack across multiple products, extending beyond your immediate focus. 2. Key tasks & accountabilities Large Language Models (LLM): Experience with LangChain, LangGraph Proficiency in building agentic patterns like ReAct, ReWoo, LLMCompiler Multi-modal Retrieval-Augmented Generation (RAG): Expertise in multi-modal AI systems (text, images, audio, video) Designing and optimizing chunking strategies and clustering for large data processing Streaming & Real-time Processing: Experience in audio/video streaming and real-time data pipelines Low-latency inference and deployment architectures NL2SQL: Natural language-driven SQL generation for databases Experience with natural language interfaces to databases and query optimization API Development: Building scalable APIs with FastAPI for AI model serving Containerization & Orchestration: Proficient with Docker for containerized AI services Experience with orchestration tools for deploying and managing services Data Processing & Pipelines: Experience with chunking strategies for efficient document processing Building data pipelines to handle large-scale data for AI model training and inference AI Frameworks & Tools: Experience with AI/ML frameworks like TensorFlow, PyTorch Proficiency in LangChain, LangGraph, and other LLM-related technologies Prompt Engineering: Expertise in advanced prompting techniques like Chain of Thought (CoT) prompting, LLM Judge, and self-reflection prompting Experience with prompt compression and optimization using tools like LLMLingua, AdaFlow, TextGrad, and DSPy Strong understanding of context window management and optimizing prompts for performance and efficiency 3. Qualifications, Experience, Skills Level of educational attainment required (1 or more of the following) Bachelor's or masterʼs degree in Computer Science, Engineering, or a related field. Previous work experience required Proven experience of 3+ years in developing and deploying applications utilizing Azure OpenAI and Redis as a vector database. 
Technical skills required: Solid understanding of language model technologies, including LangChain, OpenAI Python SDK, LlamaIndex, Ollama, etc. Proficiency in implementing and optimizing machine learning models for natural language processing. Experience with observability tools such as MLflow, LangSmith, Langfuse, Weights & Biases, etc. Strong programming skills in languages such as Python and proficiency in relevant frameworks. Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes). And above all of this, an undying love for beer! We dream big to create a future with more cheer.
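For illustration, a minimal sketch of the "scalable APIs with FastAPI for AI model serving" skill listed above; the endpoint name, request schema, and stand-in model are hypothetical.

```python
# Minimal sketch of serving a model behind a FastAPI endpoint.
# The model object and request schema are hypothetical stand-ins.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictRequest(BaseModel):
    features: list[float]

# In a real service this would be a trained model loaded once at startup.
def fake_model(features: list[float]) -> float:
    return sum(features) / max(len(features), 1)

@app.post("/predict")
def predict(req: PredictRequest) -> dict:
    score = fake_model(req.features)
    return {"score": score}

# Run with: uvicorn app:app --host 0.0.0.0 --port 8000
```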
Posted 4 days ago
1.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You. Job Description Job Title: Junior Data Scientist Location: Bangalore Reporting to: Senior Manager – Analytics 1. Purpose of the role The Global GenAI Team at Anheuser-Busch InBev (AB InBev) is tasked with constructing competitive solutions utilizing GenAI techniques. These solutions aim to extract contextual insights and meaningful information from our enterprise data assets. The derived data-driven insights play a pivotal role in empowering our business users to make well-informed decisions regarding their respective products. In the role of a Machine Learning Engineer (MLE), you will operate at the intersection of: LLM-based frameworks, tools, and technologies Cloud-native technologies and solutions Microservices-based software architecture and design patterns As an additional responsibility, you will be involved in the complete development cycle of new product features, encompassing tasks such as the development and deployment of new models integrated into production systems. Furthermore, you will have the opportunity to critically assess and influence the product engineering, design, architecture, and technology stack across multiple products, extending beyond your immediate focus. 2. Key tasks & accountabilities Large Language Models (LLM): Experience with LangChain, LangGraph Proficiency in building agentic patterns like ReAct, ReWoo, LLMCompiler Multi-modal Retrieval-Augmented Generation (RAG): Expertise in multi-modal AI systems (text, images, audio, video) Designing and optimizing chunking strategies and clustering for large data processing Streaming & Real-time Processing: Experience in audio/video streaming and real-time data pipelines Low-latency inference and deployment architectures NL2SQL: Natural language-driven SQL generation for databases Experience with natural language interfaces to databases and query optimization API Development: Building scalable APIs with FastAPI for AI model serving Containerization & Orchestration: Proficient with Docker for containerized AI services Experience with orchestration tools for deploying and managing services Data Processing & Pipelines: Experience with chunking strategies for efficient document processing Building data pipelines to handle large-scale data for AI model training and inference AI Frameworks & Tools: Experience with AI/ML frameworks like TensorFlow, PyTorch Proficiency in LangChain, LangGraph, and other LLM-related technologies Prompt Engineering: Expertise in advanced prompting techniques like Chain of Thought (CoT) prompting, LLM Judge, and self-reflection prompting Experience with prompt compression and optimization using tools like LLMLingua, AdaFlow, TextGrad, and DSPy Strong understanding of context window management and optimizing prompts for performance and efficiency 3. Qualifications, Experience, Skills Level of educational attainment required (1 or more of the following) Bachelor's or masterʼs degree in Computer Science, Engineering, or a related field. Previous work experience required Proven experience of 1+ years in developing and deploying applications utilizing Azure OpenAI and Redis as a vector database. 
Technical skills required: Solid understanding of language model technologies, including LangChain, OpenAI Python SDK, LlamaIndex, Ollama, etc. Proficiency in implementing and optimizing machine learning models for natural language processing. Experience with observability tools such as MLflow, LangSmith, Langfuse, Weights & Biases, etc. Strong programming skills in languages such as Python and proficiency in relevant frameworks. Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes). And above all of this, an undying love for beer! We dream big to create a future with more cheer.
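As a small illustration of the "chunking strategies for efficient document processing" this posting lists, here is one possible fixed-size, overlapping chunker; the chunk size and overlap values are arbitrary and would be tuned per use case.

```python
# Minimal sketch of a fixed-size, overlapping chunking strategy for RAG-style
# document processing. Chunk size and overlap are arbitrary illustrative values.
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

document = "beer brewing process " * 200  # placeholder for real document text
print(len(chunk_text(document)))
```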
Posted 4 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
We’re looking for a data-driven, analytical, and strategic SEO Manager with a proven track record of improving search engine rankings, organic traffic, and lead generation. The ideal candidate will lead the SEO strategy, execution, and performance analysis for our brand across on-page, off-page, and technical SEO pillars.
Key Responsibilities (KRAs)
1. Strategy & Planning: Develop and own the end-to-end SEO roadmap aligned with overall business goals. Conduct regular competitor and market analysis to identify keyword and backlinking opportunities. Collaborate with content, tech, and performance teams to drive integrated SEO campaigns.
2. On-Page SEO: Optimize content across landing pages, blogs, and core site pages (meta tags, header tags, internal linking, etc.). Implement keyword strategy, content clustering, and topic pillar planning. Manage SEO input for new website pages, microsites, or campaign landing pages.
3. Technical SEO: Conduct regular technical audits using tools like Screaming Frog, SEMrush, Ahrefs, or Google Search Console. Optimize site speed, mobile responsiveness, crawlability, indexation, and structured data (schema). Collaborate with developers to implement SEO best practices in code and site architecture.
4. Off-Page SEO: Plan and manage backlink building strategies (organic and outreach-based). Monitor and disavow toxic backlinks where necessary. Drive authority building through digital PR collaborations, guest posts, and directory submissions.
5. Content & Collaboration: Provide keyword briefs, topic suggestions, and optimization guidelines to content teams. Review and optimize blog posts, product pages, and campaign creatives from an SEO lens. Ensure content supports both search intent and business objectives.
6. Reporting & Analysis: Track and report weekly/monthly performance on KPIs: traffic, rankings, conversions, domain authority, etc. Monitor and adapt strategies based on changes in search engine algorithms. Set benchmarks and forecast growth potential through SEO.
Key Performance Indicators (KPIs): Increase in organic traffic (MoM / QoQ). Growth in number of ranking keywords (top 3, top 10, top 100). Improvement in domain authority and backlink quality. Website health scores (technical audit results). Increase in leads/conversions from organic search. Reduction in bounce rate and exit rate from SEO pages.
Required Skills & Tools: Strong understanding of search engine algorithms and ranking methods. Tools: Google Analytics, Google Search Console, SEMrush / Ahrefs / Moz, Screaming Frog, Surfer SEO or similar. Hands-on experience with CMS platforms (WordPress preferred). Understanding of HTML, CSS, JavaScript basics for SEO. Experience in using AI tools for content and optimization is a plus. Strong analytical skills, problem-solving mindset, and attention to detail.
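For illustration only, a toy sketch of one of the on-page checks referenced above (title and meta description), scripted with requests and BeautifulSoup; the URL is a placeholder and real audits would rely on the dedicated tools the posting names.

```python
# Toy sketch of an on-page SEO check: fetch a page and report its title and
# meta description lengths. URL is a placeholder; real audits use dedicated tools.
import requests
from bs4 import BeautifulSoup

url = "https://example.com/"
resp = requests.get(url, timeout=10)
soup = BeautifulSoup(resp.text, "html.parser")

title = soup.title.string.strip() if soup.title and soup.title.string else ""
meta = soup.find("meta", attrs={"name": "description"})
description = meta.get("content", "").strip() if meta else ""

print(f"title ({len(title)} chars): {title}")
print(f"meta description ({len(description)} chars): {description}")
```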
Posted 4 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Solenis is a leading global producer of specialty chemicals, delivering sustainable solutions for water-intensive industries, including consumer, industrial, institutional, food and beverage, and pool and spa water markets. Owned by Platinum Equity, our innovative portfolio includes advanced water treatment chemistries, process aids, functional additives, and state-of-the-art monitoring and control systems. These technologies enable our customers to optimize operations, enhance product quality, protect critical assets, and achieve their sustainability goals.
At our Global Excellence Center (GEC) in Hyderabad, we support Solenis’ global operations by driving excellence in IT, analytics, finance, and other critical business functions. Located in the heart of the IT hub, the GEC offers a dynamic work environment with strong career development opportunities in a rapidly growing yet stable organization. Employees benefit from world-class infrastructure, including an on-campus gym, recreation facilities, creche services, and convenient access to public transport.
Headquartered in Wilmington, Delaware, Solenis operates 69 manufacturing facilities worldwide and employs over 16,100 professionals across 130 countries. Recognized as a 2025 US Best Managed Company for the third consecutive year, Solenis is committed to fostering a culture of safety, diversity, and professional growth. For more information about Solenis, please visit www.solenis.com.
🚨 We're Hiring: Business Support Administrator-1
📍 Location: Hyderabad, India – Hybrid
🕒 Full-Time | Permanent Position
What You Need To Be Successful
Data Warehousing Development: Design and develop data warehouse schemas (star/snowflake schema). Build and manage Snowflake objects: databases, schemas, tables, views, stages, file formats, and sequences. Implement ELT/ETL pipelines for structured and semi-structured data (e.g., JSON, Avro, Parquet).
Data Integration: Integrate data from various sources (e.g., on-premise, cloud, third-party APIs). Use Snowpipe for real-time/continuous data ingestion. Work with tools like SQL Server, Coalesce, Informatica, Talend, dbt, Matillion, or Apache Airflow.
Performance Optimization: Optimize SQL queries and Snowflake virtual warehouses for performance and cost. Implement clustering keys, materialized views, result caching, and query profiling. Monitor and fine-tune auto-scaling and auto-suspend settings.
Security and Governance: Implement role-based access control (RBAC) and data masking policies. Ensure data encryption, privacy, and compliance with security policies. Set up audit logging and monitor user activity.
Collaboration and Reporting: Work closely with data analysts, engineers, and business teams to define data requirements. Provide data marts and data models to support dashboards and reporting tools (e.g., Tableau, Power BI, Looker).
Automation and CI/CD: Use Terraform, CloudFormation, or Snowflake CLI for infrastructure as code. Integrate Snowflake pipelines with GitHub, Jenkins, or other CI/CD tools.
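As an illustrative sketch of the Snowflake tasks listed above (clustering keys and a data masking policy), here is one possible approach using the snowflake-connector-python package; the connection parameters, table, and role names are placeholders, not the actual environment.

```python
# Sketch of creating a clustered table and a masking policy in Snowflake via
# snowflake-connector-python. Connection parameters and object names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="STAGING",
)
cur = conn.cursor()
try:
    # Table with a clustering key on the order date
    cur.execute("""
        CREATE TABLE IF NOT EXISTS orders (
            order_id NUMBER, customer_email STRING, order_ts TIMESTAMP_NTZ
        ) CLUSTER BY (TO_DATE(order_ts))
    """)
    # Column masking policy: only a privileged role sees the raw email
    cur.execute("""
        CREATE OR REPLACE MASKING POLICY email_mask AS (val STRING)
        RETURNS STRING ->
        CASE WHEN CURRENT_ROLE() IN ('ANALYST_FULL') THEN val ELSE '***MASKED***' END
    """)
    cur.execute("ALTER TABLE orders MODIFY COLUMN customer_email SET MASKING POLICY email_mask")
finally:
    cur.close()
    conn.close()
```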
Some Benefits Of Working With Us
Access to a huge array of internal and external training courses on our learning system (free)
Access to self-paced language training (free)
Birthday or wedding anniversary gift of INR 1500
Charity work once a year, to give back to the community
Company car, phone if required for role
Competitive health and wellness benefit plan
Continuous professional development with numerous opportunities for growth
Creche facility
Employee Business Resource Groups (EBRGs)
Electric car charging stations
Hybrid work arrangement, e.g., 3 days in office
Internet allowance
No-meeting Fridays
Parking on site (free)
Relocation assistance available
Staff hangout spaces, enjoy games like carrom, chess
Transport by cab if working the midnight – 7am shift
Well connected to public transport, only a 10 min walk to office
We understand that candidates will not meet every single desired job requirement. If your experience looks a little different from what we’ve identified and you think you can bring value to the role, we’d love to learn more about you. Solenis is constantly growing. Come and grow your career with us. At Solenis, we understand that our greatest asset is our people. That is why we offer competitive compensation, and numerous opportunities for professional growth and development. So, if you are interested in working for a world-class company and enjoy solving complex challenges, consider joining our team.
Posted 4 days ago
0 years
0 Lacs
India
On-site
Key Skills We’re Looking For
Machine Learning Mastery: Strong experience with regression, classification, clustering, regularization, and model tuning. Familiarity with CNNs, RNNs, Transformers is a plus.
Python Programming: Proficiency in NumPy, Pandas, Scikit-learn. Ability to build ML models from scratch and write clean, production-grade code.
SQL & Data Handling: Able to query large datasets efficiently and apply rigorous data validation.
Statistical & Analytical Thinking: Solid grasp of hypothesis testing, confidence intervals, and interpreting results in business context.
ML System Design: Experience building ML pipelines and deploying models in production (REST APIs, batch jobs, etc.). Understanding of monitoring and retraining models post-deployment.
🚀 Bonus If You Have:
Experience with distributed data processing (Spark, Dask, etc.)
MLOps tools like MLflow, Airflow, or SageMaker
Exposure to cloud platforms (AWS, Azure, GCP)
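For illustration only, a minimal sketch of "building an ML model from scratch" as listed above: ordinary least squares fitted with the normal equation in NumPy, on synthetic data with made-up coefficients.

```python
# Minimal "from scratch" model: ordinary least squares via the normal equation.
# Synthetic data; no library model classes used.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])                      # made-up true weights
y = X @ true_w + 0.3 + rng.normal(scale=0.1, size=200)   # intercept 0.3 plus noise

X_b = np.hstack([np.ones((X.shape[0], 1)), X])           # add intercept column
w_hat = np.linalg.solve(X_b.T @ X_b, X_b.T @ y)          # solve the normal equation

print("intercept:", w_hat[0])
print("weights:", w_hat[1:])
```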
Posted 4 days ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We are seeking a highly analytical and skilled Data Scientist to join our team. The ideal candidate will be adept at using large data sets to find opportunities for product and process optimization and using models to test the effectiveness of different courses of action. Responsibilities Data mining or extracting usable data from valuable data sources Utilize predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting, and other business outcomes Develop company A/B testing frameworks and test model quality Coordinate with cross-functional teams to implement models and monitor outcomes Conduct data-driven research and create models to understand complex behaviors and trends Present data insights and recommendations to senior management for strategic decision-making Develop and maintain prediction systems and machine learning algorithms Qualifications Bachelor's degree with minimum 3+ years of relevant experience Strong problem-solving skills with an emphasis on product development Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks Experience using statistical computer languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets A drive to learn and master new technologies and techniques Experience working with and creating data architectures
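For illustration of the A/B testing responsibility mentioned above, here is one common way to evaluate a completed test, a two-proportion z-test with statsmodels; the conversion counts are made-up numbers.

```python
# Sketch of evaluating an A/B test: two-proportion z-test on conversion counts.
# Counts are made-up illustrative numbers.
from statsmodels.stats.proportion import proportions_ztest

conversions = [430, 480]        # variant A, variant B
observations = [10_000, 10_000]  # visitors per variant

z_stat, p_value = proportions_ztest(count=conversions, nobs=observations)
print(f"z = {z_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("difference is statistically significant at the 5% level")
else:
    print("no significant difference detected")
```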
Posted 4 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.
What You Will Be Doing… This role will be responsible for analyzing large, complex datasets and identifying meaningful patterns that lead to actionable recommendations. This role will be performing thorough testing and validation of models, and supporting various aspects of the business with data analytics. Responsible for the design, development, testing, deployment, maintenance, and improvement of machine learning system software. Leverages big data tools and programming frameworks to ensure that the raw data gathered from data pipelines are redefined as data science models that are ready to scale as needed. Asserts that production ML tasks are working properly and ensures that data science code is maintainable and scalable in terms of actual execution and scheduling. Uses experience, expertise and skills to make recommendations and solve problems that are more difficult and infrequent. Develop and deploy knowledge-based structures to be used by various teams, systems, and tools. Develop tools and processes to maintain, monitor, and enhance existing knowledge-based structures, ensuring their ongoing performance. Identify and implement various sources of data from across the company to further improve knowledge-based structures. Utilize feedback from users and reports to identify gaps and resolve issues. Work with clients to gather requirements and recommend possible solutions to business initiatives.
What We Are Looking For… You'll need to have: Bachelor’s degree or four or more years of work experience. Four or more years of relevant machine learning experience. Experience in machine learning algorithms, including forecasting, clustering, classification, reinforcement learning, deep learning, etc. Experience in Spark and/or Python. Strong experience in Big Data Analysis. Experience with Generative AI and GPU solution optimization. Good mobile telecommunications industry knowledge, including experience with handset manufacturers, network equipment vendors, and/or chipset vendors. Ability to do statistical modeling, build predictive models and leverage machine learning algorithms, and build efficient ML pipelines.
Even better if you have one or more of the following: Strong communication skills to collaborate with cross-functional teams and explain technical concepts. Strong analytical capabilities, creativity and critical thinking. Excellent problem-solving skills and ability to debug complex issues across the entire stack. If Verizon and this role sound like a fit for you, we encourage you to apply even if you don’t meet every “even better” qualification listed above. #TPDNONCDIO
Where you’ll be working In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. 
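For illustration of the Spark and clustering skills mentioned above, a minimal PySpark ML pipeline sketch; the data and column names are hypothetical and unrelated to Verizon's systems.

```python
# Minimal sketch of a PySpark ML pipeline: assemble features and fit k-means.
# Data and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("clustering-sketch").getOrCreate()

df = spark.createDataFrame(
    [(5.1, 3.5), (4.9, 3.0), (6.7, 3.1), (6.3, 2.5)],
    ["usage_gb", "avg_latency"],
)

assembler = VectorAssembler(inputCols=["usage_gb", "avg_latency"], outputCol="features")
kmeans = KMeans(featuresCol="features", k=2, seed=42)

model = Pipeline(stages=[assembler, kmeans]).fit(df)
model.transform(df).select("usage_gb", "avg_latency", "prediction").show()
```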
Scheduled Weekly Hours 40 Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
Posted 4 days ago
5.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Salesforce Lightning Web Components
Good to have skills: NA
Minimum 5 Year(s) Of Experience Is Required
Educational Qualification: 15 years full time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for managing the team and ensuring successful project delivery. Your typical day will involve collaborating with multiple teams, making key decisions, and providing solutions to problems for your immediate team and across multiple teams.
Roles & Responsibilities: - Expected to be an SME - Collaborate and manage the team to perform - Responsible for team decisions - Engage with multiple teams and contribute to key decisions - Provide solutions to problems for their immediate team and across multiple teams - Lead the effort to design, build, and configure applications - Act as the primary point of contact for the project - Manage the team and ensure successful project delivery - Collaborate with multiple teams to make key decisions - Provide solutions to problems for the immediate team and across multiple teams
Professional & Technical Skills: - Must Have Skills: Proficiency in Salesforce Lightning Web Components - Strong understanding of statistical analysis and machine learning algorithms - Experience with data visualization tools such as Tableau or Power BI - Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms - Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity
Additional Information: - The candidate should have a minimum of 5 years of experience in Salesforce Lightning Web Components - This position is based at our Hyderabad office - A 15 years full-time education is required
Posted 4 days ago
5.0 years
0 Lacs
Kochi, Kerala, India
On-site
🚀 We're Hiring: Data Analyst (5+ Years Experience) – Join Our New Kochi Office! We are a multinational company headquartered in the UAE, expanding to India with a new office in Infopark, Kochi, a leading IT hub in the region. This is an exciting opportunity to be part of our journey from the very beginning, contributing to the growth and success of our Indian operations.
🔹 Role: Data Analyst 🔹 Experience: 5+ years 🔹 Location: Kochi, Infopark 🔹 Industry: AdTech 🔹 Package: Excellent salary & benefits
Job Overview: We are seeking a highly skilled Data Analyst with experience in Machine Learning to join our analytics team. In this role, you will leverage your expertise in programming, statistical modelling, and data visualization to extract insights from complex datasets, build predictive models, and enable data-driven decision-making.
Key Responsibilities: Collect, clean, and analyze large datasets using SQL and Python. Build and implement machine learning models, including supervised and unsupervised learning techniques. Perform data segmentation, clustering, and predictive modelling to identify trends and business opportunities. Evaluate model performance using appropriate statistical and ML metrics to ensure reliability and accuracy. Develop and maintain Power BI dashboards for monitoring model performance and sharing insights with stakeholders. Collaborate with cross-functional teams to translate data insights into actionable business strategies. Ensure high data accuracy through validation and anomaly checks. Continuously explore opportunities to improve data processes and analytics frameworks.
Qualifications: Bachelor’s or Master’s degree in Data Science, Statistics, Computer Science, or a related field. Strong proficiency in Python (pandas, NumPy, scikit-learn, statsmodels) for data manipulation and modelling. Advanced SQL skills for complex data extraction and transformation. Experience with machine learning techniques: supervised and unsupervised learning, segmentation and clustering methods, predictive modelling and forecasting. Strong understanding of model evaluation metrics (accuracy, precision, recall, ROC-AUC, etc.). Hands-on experience with Power BI for creating dashboards and data storytelling. Strong analytical mindset with excellent problem-solving and communication skills.
Why Join Us? Work in a fast-paced, data-driven environment with real impact on growth. Be part of a new chapter in a fast-growing multinational company. Enjoy a competitive salary, excellent benefits, and hybrid work options. Work in Infopark, Kochi, surrounded by top IT companies.
📩 Apply Now! Send your resume to careers@firstscreen.com or DM us for more details. 🚀 Be part of something new. Grow with us! 🚀
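For illustration of the segmentation and clustering work described above, a minimal scikit-learn sketch: scale features, fit k-means, and check cluster quality with the silhouette score; the data is synthetic.

```python
# Sketch of customer segmentation: scale features, fit k-means, and check
# cluster quality with the silhouette score. Data is synthetic.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
# Two synthetic "customer" groups with different behavior
X = np.vstack([rng.normal(loc=0, size=(100, 2)), rng.normal(loc=4, size=(100, 2))])

X_scaled = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(X_scaled)

print("silhouette score:", silhouette_score(X_scaled, labels))
```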
Posted 4 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Management Level E Equiniti is a leading international provider of shareholder, pension, remediation, and credit technology. With over 6000 employees, it supports 37 million people in 120 countries. EQ India began its operations in 2014 as a Global India Captive Centre for Equiniti, a leading fintech company specialising in shareholder management. Within a decade, EQ India strengthened its operations and transformed from being a capability centre to a Global Competency Centre, to support EQ's growth story worldwide. Capitalising on India’s strong reputation as a global talent hub for IT / ITES, EQ India has structured the organisation to be a part of this growth story. Today, EQ India has evolved as an indispensable part of EQ Group providing critical fintech services to the US and UK. Role summary: The Senior Technical Architect is part of a team responsible for technical leadership, governance, and infrastructure designs for EQ projects. The role ensures that technical systems and infrastructure are designed to support business requirements, technical and security standards, and technology strategy. Applicants should have detailed knowledge of IT Infrastructure, covering public cloud platforms (AWS preferred) and on premises data centre solutions. Prior experience as a Technical Architect is essential, along with strong skills in engaging stakeholders, collaborating across a range of technical and business disciplines to agree solutions, and presenting technical proposals and designs to review boards. Core Duties/Responsibilities: Maintaining engagement with the wider Equiniti environment by creating and communicating standards, governance processes, approved architecture models, systems and technologies deployed and corporate and IT strategies. Work across a range of EQ projects including data centre to AWS migration, platform upgrades, and new product implementations. Act as a key resource in the project lifecycle, driving initiation, reviewing requirements, completing the infrastructure design, and providing technical oversight for implementation teams. Support project initiation by providing cost and complexity assessments, engaging with stakeholders, and helping to define the scope of activities. Review requirements and undertake discovery activities to propose technical solutions that meet business needs while meeting technical standards for quality, supportability, and cost. Produce high quality technical designs and support the creation of build documentation providing effective technical solutions to EQ business requirements. Participate in architecture design reviews and other technical governance forums across the organisation representing the infrastructure architecture team across multiple projects. Contribute to knowledge management by adding to and supporting the maintenance of infrastructure architecture artifact repositories. Contribute to the definition and maintenance of architectural, security and technical standards, reflecting evolving technology and emerging best practice. Promote improvements to processes and standards within architecture teams, and the wider technology function. Skills, Knowledge & Experience: Skilled communicator, comfortable engaging a range of stakeholders, and capable of understanding business requirements and translating them into technical solutions. Experience creating high quality multi-tiered infrastructure designs for new and existing application services in accordance with defined standards. 
Experienced at providing cost estimates for on-premises and public cloud solutions. Experience across a range of data centre technologies such as server, storage, networks, virtualisation solutions. Experience of designing infrastructure solutions for public cloud platforms (AWS/Azure). Experience of working with complex network topologies and familiarity with a range of network technologies across on-premises and cloud environments. A track record of successfully meeting project deadlines, budgets, and quality standards. Technical certification and knowledge of architecture and delivery frameworks is a distinct advantage (AWS / Azure Solution Architect, CCNA, M365, TOGAF, Prince2, Agile). Technical Ability: In-depth experience of proposing and designing technical solutions across a wide range of technologies in an Enterprise environment. Core Microsoft technologies such as: Active Directory, Exchange, Hyper-V, M365, SharePoint, SQL, Windows Server. Public cloud platforms such as Amazon Web Services and Microsoft Azure. Deployment, configuration management and monitoring systems such as Terraform, Puppet, and New Relic. High availability and load balancing including Microsoft clustering and hardware load balancers. Physical infrastructure such as data centres, server hardware, hypervisors, SAN storage solutions, and network infrastructure. Infrastructure security platforms, tooling, and vulnerability assessment. Secure File Transfer Platforms such as Progress MoveIT. Familiarity with designing solutions to support a range of commercially available and bespoke applications. We are committed to equality of opportunity for all staff and applications from individuals are encouraged regardless of age, disability, sex, gender reassignment, sexual orientation, pregnancy and maternity, race, religion or belief and marriage and civil partnerships. Please note any offer of employment is subject to satisfactory pre-employment screening checks.
Posted 4 days ago
0 years
0 Lacs
India
Remote
Job Details:
Role: Solution Architect
Employment Type: FTE with Vdart Digital
Work Location: Remote
Job Description: Solution Architect - Analytics (Snowflake)
Role Summary: We are seeking a Solution Architect with a strong background in application modernization and enterprise data platform migrations.
Key Responsibilities: Provide solution architecture leadership & strategy development. Engage with BAs to translate functional and non-functional requirements into solution designs. Lead the overall design, including detailed configuration. Collaborate with Enterprise Architecture. Conduct thorough reviews of code and BRDs to ensure alignment with architectural standards, business needs, and technical feasibility. Evaluate system designs, ensuring scalability, security, and performance while adhering to best practices and organizational guidelines. Troubleshoot and resolve technical challenges encountered during coding, integration, and testing phases to maintain project timelines and quality. Strong expertise in data warehousing and data modelling. Excellent communication, collaboration and presentation skills. Experience with ETL/ELT tools and processes, building complex pipelines and data ingestion.
SQL skillset needed: Should be able to write advanced SQL and complex joins. Subqueries (correlated/non-correlated), CTEs. Window functions. Aggregations - GROUP BY, ROLLUP, CUBE, PIVOT.
Snowflake skills needed: Should be able to understand and write UDFs and stored procedures in Snowflake. Have a good understanding of Snowflake architecture, clustering, micro-partitions, caching, virtual warehouses, stages, storage, and security (row- and column-level security). Knowledge of Snowflake features (Streams, time-travel, zero-copy cloning, Snowpark and tasks). Provide expert recommendations on frameworks, tools, and methodologies to optimize development efficiency and system robustness. Performance tuning within Snowflake (performance bottlenecks, materialized views, search optimization). Solution design - ability to architect scalable, cost-effective Snowflake solutions. Cost management - monitor and optimize Snowflake credit usage and storage costs.
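For illustration of the "advanced SQL" items listed above (a CTE plus window functions), here is one possible query of the kind that might be submitted to Snowflake, held in a Python string; the table and column names are hypothetical.

```python
# Illustrative advanced SQL (CTE + window functions), as it might be submitted
# to Snowflake. Table and column names are hypothetical.
RANKED_ORDERS_SQL = """
WITH daily AS (
    SELECT customer_id,
           TO_DATE(order_ts) AS order_day,
           SUM(amount)       AS day_total
    FROM analytics.sales.orders
    GROUP BY customer_id, TO_DATE(order_ts)
)
SELECT customer_id,
       order_day,
       day_total,
       RANK() OVER (PARTITION BY customer_id ORDER BY day_total DESC) AS spend_rank,
       SUM(day_total) OVER (PARTITION BY customer_id)                 AS customer_total
FROM daily
QUALIFY spend_rank <= 3
"""

print(RANKED_ORDERS_SQL)
```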
Posted 4 days ago
4.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Overview: We are looking for a Senior Data Scientist with a strong foundation in machine learning, data analysis, and a growing expertise in LLMs and Gen AI. The ideal candidate will be passionate about uncovering insights from data, proposing impactful use cases, and building intelligent solutions that drive business value. Key Responsibilities: Analyze structured and unstructured data to identify trends, patterns, and opportunities. Propose and validate AI/ML use cases based on business data and stakeholder needs. Build, evaluate, and deploy machine learning models for classification, regression, clustering, etc. Work with LLMs and GenAI tools to prototype and integrate intelligent solutions (e.g., chatbots, summarization, content generation). Collaborate with data engineers, product managers, and business teams to deliver end-to-end solutions. Ensure data quality, model interpretability, and ethical AI practices. Document experiments, share findings, and contribute to knowledge sharing within the team Required Skills & Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, or related field. 3–4 years of hands-on experience in data science and machine learning. Proficient in Python and ML libraries Experience with data wrangling, feature engineering, and model evaluation. Exposure to LLMs and GenAI tools (e.g., Hugging Face, LangChain, OpenAI APIs). Familiarity with cloud platforms (AWS, GCP, or Azure) and version control (Git). Strong communication and storytelling skills with a data-driven mindset.
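For illustration of the GenAI prototyping mentioned above (OpenAI APIs are one of the tools listed), a minimal summarization sketch with the OpenAI Python SDK; the model name is a placeholder and an API key is assumed to be set in the environment.

```python
# Minimal sketch of prototyping a summarization call with the OpenAI Python SDK.
# Model name is a placeholder; OPENAI_API_KEY is assumed to be set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

document = "Quarterly sales rose 12% driven by the new product line..."  # placeholder text

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "Summarize the user's text in one sentence."},
        {"role": "user", "content": document},
    ],
)
print(response.choices[0].message.content)
```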
Posted 4 days ago
7.5 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Hello, Greetings From ZettaMine!!!
Job Title: SAP BTP GenAI Hub
Location: Chennai, India
Experience Required: Minimum 7.5 years
Job Summary: We are seeking a skilled and experienced Application Designer with a strong background in SAP BTP GenAI Hub. In this role, you will be responsible for defining functional requirements and designing scalable and efficient application solutions that align with business processes. You will collaborate with cross-functional teams and stakeholders, contributing to strategic decision-making and ensuring the successful implementation of applications across the organization.
Key Responsibilities: Act as a Subject Matter Expert (SME) and lead design-related activities. Collaborate with stakeholders to gather and understand application requirements. Design and develop robust application solutions that meet business process needs. Provide leadership and support to the development team throughout the solution lifecycle. Contribute to architectural and design decisions across multiple teams. Drive innovation and continuous improvement in application design. Resolve complex technical issues, offering guidance and problem-solving support. Ensure timely and successful implementation of application solutions.
Professional & Technical Skills: Must Have: Expertise in SAP BTP GenAI Hub with at least 7.5 years of hands-on experience. Strong understanding of statistical analysis and machine learning algorithms. Experience implementing models such as linear regression, logistic regression, decision trees, and clustering algorithms. Proficient in data visualization tools such as Tableau or Power BI. Proficient in data munging techniques including cleaning, transformation, and normalization.
If you are interested in the above roles and responsibilities, please share your updated resume along with the following details to vishalanand.s@zettamine.com: Name as per Aadhar card, Mobile Number, Alternative Mobile, Mail ID, Alternative Mail ID, Date of Birth, Total EXP, Relevant EXP, Current CTC, ECTC, Notice period (LWD), Updated resume, Holding Offer (if any), Interview availability, Any Career/Education Gap, ZettaMine Payroll (Yes/No), Certifications (if yes, please mention).
Thanks & Regards, Vishal Anand Senior Executive- TAG ZettaMine Labs Pvt Ltd. Mobile: +91 6302334827 Email: vishalanand.s@zettamine.com
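For illustration of the data munging techniques named above (cleaning, transformation, normalization), a small pandas sketch on made-up data; the columns and values are hypothetical.

```python
# Small sketch of data munging steps: cleaning, transformation, and
# normalization, using pandas on made-up data.
import pandas as pd

df = pd.DataFrame({
    "region": ["North", "south ", None, "East"],
    "revenue": [1200.0, None, 800.0, 1500.0],
})

# Cleaning: standardize text and fill missing values
df["region"] = df["region"].fillna("Unknown").str.strip().str.title()
df["revenue"] = df["revenue"].fillna(df["revenue"].median())

# Transformation: derive a feature in thousands
df["revenue_k"] = df["revenue"] / 1000

# Normalization: min-max scale revenue to [0, 1]
rmin, rmax = df["revenue"].min(), df["revenue"].max()
df["revenue_norm"] = (df["revenue"] - rmin) / (rmax - rmin)

print(df)
```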
Posted 4 days ago