
2210 Clustering Jobs - Page 32

JobPe aggregates these listings for easy access; applications are submitted directly on the original job portal.

5.0 years

20 - 28 Lacs

Bengaluru

On-site

Job Purpose: We are seeking a dynamic and skilled Data Scientist to join our analytics team. The ideal candidate will work collaboratively with data scientists, business stakeholders, and subject matter experts to deliver end-to-end advanced analytics projects that support strategic decision-making. This role demands technical expertise, strong problem-solving abilities, and the capability to translate business needs into impactful analytical solutions.

Key Responsibilities:
- Collaborate with internal stakeholders to design and deliver advanced analytics projects.
- Independently manage project workstreams with minimal supervision.
- Identify opportunities where analytics can support or improve business decision-making processes.
- Provide innovative solutions beyond traditional analytics methodologies.
- Apply strong domain knowledge and technical expertise to develop conceptually sound models and tools.
- Mentor and guide junior team members in their professional development.
- Communicate analytical findings effectively through clear and impactful presentations.

Desired Skills & Experience:
- Relevant Experience: 5+ years of analytics experience in Financial Services (Universal Bank/NBFC/Insurance), Rating Agencies, E-commerce, Retail, or Consulting. Exposure to areas like Customer Analytics, Retail Analytics, Collections & Recovery, Credit Risk Ratings, etc.
- Statistical & Modeling Expertise: Hands-on experience in techniques such as Logistic & Linear Regression, Bayesian Modeling, Classification, Clustering, Neural Networks, Non-parametric Methods, and Multivariate Analysis.
- Tools & Languages: Proficiency in R, S-Plus, SAS, STATA. Exposure to Python and SPSS is a plus.
- Data Handling: Experience with relational databases and intermediate SQL skills. Comfortable working with large datasets using tools like Hadoop, Hive, and MapReduce.
- Analytical Thinking: Ability to derive actionable insights from structured and unstructured data. Strong problem-solving mindset with the ability to align analytics with business objectives.
- Communication: Excellent verbal and written communication skills to articulate findings and influence stakeholders.
- Learning Orientation: Eagerness to learn new techniques and apply creative thinking to solve real-world business problems.

Job Types: Full-time, Permanent
Pay: ₹2,030,998.06 - ₹2,822,692.49 per year
Benefits: Health insurance, Provident Fund
Schedule: Morning shift
Supplemental Pay: Performance bonus, yearly bonus
Application Question(s): How many years of experience in NBFC/BFSI domain? What's your Notice Period?
Experience: Data science: 5 years (Required)
Work Location: In person
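For orientation only (not part of the posting): a minimal scikit-learn sketch of two techniques named above, clustering and logistic regression, on synthetic data; all column meanings and figures are illustrative assumptions.

```python
# Illustrative only: customer segmentation (clustering) plus a simple
# propensity model (logistic regression) on synthetic data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 4))     # e.g. spend, tenure, utilisation, delinquency (hypothetical)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_scaled = StandardScaler().fit_transform(X)

# Unsupervised step: group customers into behavioural segments.
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_scaled)

# Supervised step: predict an outcome (e.g. default or response) per customer.
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print("segment sizes:", np.bincount(segments))
print("holdout accuracy:", round(clf.score(X_test, y_test), 3))
```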

Posted 3 weeks ago

Apply

1.0 - 2.0 years

2 - 4 Lacs

Bengaluru

On-site

Job Location: Bangalore (In-person, travel to partner schools within the city)
Job Type: Full-time, Permanent
Schedule: Day Shift

About the Role: We are looking for an enthusiastic and knowledgeable AI & Python Educator to join our STEM education team. This role involves delivering structured, interactive lessons in Artificial Intelligence, Python Programming, and Machine Learning fundamentals for students from Grades 3 to 10. The ideal candidate should have a solid background in Python, an interest in AI/ML, and a passion for teaching school students in an engaging, project-based format. Minimal electronics exposure (only basic awareness for AI-integrated projects like AIoT) is desirable, but not mandatory.

Key Responsibilities:
- Curriculum Delivery: Teach structured, interactive lessons on Python Programming (basics to intermediate), AI concepts, Machine Learning fundamentals, and real-world AI applications tailored for school students.
- Hands-On AI Projects: Guide students through practical AI projects such as image classification, object detection, chatbots, text-to-speech systems, face recognition, and AI games using Python and AI tools like Teachable Machine, PictoBlox AI, OpenCV, and scikit-learn.
- Concept Simplification: Break down complex AI/ML concepts like data classification, regression, clustering, and neural networks into age-appropriate, relatable classroom activities.
- Classroom Management: Conduct in-person classes at partner schools, ensuring a positive, engaging, and disciplined learning environment.
- Student Mentoring: Motivate students to think logically, solve problems creatively, and build confidence in coding and AI-based projects.
- Progress Assessment: Track student progress, maintain performance reports, and share timely, constructive feedback.
- Technical Troubleshooting: Assist students in debugging Python code, handling AI tools, and resolving software-related queries during class.
- Stakeholder Coordination: Collaborate with school management and internal academic teams for lesson planning, scheduling, and conducting AI project showcases.
- Continuous Learning: Stay updated on the latest developments in AI, Python, and education technology through training and workshops.

Qualifications:
- Education: Diploma / BCA / B.Sc / BE / MCA / M.Sc / M.Tech in Computer Science, AI/ML, Data Science, or related fields
- Experience: 1 to 2 years of teaching or EdTech experience in AI/ML, Python, or STEM education preferred. Freshers with a strong Python portfolio and AI project experience are encouraged to apply.

Skills & Competencies:
- Strong knowledge of Python programming (syntax, data structures, file handling, functions, OOP)
- Practical understanding of AI & Machine Learning basics
- Hands-on experience with AI tools like Teachable Machine, the PictoBlox AI extension, OpenCV, and scikit-learn
- Excellent communication, explanation, and classroom management skills
- Student-friendly, organized, and enthusiastic about AI education
- Basic knowledge of electronics (optional) for AIoT integrations (e.g., using sensors with AI models)

Perks & Benefits:
- Structured in-house AI & Python training programs
- Opportunity to work with reputed schools across Bangalore
- Career growth in the AI & STEM education sector

Salary & Employment Details:
Salary: ₹20,000 to ₹35,000 per month (based on experience and performance)
Job Types: Full-time, Permanent
Pay: ₹20,000.00 - ₹35,000.00 per month
Work Location: In-person, travel to schools within Bangalore city
Schedule: Day shift (Monday to Saturday)
Experience: total work: 1 year (Preferred)
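An illustrative classroom-style demo, not curriculum from the posting: a beginner-friendly scikit-learn classifier on the bundled digits dataset, the kind of "data classification" activity the listing mentions.

```python
# Illustrative classroom demo only: teach "the computer learns from examples"
# using scikit-learn's built-in handwritten-digit images.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()                               # 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print("How many test images did the computer label correctly?")
print(f"{model.score(X_test, y_test):.0%}")
```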

Posted 3 weeks ago

Apply

5.0 years

4 - 6 Lacs

Bengaluru

On-site

Job Title: Senior AI Engineer
Location: Bengaluru, India (Hybrid)

At Reltio®, we believe data should fuel business success. Reltio's AI-powered data unification and management capabilities—encompassing entity resolution, multi-domain master data management (MDM), and data products—transform siloed data from disparate sources into unified, trusted, and interoperable data. Reltio Data Cloud™ delivers interoperable data where and when it's needed, empowering data and analytics leaders with unparalleled business responsiveness. Leading enterprise brands—across multiple industries around the globe—rely on our award-winning data unification and cloud-native MDM capabilities to improve efficiency, manage risk and drive growth.

At Reltio, our values guide everything we do. With an unyielding commitment to prioritizing our "Customer First", we strive to ensure their success. We embrace our differences and are "Better Together" as One Reltio. We are always looking to "Simplify and Share" our knowledge when we collaborate to remove obstacles for each other. We hold ourselves accountable for our actions and outcomes and strive for excellence. We "Own It". Every day, we innovate and evolve, so that today is "Always Better Than Yesterday". If you share and embody these values, we invite you to join our team at Reltio and contribute to our mission of excellence.

Reltio has earned numerous awards and top rankings for our technology, our culture and our people. Reltio was founded on a distributed workforce and offers flexible work arrangements to help our people manage their personal and professional lives. If you're ready to work on unrivaled technology where your desire to be part of a collaborative team is met with a laser-focused mission to enable digital transformation with connected data, let's talk!

Job Summary:
As a Senior AI Engineer at Reltio, you will be a core part of the team responsible for building intelligent systems that enhance data quality, automate decision-making, and drive entity resolution at scale. You will work with cross-functional teams to design and deploy advanced AI/ML solutions that are production-ready, scalable, and embedded into our flagship data platform. This is a high-impact engineering role with exposure to cutting-edge problems in entity resolution, deduplication, identity stitching, record linking, and metadata enrichment.

Job Duties and Responsibilities:
- Design, implement, and optimize state-of-the-art AI/ML models for solving real-world data management challenges such as entity resolution, classification, similarity matching, and anomaly detection.
- Work with structured, semi-structured, and unstructured data to extract signals and engineer intelligent features for large-scale ML pipelines.
- Develop scalable ML workflows using Spark, MLlib, PyTorch, TensorFlow, or MLflow, with seamless integration into production systems.
- Translate business needs into technical design and collaborate with data scientists, product managers, and platform engineers to operationalize models.
- Continuously monitor and improve model performance using feedback loops, A/B testing, drift detection, and retraining strategies.
- Conduct deep dives into customer data challenges and apply innovative machine learning algorithms to address accuracy, speed, and bias.
- Actively contribute to research and experimentation efforts, staying updated with the latest AI trends in graph learning, NLP, probabilistic modeling, etc.
- Document designs and present outcomes to both technical and non-technical stakeholders, fostering transparency and knowledge sharing.

Skills You Must Have:
- Bachelor's or Master's degree in Computer Science, Machine Learning, Artificial Intelligence, or a related field. PhD is a plus.
- 5+ years of hands-on experience in developing and deploying machine learning models in production environments.
- Proficiency in Python (NumPy, scikit-learn, pandas, PyTorch/TensorFlow) and experience with large-scale data processing tools (Spark, Kafka, Airflow).
- Strong understanding of ML fundamentals, including classification, clustering, feature selection, hyperparameter tuning, and evaluation metrics.
- Demonstrated experience working with entity resolution, identity graphs, or data deduplication.
- Familiarity with containerized environments (Docker, Kubernetes) and cloud platforms (AWS, GCP, Azure).
- Strong debugging, analytical, and communication skills with a focus on delivery and impact.
- Attention to detail, ability to work independently, and a passion for staying updated with the latest advancements in the field of data science.

Skills Good to Have:
- Experience with knowledge graphs, graph-based ML, or embedding techniques.
- Exposure to deep learning applications in data quality, record matching, or information retrieval.
- Experience building explainable AI solutions in regulated domains.
- Prior work in SaaS, B2B enterprise platforms, or data infrastructure companies.

Why Join Reltio?
Health & Wellness:
- Comprehensive group medical insurance, including your parents, with additional top-up options
- Accidental insurance
- Life insurance
- Free online unlimited doctor consultations
- An Employee Assistance Program (EAP)
Work-Life Balance:
- 36 annual leaves, which include 18 sick leaves and 18 earned leaves
- 26 weeks of maternity leave, 15 days of paternity leave
- Very unique to Reltio: one week of additional time off as a recharge week every year, globally
Support for home office setup: home office setup allowance
Stay Connected, Work Flexibly:
- Mobile & internet reimbursement
- No need to pack a lunch—we've got you covered with a free meal
And many more…

Reltio is proud to be an equal opportunity workplace. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. Reltio is committed to working with and providing reasonable accommodation to applicants with physical and mental disabilities.
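A toy illustration of the record-matching idea behind entity resolution, not Reltio's method: it pairs records whose field similarity crosses a threshold, whereas production systems add blocking, learned scoring, and graph-based merging. All record data and weights below are made up.

```python
# Illustrative sketch only: naive pairwise record matching on a tiny dataset.
from difflib import SequenceMatcher
from itertools import combinations

records = [
    {"id": 1, "name": "Acme Corp.", "email": "info@acme.com"},
    {"id": 2, "name": "ACME Corporation", "email": "info@acme.com"},
    {"id": 3, "name": "Globex Ltd", "email": "sales@globex.io"},
]

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(r1: dict, r2: dict) -> float:
    # Weighted blend of field similarities; weights are arbitrary for the demo.
    return 0.6 * similarity(r1["name"], r2["name"]) + 0.4 * similarity(r1["email"], r2["email"])

candidate_pairs = [
    (r1["id"], r2["id"], round(match_score(r1, r2), 2))
    for r1, r2 in combinations(records, 2)
    if match_score(r1, r2) > 0.8
]
print(candidate_pairs)   # records 1 and 2 likely resolve to the same entity
```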

Posted 3 weeks ago

Apply

2.0 - 4.0 years

3 - 10 Lacs

Bengaluru

On-site

About the role
Enable data-driven decision making across the Tesco business globally by developing analytics solutions using a combination of math, tech and business knowledge.

You will be responsible for
- Understanding business needs and developing an in-depth understanding of Tesco processes
- Building on Tesco processes and knowledge by applying CI tools and techniques
- Completing tasks and transactions within agreed KPIs
- Solving problems by analyzing solution alternatives
- Engaging with business & functional partners to understand business priorities, ask relevant questions and scope the same into an analytical solution document calling out how application of data science will improve decision making
- In-depth understanding of techniques to prepare the analytical data set leveraging multiple complex data sources
- Building statistical models and ML algorithms with practitioner-level competency
- Writing structured, modularized & codified algorithms using Continuous Improvement principles (development of knowledge assets and reusable modules on GitHub, Wiki, etc.) with expert competency
- Building an easy visualization layer on top of the algorithms in order to empower end-users to take decisions; this could be on a visualization platform (Tableau / Python) or through a recommendation set through PPTs
- Working with the line manager to ensure application / consumption and proactively identifying opportunities to help the larger Tesco business with areas of improvement
- Keeping up to date with the latest in data science and retail analytics and disseminating the knowledge among colleagues

You will need
- 2 - 4 years' experience in data science application in Retail or CPG
- Preferred functional experience: Marketing, Supply Chain, Customer, Merchandising, Operations, Finance or Digital
- Applied Math: Applied Statistics, Design of Experiments, Regression, Decision Trees, Forecasting, Optimization algorithms, Clustering, NLP
- Tech: SQL, Hadoop, Spark, Python, Tableau, MS Excel, MS PowerPoint, GitHub
- Business: Basic understanding of the Retail domain
- Soft Skills: Analytical thinking & problem solving, storyboarding, articulate communication

What's in it for you?
At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on the current industry practices, for all the work they put into serving our customers, communities and planet a little better every day. Our Tesco Rewards framework consists of pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco is determined by four principles - simple, fair, competitive, and sustainable.
- Salary - Your fixed pay is the guaranteed pay as per your contract of employment.
- Performance Bonus - Opportunity to earn additional compensation bonus based on performance, paid annually.
- Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company's policy.
- Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF.
- Health is Wealth - Tesco promotes programmes that support a culture of health and wellness including insurance for colleagues and their family. Our medical insurance provides coverage for dependents including parents or in-laws.
- Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents.
- Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request.
- Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan.
- Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle.

About Us
Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues.

Tesco Business Solutions: Established in 2017, Tesco Business Solutions (TBS) has evolved from a single-entity traditional shared services in Bengaluru, India (from 2004) to a global, purpose-driven, solutions-focused organisation. TBS is committed to driving scale at speed and delivering value to the Tesco Group through the power of decision science. With over 4,400 highly skilled colleagues globally, TBS supports markets and business units across four locations in the UK, India, Hungary, and the Republic of Ireland. The organisation underpins everything that the Tesco Group does, bringing innovation, a solutions mindset, and agility to its operations and support functions, building winning partnerships across the business. TBS's focus is on adding value and creating impactful outcomes that shape the future of the business. TBS creates a sustainable competitive advantage for the Tesco Group by becoming the partner of choice for talent, transformation, and value creation.
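A toy illustration of the "Forecasting" item listed above (not Tesco material): a Holt-Winters weekly demand forecast on synthetic data, assuming the statsmodels package is available.

```python
# Illustrative only: fit a seasonal Holt-Winters model to a synthetic weekly
# sales series and forecast the next eight weeks.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

weeks = pd.date_range("2022-01-02", periods=156, freq="W")
seasonal = 100 + 20 * np.sin(2 * np.pi * np.arange(156) / 52)      # yearly seasonality
sales = pd.Series(seasonal + np.random.default_rng(0).normal(0, 5, 156), index=weeks)

model = ExponentialSmoothing(sales, trend="add", seasonal="add",
                             seasonal_periods=52).fit()
print(model.forecast(8).round(1))    # expected demand for the next 8 weeks
```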

Posted 3 weeks ago

Apply

0 years

5 - 9 Lacs

Bengaluru

Remote

Tasks
At Daimler Truck, we change today's transportation and create real impact together. We take responsibility around the globe and work together on making our vision become reality: Leading Sustainable Transportation. As one global team, we drive our progress and success together; everyone at Daimler Truck makes the difference. Together, we want to achieve sustainable transportation, reduce our carbon footprint, increase safety on and off the track, develop smarter technology and attractive financial solutions. All essential, to fulfill our purpose: for all who keep the world moving. Become part of our global team: you make the difference - YOU MAKE US.

This team is the core of the Data & AI department for Daimler Truck and helps develop world-class AI platforms in various clouds (AWS, Azure) to support building analytics solutions, dashboards, ML models and Gen AI solutions across the globe.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Snowflake and other cloud-based tools.
- Implement data ingestion, transformation, and integration processes from various sources (e.g., APIs, flat files, databases).
- Optimize Snowflake performance through clustering, partitioning, and query tuning.
- Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements.
- Ensure data quality, integrity, and security across all data pipelines and storage.
- Develop and maintain documentation related to data architecture, processes, and best practices.
- Monitor and troubleshoot data pipeline issues and ensure timely resolution.
- Working experience with tools such as the medallion architecture, Matillion, DBT models, and SNP Glu is highly recommended.

WHAT WE OFFER YOU
Note: Fixed benefits that apply to Daimler Truck, Daimler Buses, and Daimler Truck Financial Services. Among other things, the following benefits await you with us:
- Attractive compensation package
- Company pension plan
- Remote working
- Flexible working models that adapt to individual life phases
- Health offers
- Individual development opportunities through our own Learning Academy as well as free access to LinkedIn Learning
- Plus two individual benefits

Job number: 3690
Publication period: 06/23/2025 - 06/24/2025
Location: Bangalore
Organization: Daimler Truck Innovation Center India Private Limited
Job Category: Finance/Controlling
Working hours: Full time

Benefits: Inhouse Doctor, Good public transport, Parking, Canteen-Cafeteria, Barrier-free workplace

To Location: Bengaluru, Daimler Truck Innovation Center India Private Limited
Contact: Sandip Kumar Mohanty
Email: sandip.mohanty@daimlertruck.com
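Purely as an illustration of the Snowflake clustering and query-tuning duties described above (not Daimler Truck code): a sketch assuming the snowflake-connector-python package and hypothetical account, credentials, and table names.

```python
# Illustrative sketch only: define a clustering key and inspect clustering
# quality on a hypothetical Snowflake table.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",          # hypothetical
    user="etl_user",               # hypothetical
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)
try:
    cur = conn.cursor()
    # Cluster by the columns large range scans filter on, so partitions prune.
    cur.execute("ALTER TABLE telemetry_events CLUSTER BY (event_date, vehicle_id)")
    # Report how well the table is clustered on that key.
    cur.execute(
        "SELECT SYSTEM$CLUSTERING_INFORMATION('telemetry_events', '(event_date, vehicle_id)')")
    print(cur.fetchone()[0])
finally:
    conn.close()
```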

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Vadodara

On-site

Skills
We are looking for a candidate with experience managing and maintaining our organization's database systems, ensuring their optimal performance, security, and reliability. Key responsibilities include database deployment and management, backup and disaster recovery planning, performance tuning, and collaborating with developers to design efficient database structures. Proficiency in SQL, experience with major database management systems like Oracle, SQL Server, or MySQL, and knowledge of cloud platforms such as AWS or Azure will be an added advantage.

Job Location: Vadodara
Office Hours: 09:30 am to 7 pm
Experience: 6+ Years

Roles and Responsibilities:
- Design, implement, and maintain database systems.
- Optimize and tune database performance.
- Develop database schemas, tables, and other objects.
- Perform database backups and restores.
- Implement data replication and clustering for high availability.
- Monitor database performance and suggest improvements.
- Implement database security measures including user roles, permissions, and encryption.
- Ensure compliance with data privacy regulations and standards.
- Perform regular audits and maintain security logs.
- Diagnose and resolve database issues, such as performance degradation or connectivity problems.
- Provide support for database-related queries and troubleshooting.
- Apply patches, updates, and upgrades to database systems.
- Conduct database health checks and routine maintenance to ensure peak performance.
- Coordinate with developers and system administrators for database-related issues.
- Implement and test disaster recovery and backup strategies.
- Ensure minimal downtime during system upgrades and maintenance.
- Work closely with application developers to optimize database-related queries and code.
- Document database structures, procedures, and policies for team members and future reference.

Requirements
Education/Qualification (and any certification): A bachelor's degree in IT, computer science or a related field.
- Proven experience as a DBA or in a similar database management role.
- Strong knowledge of database management systems (e.g., SQL Server, Oracle, MySQL, PostgreSQL, etc.).
- Experience with performance tuning, database security, and backup strategies.
- Familiarity with cloud databases (e.g., AWS RDS, Azure SQL Database) is a plus.
- Strong SQL and database scripting skills.
- Proficiency in database administration tasks such as installation, backup, recovery, performance tuning, and user management.
- Experience with database monitoring tools and utilities.
- Ability to troubleshoot and resolve database-related issues effectively.
- Knowledge of database replication, clustering, and high availability setups.
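A hedged illustration of the routine health checks described above, not the employer's tooling: it assumes pyodbc, a hypothetical DSN, and a SQL Server target.

```python
# Illustrative only: flag fragmented indexes that may need a rebuild.
import pyodbc

conn = pyodbc.connect("DSN=ProdSqlServer;Trusted_Connection=yes")   # hypothetical DSN
cur = conn.cursor()
cur.execute("""
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name                      AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 30 AND i.name IS NOT NULL
    ORDER BY ips.avg_fragmentation_in_percent DESC
""")
for table_name, index_name, frag in cur.fetchall():
    print(f"{table_name}.{index_name}: {frag:.1f}% fragmented -> consider REBUILD")
conn.close()
```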

Posted 3 weeks ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Reference #: 312700BR
Job Type: Full Time

Your role
This is a role for an experienced SRE proficient in Windows infrastructure and platforms, with sound technical skills and hands-on experience in maintaining and improving all aspects of the Windows server environment for all flavors of Windows from 2012 through Windows 2022, and who can lead a team in an enterprise environment. The infrastructure environment is a hybrid cloud environment that includes an interdependent combination of Linux and Windows servers, Windows desktops, and high-performance data and storage networks.

You will:
- work with the SRE and Infrastructure engineering teams to improve the firm's hybrid cloud infrastructure
- be involved in engineering project work and operational support to increase the overall supportability and reliability of the firm's enterprise technology environment
- primarily focus on Windows Server, home-grown orchestration and automation tools powered by PowerShell, Python and Azure Pipelines, virtualization and enterprise storage technologies, with significant opportunities to work with the wider set of technology platforms in use at the firm
- understand business priorities and adequately prioritize work accordingly to meet project objectives
- drive for improvements and implement them at regional or global scale
- communicate and collaborate with other internal partners for planning and coordination of implementation to ensure work is completed in a timely manner
- be expected to drive the execution

Your team
The hosting multi-compute team is a global organization within the Technology Services team providing technology infrastructure platforms to underpin our partners' business applications. You will be part of the Windows hosting team, which has a global footprint and works with clients and wider team members spread across the world. Together we drive consistency across business divisions and optimize operations and support costs. You'll be working with the global Windows SRE team. As an SRE Lead, you will be instrumental in maintaining and supporting the hybrid cloud Windows Server estate across the globe and will continuously look for ways to improve the management of these servers through automation.

Your expertise
- degree with 12+ years of experience in supporting and managing large-scale Windows Server deployments (2022, 2019, 2016, 2012)
- good knowledge of Windows Server, Azure integration services, and related technologies (e.g., Active Directory, RBAC, Azure Policy, failover clustering) is required, as is a working knowledge of networking concepts
- hands-on experience with Windows DHCP server administration and scope migrations, Windows DFS (both standalone and Active Directory), SMB, file servers, and share and NTFS permission structures
- experience supporting virtualization platforms (e.g., Hyper-V and VMware), enterprise storage platforms, database platforms (e.g., Microsoft SQL Server), and integration with cloud service providers (e.g., AWS, Azure, GCP) is also beneficial
- understanding of PowerShell, including an understanding of its underlying design, is required; experience in Python and C# is preferred
- experience creating and maintaining CD pipelines and IaC (Infrastructure as Code) deployments using Azure DevOps or GitLab
- excellent verbal and written communication skills in English
- has worked in an agile setup and is familiar with the Agile Manifesto
- basic understanding of the Banking and Finance industries, with previous job experience a plus; exposure to an enterprise environment, ITIL-based process know-how, issue management and effective escalation management, global user support exposure
- familiar with Agile and SRE practices

About Us
UBS is the world's largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.

Join us
At UBS, we embrace flexible ways of working when the role permits. We offer different working arrangements like part-time, job-sharing and hybrid (office and home) working. Our purpose-led culture and global infrastructure help us connect, collaborate, and work together in agile ways to meet all our business needs. From gaining new experiences in different roles to acquiring fresh knowledge and skills, we know that great work is never done alone. We know that it's our people, with their unique backgrounds, skills, experience levels and interests, who drive our ongoing success. Together we're more than ourselves. Ready to be part of #teamUBS and make an impact?

Disclaimer / Policy Statements
UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
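As a rough illustration of Python-driven Windows automation of the kind this role mixes with PowerShell (not UBS tooling): a sketch assuming the pywinrm package, hypothetical hosts and credentials, and the FailoverClusters PowerShell module on the targets.

```python
# Illustrative sketch only: check failover cluster node state across a small
# fleet of Windows servers over WinRM.
import winrm

HOSTS = ["winclus01.example.net", "winclus02.example.net"]   # hypothetical

for host in HOSTS:
    session = winrm.Session(host, auth=("svc_sre", "***"), transport="ntlm")
    result = session.run_ps("Get-ClusterNode | Select-Object Name, State | ConvertTo-Json")
    if result.status_code == 0:
        print(host, result.std_out.decode().strip())
    else:
        print(host, "check failed:", result.std_err.decode().strip())
```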

Posted 3 weeks ago

Apply

3.0 - 5.0 years

8 - 10 Lacs

Noida

On-site

About Us
Attentive.ai is a leading provider of landscape and property management software powered by cutting-edge Artificial Intelligence (AI). Our software is designed to optimize workflows and help businesses scale up effortlessly in the outdoor services industry. Our Automeasure software caters to landscaping, snow removal, paving maintenance, and facilities maintenance businesses. We are also building Beam AI, an advanced AI engine focused on automating construction take-off and estimation workflows through deep AI. Beam AI is designed to extract intelligence from complex construction drawings, helping teams save time, reduce errors, and increase bid efficiency. Trusted by top US and Canadian sales teams, we are backed by renowned investors such as Sequoia Surge and InfoEdge Ventures.

Position Description:
As a Research Engineer-II, you will be an integral part of our AI research team focused on transforming the construction industry through cutting-edge deep learning, computer vision and NLP technologies. You will contribute to the development of intelligent systems for automated construction take-off and estimation by working with unstructured data such as blueprints, drawings (including SVGs), and PDF documents. In this role, you will support the end-to-end lifecycle of AI-based solutions, from prototyping and experimentation to deployment in production. Your contributions will directly impact the scalability, accuracy, and efficiency of our products.

Roles & Responsibilities:
- Contribute to research and development initiatives focused on Computer Vision, Image Processing, and Deep Learning applied to construction-related data.
- Build and optimize models for extracting insights from documents such as blueprints, scanned PDFs, and SVG files.
- Contribute to the development of multi-modal models that integrate vision with language-based features (NLP/LLMs).
- Follow best data science and machine learning practices, including data-centric development, experiment tracking, model validation, and reproducibility.
- Collaborate with cross-functional teams including software engineers, ML researchers, and product teams to convert research ideas into real-world applications.
- Write clean, scalable, and production-ready code using Python and frameworks like PyTorch, TensorFlow, or HuggingFace.
- Stay updated with the latest research in computer vision and machine learning and evaluate its applicability to construction industry challenges.

Skills & Requirements:
- 3-5 years of experience in applied AI/ML and research with a strong focus on Computer Vision and Deep Learning.
- Solid understanding of image processing, visual document understanding, and feature extraction from visual data.
- Familiarity with SVG graphics, NLP, or LLM-based architectures is a plus.
- Deep understanding of unsupervised learning techniques like clustering, dimensionality reduction, and representation learning.
- Proficiency in Python and ML frameworks such as PyTorch, OpenCV, TensorFlow, and HuggingFace Transformers.
- Hands-on experience with model optimization techniques (e.g., quantization, pruning, knowledge distillation).
- Good to have: experience with version control systems (e.g., Git), project tracking tools (e.g., JIRA), and cloud environments (GCP, AWS, or Azure).
- Familiarity with Docker, Kubernetes, and containerized ML deployment pipelines.
- Strong analytical and problem-solving skills with a passion for building innovative solutions; ability to rapidly prototype and iterate.
- Comfortable working in a fast-paced, agile, startup-like environment with excellent communication and collaboration skills.

Why Work With Us?
- Be part of a visionary team building a first-of-its-kind AI solution for the construction industry.
- Exposure to real-world AI deployment and cutting-edge research in vision and multimodal learning.
- A culture that encourages ownership, innovation, and growth.
- Opportunities for fast learning, mentorship, and career progression.
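An illustrative fragment only (not Attentive.ai code): a tiny OpenCV pass of the kind used to pull geometric elements out of a plan image before any ML step; the input image here is synthetic, and real blueprints need deskewing, denoising, and scale handling.

```python
# Illustrative only: detect simple shapes on a synthetic "plan" and report
# their bounding boxes and areas.
import cv2
import numpy as np

canvas = np.zeros((400, 400), dtype=np.uint8)
cv2.rectangle(canvas, (50, 50), (200, 180), 255, 2)      # pretend room outline
cv2.circle(canvas, (300, 300), 40, 255, 2)               # pretend fixture

contours, _ = cv2.findContours(canvas, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    print(f"element at ({x},{y}) size {w}x{h}, area {cv2.contourArea(c):.0f}")
```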

Posted 3 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description: Responsibilities as Tableau Administrator
- Configure and maintain the Tableau Server software layer.
- System administration (includes site creation, server maintenance/upgrades/patches).
- Change management including software and hardware upgrades, patches.
- Monitor server activity/usage statistics to identify possible performance issues/enhancements.
- Partner with business to design Tableau KPI scorecards and dashboards.
- Performance tuning / server management of the Tableau Server environment (clustering, load balancing).
- Create/manage groups, workbooks and projects, database views, data sources and data connections.
- Proactively communicate with the customer/stakeholders to resolve issues and get work done.
- Set up a governance process around Tableau dashboard processes.
- Create and host Tableau extension API.

Location: This position can be based in any of the following locations: Chennai
Current Guardian Colleagues: Please apply through the internal Jobs Hub in Workday
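For orientation only: a minimal sketch of scripted Tableau Server administration, assuming the tableauserverclient package and a hypothetical server URL and credentials; it is not the employer's tooling.

```python
# Illustrative only: sign in to Tableau Server and list published workbooks,
# a typical starting point for automated content/usage audits.
import tableauserverclient as TSC

tableau_auth = TSC.TableauAuth("admin_user", "***", site_id="")   # hypothetical credentials
server = TSC.Server("https://tableau.example.com", use_server_version=True)

with server.auth.sign_in(tableau_auth):
    workbooks, pagination = server.workbooks.get()
    print(f"{pagination.total_available} workbooks on this site")
    for wb in workbooks:
        print(wb.project_name, "/", wb.name)
```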

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

TCS has been a great pioneer in feeding the fire of young techies like you. We are a global leader in the technology arena and there's nothing that can stop us from growing together.

TCS Hiring for Network Data
Experience Range: 06 to 8 Yrs
Job Locations: Hyderabad & Kolkata

Job Description
1. Experience in designing, supporting and implementing IP-based networks for large enterprises.
2. Strong knowledge and experience with Cisco Nexus switches, ASRs, ISR and 9000 series, 4500, Catalyst switches, etc.
3. Strong knowledge and experience with Cisco routing and switching protocols (e.g., BGP, EIGRP, MPLS, QoS, STP, VTP, etc.).
4. Configuring Cisco Wireless Access Points & controllers and Cisco ISE.
5. Hands-on experience with Cisco firewalls and clustering.
6. Experience with analyzing traffic and utilizing packet sniffer utilities (e.g., Wireshark, Netscout).
7. Familiarity with management tools such as SolarWinds, ITNM, etc.
8. Expertise in LAN and WAN technologies to provide advanced troubleshooting and escalation support.
9. Strong documentation skills and ability to create high-level and low-level designs that meet business requirements.
10. Switching and routing on Cisco products.
11. Configuring and troubleshooting client-to-site and site-to-site VPNs.
12. SDN technologies like ACI, NSX, SD-WAN.

Cisco Network Engineer Responsibilities:
- Analysing existing hardware, software, and networking systems.
- Creating and implementing scalable Cisco networks according to client specifications.
- Testing and troubleshooting installed Cisco systems.
- Resolving technical issues with networks, hardware, and software.
- Performing speed and security tests on installed networks.
- Applying network security upgrades.
- Upgrading/replacing hardware and software systems when required.
- Creating and presenting networking reports.
- Training end-users on installed Cisco networking products.

Cisco Network Engineer Requirements:
- Bachelor's degree in computer science, networking administration, information technology, or a similar field.
- CCNA, CCNP certification.
- At least 5 years' experience as a network engineer.
- Detailed knowledge of Cisco networking systems.
- Experience with storage engineering, wide-area networking, and network virtualization.
- Advanced troubleshooting skills.
- Ability to identify, deploy, and manage complex networking systems.
- Good communication and interpersonal skills.
- Experience with end-user training.

Minimum Qualification: 15 years of full-time education

Disclaimer: We encourage you to register at www.tcs.com/careers for exploring an exciting career with TCS. At the time of your application to TCS, the personal data contained in your application and resume will be collected by TCS and processed for the purpose of TCS's recruitment-related activities. Your personal data will be retained by TCS for as long as TCS determines it is necessary to evaluate your application for employment as per our retention policy. You have the right to request TCS for temporary/permanent exclusion of your candidature from any recruitment-related communication. For any such request you may write to careers.tcs.com.
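Illustrative only (not TCS tooling): a small automation sketch assuming the netmiko package and hypothetical device details, showing the kind of routine Cisco state check the role describes.

```python
# Illustrative only: pull BGP and interface state from a Cisco IOS device.
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",
    "host": "10.0.0.1",        # hypothetical management IP
    "username": "netops",
    "password": "***",
}

conn = ConnectHandler(**device)
try:
    print(conn.send_command("show ip bgp summary"))
    print(conn.send_command("show interface status"))
finally:
    conn.disconnect()
```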

Posted 3 weeks ago

Apply

6.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

JD – AI/ML Engineer
This is a full-time position with D Square Consulting Services Pvt Ltd.
Required Experience: 6-7 years
Location: Bangalore
Work mode: Onsite
Candidates who can join within 30 days are preferred.

Job Summary
The AI/ML Engineer will lead the end-to-end design, development, and deployment of advanced, value-driven AI/ML solutions for digital marketing and analytics; drive innovation; leverage cutting-edge techniques; and standardize multi-cloud model deployment through collaboration, delivering profound data insights.

Required Qualifications
- Bachelor's degree (or higher preferred) in Computer Science, Data Science, ML, Mathematics, Statistics, Economics, or related fields with emphasis on quantitative methods.
- 6-7 years' experience in software engineering with deep, hands-on expertise in the full lifecycle of ML model development, deployment, and operationalization.
- Demonstrated ability to write highly robust, efficient, and scalable Python, Java, Spark, and SQL code, adhering to industry best practices.
- Extensive experience with major ML frameworks (TensorFlow, PyTorch, Scikit-learn) and advanced deep learning libraries, including optimization.
- Strong, in-depth understanding of diverse ML algorithms (e.g., advanced regression, classification, clustering, RNNs, CNNs, transformers, time series, reinforcement learning), sophisticated data structures, and enterprise software design.
- Significant experience deploying and managing AI/ML models on major cloud platforms (Azure, AWS, GCP).
- Proven experience with LLMs and generative AI (fine-tuning, prompt engineering, deployment) is highly desirable.
- Exceptional problem-solving skills in a fast-paced, collaborative remote environment.
- Excellent communication and interpersonal skills, with the ability to effectively collaborate and influence diverse global teams remotely.
- Experience with classification, time series forecasting, customer lifetime value models, LLMs, and generative AI, preferably from the Retail, e-commerce, or CPG industry.

Responsibilities
- Lead cross-functional teams to deliver and scale complex AI/ML solutions (DL, NLP, optimization).
- Architect, design, develop, train, and evaluate high-performance, production-ready AI/ML models, ensuring scalability and robustness.
- Drive implementation, deployment, and maintenance of AI/ML solutions, optimizing inference and documenting processes.
- Oversee data exploration, advanced preprocessing, complex feature engineering, and robust data pipeline development.
- Establish strategies for continuous testing, validation, and monitoring of deployed AI/ML models to ensure accuracy and reliability.
- Partner with senior stakeholders to translate business requirements into scalable AI solutions that deliver measurable value.
- Act as a primary SME on AI/ML model development, MLOps, and deployment, influencing global data science platforms.
- Continuously research and champion the adoption of the latest AI/ML technologies, algorithms, and best practices to maximize business value.
- Foster innovative thinking and continuous improvement, seeking superior ways of working for teams and partners.
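A back-of-envelope illustration of the customer lifetime value modelling mentioned above (not D Square code): a pandas sketch on synthetic orders with an assumed retention horizon; production CLV models (BG/NBD, deep learning) are far richer.

```python
# Illustrative only: CLV ≈ average order value × purchase frequency × lifetime.
import pandas as pd

orders = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 2, 3],
    "order_value": [40.0, 55.0, 35.0, 120.0, 80.0, 20.0],
    "order_date": pd.to_datetime(
        ["2024-01-05", "2024-03-02", "2024-06-20", "2024-02-10", "2024-05-15", "2024-04-01"]),
})

summary = orders.groupby("customer_id").agg(
    avg_order_value=("order_value", "mean"),
    n_orders=("order_value", "size"),
    first=("order_date", "min"),
    last=("order_date", "max"),
)
lifespan_years = (summary["last"] - summary["first"]).dt.days.clip(lower=30) / 365.25
purchase_freq_per_year = summary["n_orders"] / lifespan_years
expected_lifetime_years = 3                     # assumed retention horizon
summary["clv_estimate"] = (summary["avg_order_value"]
                           * purchase_freq_per_year
                           * expected_lifetime_years)
print(summary[["avg_order_value", "n_orders", "clv_estimate"]].round(2))
```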

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Are you ready to make an impact at DTCC? Do you want to work on innovative projects, collaborate with a dynamic and supportive team, and receive investment in your professional development? At DTCC, we are at the forefront of innovation in the financial markets. We are committed to helping our employees grow and succeed. We believe that you have the skills and drive to make a real impact. We foster a growing internal community and are committed to creating a workplace that looks like the world that we serve.

Pay and Benefits:
- Competitive compensation, including base pay and annual incentive
- Comprehensive health and life insurance and well-being benefits, based on location
- Pension / Retirement benefits
- Paid Time Off and Personal/Family Care, and other leaves of absence when needed to support your physical, financial, and emotional well-being
- DTCC offers a flexible/hybrid model of 3 days onsite and 2 days remote (onsite Tuesdays, Wednesdays and a third day unique to each team or employee)

The Impact you will have in this role:
The Database Administrator will provide database administrative support for all DTCC environments including Development, QA, client test, and our critical high-availability production environment and DR data centers. The role requires extensive knowledge of all aspects of MSSQL database administration and the ability to support other database platforms, including both Aurora PostgreSQL and Oracle. This DBA will have a high level of impact in the generation of new processes and solutions, while operating under established procedures and processes in a critically important Financial Services infrastructure environment. The ideal candidate will ensure optimal performance, data security and reliability of our database infrastructure.

What You'll Do:
- Install, configure and maintain Oracle server instances.
- Implement and handle high-availability solutions including Always On availability groups and clustering.
- Support development, QA, PSE and Production environments using the ServiceNow ticketing system.
- Review production performance reports for variances from normal operation.
- Optimize SQL queries and indexes for better efficiency; analyze queries and recommend tuning strategies.
- Maintain database performance by calculating optimum values for database parameters, implementing new releases, completing maintenance requirements, and evaluating computer operating systems and hardware products.
- Perform database backup and recovery using a strategy built on tools like SQL Server backup, log shipping and other technologies.
- Provide 3rd-level support for DTCC critical production environments.
- Participate in root cause analysis for database issues.
- Prepare users by conducting training, providing information, and resolving problems.
- Maintain quality service by establishing and enforcing organizational standards.
- Set up and maintain database replication and clustering solutions.
- Maintain professional and technical knowledge by attending educational workshops, reviewing professional publications, establishing personal networks, benchmarking innovative practices, and participating in professional societies.
- Share responsibility for off-hour support.
- Maintain documentation on database configurations and procedures.
- Provide leadership and direction for the architecture, design, maintenance and L1, L2 and L3 level support of a 7x24 global infrastructure.

Qualifications:
- Bachelor's degree or equivalent experience

Talents Needed for Success:
- Strong Oracle experience with 19c, 21c and 22c
- A minimum of 4+ years of proven relevant experience in Oracle
- Solid experience in Oracle database administration
- Strong knowledge in Python and Angular
- Working knowledge of Oracle's GoldenGate replication technology
- Strong performance tuning and optimization skills in MSSQL, PostgreSQL and Oracle databases
- Good experience in High Availability and Disaster Recovery (HA/DR) options for SQL Server
- Good experience in backup and restore processes
- Proficiency in PowerShell scripting for automation
- Good interpersonal skills and ability to coordinate with various stakeholders
- Follow standard processes on organizational change / incident management / problem management
- Demonstrated ability to solve complex systems and database environment issues

Actual salary is determined based on the role, location, individual experience, skills, and other considerations.

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
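For illustration only (not DTCC code): a small monitoring query of the kind an Oracle DBA automates, assuming the python-oracledb driver, hypothetical connection details, and read access to the DBA views.

```python
# Illustrative only: report tablespace usage so capacity problems are caught early.
import oracledb

conn = oracledb.connect(user="monitor", password="***",
                        dsn="dbhost.example.com/ORCLPDB1")   # hypothetical DSN
cur = conn.cursor()
cur.execute("""
    SELECT df.tablespace_name,
           ROUND(df.total_mb, 1)                      AS total_mb,
           ROUND(df.total_mb - NVL(fs.free_mb, 0), 1) AS used_mb
    FROM  (SELECT tablespace_name, SUM(bytes)/1024/1024 AS total_mb
           FROM dba_data_files GROUP BY tablespace_name) df
    LEFT JOIN (SELECT tablespace_name, SUM(bytes)/1024/1024 AS free_mb
               FROM dba_free_space GROUP BY tablespace_name) fs
      ON fs.tablespace_name = df.tablespace_name
    ORDER BY used_mb DESC
""")
for name, total_mb, used_mb in cur:
    print(f"{name}: {used_mb}/{total_mb} MB used")
conn.close()
```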

Posted 3 weeks ago

Apply

5.0 years

8 - 45 Lacs

Hyderabad, Telangana, India

On-site

Industry & Sector: Enterprise IT Infrastructure & Cloud Services in India. A fast-growing managed services provider delivers secure, high-availability virtualization platforms for Fortune 500 and digital-native businesses.

About The Opportunity
As a VMware Platform Engineer, you will design, deploy, and operate mission-critical virtualization estates on-site at our client facilities, ensuring performance, security, and scalability across private and hybrid clouds.

Role & Responsibilities
- Engineer and harden vSphere, ESXi, and vSAN clusters to deliver 99.99% uptime.
- Automate build, patching, and configuration tasks using PowerCLI, Ansible, and REST APIs.
- Monitor capacity, performance, and logs via vRealize Operations and generate improvement plans.
- Lead migrations, upgrades, and disaster-recovery drills, documenting runbooks and rollback paths.
- Collaborate with network, storage, and security teams to enforce compliance and zero-trust policies.
- Provide L3 support, root-cause analysis, and mentoring to junior administrators.

Skills & Qualifications
Must-Have
- 5+ years hands-on with VMware vSphere 6.x/7.x in production.
- Expertise in ESXi host deployment, clustering, vMotion, DRS, and HA.
- Strong scripting with PowerCLI or Python for automation.
- Solid grasp of Linux server administration and TCP/IP networking.
- Experience with backup, replication, and DR tooling (Veeam, SRM, etc.).
Preferred
- Exposure to vRealize Suite, NSX-T, or vCloud Director.
- Knowledge of container platforms (Tanzu, Kubernetes) and CI/CD pipelines.
- VMware Certified Professional (VCP-DCV) or higher.

Benefits & Culture
- On-site, enterprise-scale environments offering complex engineering challenges.
- Continuous learning budget for VMware and cloud certifications.
- Collaborative, performance-driven culture with clear growth paths.

Workplace Type: On-Site | Location: India

Skills: automation, REST APIs, VMware, VMware vSphere, Ansible, backup and replication tools (Veeam, SRM), vSAN, Linux server administration, disaster recovery, PowerCLI, TCP/IP networking, scripting, vRealize Operations, VMware vSphere 6.x/7.x, platform engineers (VMware), ESXi
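Illustrative only (not the provider's automation): an inventory and health sweep against vCenter, assuming the pyVmomi SDK and hypothetical connection details; PowerCLI equivalents exist for the same task.

```python
# Illustrative only: list ESXi hosts with their connection state and overall status.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab only; validate certs in production
si = SmartConnect(host="vcenter.example.com", user="automation@vsphere.local",
                  pwd="***", sslContext=ctx)       # hypothetical vCenter and account
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True)
    for host in view.view:
        print(host.name, host.runtime.connectionState, host.overallStatus)
    view.Destroy()
finally:
    Disconnect(si)
```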

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Kochi, Kerala, India

On-site

▪ Work closely with internal BUs and business partners (clients) to understand their business problems and translate them into data science problems.
▪ Design intelligent data science solutions that deliver incremental value to the end stakeholders.
▪ Work closely with the data engineering team in identifying relevant data and pre-processing the data for suitable models.
▪ Develop the designed solutions into statistical machine learning models and AI models using suitable tools and frameworks.
▪ Work closely with the business intelligence team to build BI systems and visualizations that deliver the insights of the underlying data science model in the most intuitive ways possible.
▪ Work closely with the application team to deliver AI/ML solutions as modular offerings.

Skills/Specification
▪ Master's/Bachelor's in Computer Science, Statistics, or Economics.
▪ At least 6 years of experience working in the data science field; passionate about numbers and quantitative problems.
▪ Deep understanding of machine learning models and algorithms.
▪ Experience in analysing complex business problems, translating them into data science problems and modelling data science solutions for the same.
▪ Understanding of and experience in one or more of the following machine learning areas: Regression, Time Series, Logistic Regression, Naive Bayes, kNN, SVM, Decision Trees, Random Forest, k-Means Clustering, etc.; NLP and Text Mining; LLMs (GPTs) - OpenAI, Azure OpenAI, AWS Bedrock, Gemini, Llama, Deepseek, etc. (knowledge of fine-tuning / custom-training GPTs would be an add-on advantage); Deep Learning and Reinforcement Learning algorithms.
▪ Understanding of and experience in one or more of the machine learning frameworks - TensorFlow, Caffe, Torch, etc.
▪ Understanding of and experience in building machine learning models using various packages in Python.
▪ Knowledge and experience of SQL, relational databases, NoSQL databases and data warehouse concepts.
▪ Understanding of AWS/Azure cloud architecture.
▪ Understanding of the deployment architectures of AI/ML models (Flask, Azure Functions, AWS Lambda).
▪ Knowledge of any BI and visualization tools is an add-on (Tableau/PowerBI/Qlik/Plotly, etc.).
▪ Adhere to the Information Security Management policies and procedures.

Soft Skills Required
▪ Must be a good team player with good communication skills.
▪ Must have good presentation skills.
▪ Must be a pro-active problem solver and a leader by self.
▪ Manage and nurture a team of data scientists.
▪ Desire for numbers and patterns.
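For orientation only: a minimal text-mining sketch touching two items from the algorithm list above (TF-IDF text features and Naive Bayes), on a made-up corpus with scikit-learn; it is not material from the posting.

```python
# Illustrative only: classify short texts into routing categories.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["invoice overdue payment reminder", "schedule a product demo call",
         "payment failed card declined", "demo booked for next week"]
labels = ["billing", "sales", "billing", "sales"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(texts, labels)
print(model.predict(["card payment did not go through"]))   # expected: ['billing']
```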

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Arctera
Arctera keeps the world's IT systems working. We can trust that our credit cards will work at the store, that power will be routed to our homes and that factories will produce our medications because those companies themselves trust Arctera. Arctera is behind the scenes making sure that many of the biggest organizations in the world – and many of the smallest too – can face down ransomware attacks, natural disasters, and compliance challenges without missing a beat. We do this through the power of data and our flagship products, Insight, InfoScale and Backup Exec. Illuminating data also helps our customers maintain personal privacy, reduce the environmental impact of data storage, and defend against illegal or immoral use of information. It's a task that continues to get more complex as data volumes surge. Every day, the world produces more data than it ever has before. And global digital transformation – and the arrival of the age of AI – has set the course for a new explosion in data creation. Joining the Arctera team, you'll be part of a group innovating to harness the opportunity of the latest technologies to protect the world's critical infrastructure and to keep all our data safe.

This position is with the InfoScale (Data Resiliency) offering of Arctera, a software-defined storage and availability solution that helps organizations manage information resiliency and protection across physical, virtual and cloud environments. It provides high availability and disaster recovery for mission-critical applications.

Responsibilities
We are looking for candidates who have experience with storage and cloud technology for data resiliency solutions. You should also have an eye for great design and a knack for pushing projects from conception all the way to customers. In this role, you will design and develop data protection solutions using the latest technologies. You will own product quality and the overall customer experience. You will also propose technical solutions to product/service problems while refining, designing and implementing software components in line with technical requirements. The Sr. Software Engineer will productively work in a highly collaborative agile team, coach junior team members, and actively participate in knowledge sharing, all while communicating across teams in a multinational environment.

Minimum Required Skills Include
- MS/BS in Computer Science/Computer Engineering or related field of study with 5+ years of relevant experience
- Full understanding of storage and cloud technologies, emerging standards and engineering best practices
- Strong communication skills, both oral and written
- Hands-on experience in developing enterprise products with any of the programming languages C/C++/Python/Go/RESTful APIs
- Hands-on experience in developing Kubernetes custom controllers/operators and working knowledge of k8s orchestration platforms - OpenShift/EKS/AKS
- Designs, develops and maintains high-quality code for product components, focusing on implementation
- Solid knowledge of algorithms and design patterns
- Solid knowledge of clustering (HA-DR) concepts and systems programming
- Strong focus on knowledge and application of industry-standard SDLC processes, including design, coding, debugging, and testing practices for large enterprise-grade products, is an absolute must

Desired Skills Include
- Knowledge of operating systems: Linux/UNIX; object-oriented languages; Agile processes
- Experience in CNI/CSI plugin/driver development
- Experience with DevOps and tools related to container technology (Prometheus, EFK, Helm, Red Hat Registry, Tiller, etc.)
- Experience in Agile development methodologies including unit testing and TDD (test-driven development)
- Extra credit for open-source contributions: active participation in CNCF SIGs, upstream contributions to K8s
- Ability to communicate and collaborate among cross-functional teams in a multinational environment
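A minimal sketch of the event loop behind the Kubernetes custom controllers/operators mentioned above, assuming the official kubernetes Python client and a reachable cluster context; it is not Arctera code, and real operators add reconciliation logic and CRD handling.

```python
# Illustrative only: watch Pod events, the basic loop a controller builds on.
from kubernetes import client, config, watch

config.load_kube_config()                 # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

w = watch.Watch()
for event in w.stream(v1.list_namespaced_pod, namespace="default", timeout_seconds=30):
    pod = event["object"]
    print(f"{event['type']:<8} {pod.metadata.name:<40} phase={pod.status.phase}")
```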

Posted 3 weeks ago

Apply

12.0 years

0 Lacs

Pune, Maharashtra, India

Remote

TCS has been a great pioneer in feeding the fire of young techies like you. We are a global leader in the technology arena and there's nothing that can stop us from growing together.

What we are looking for
Role: MQ Admin
Experience Range: 8 - 12 Years
Location: Pune/Bengaluru

Must Have:
1) Administer WebSphere MQ v7.x, 8.x, 9.x.
2) Build non-prod and prod queue managers as per requirements from clients.
3) Ability to run MQSC commands and perform remote MQ administration.
4) Very good knowledge of distributed queuing and clustering.
5) Good knowledge of IBM MQ utilities like qload, runmqdlq, saveqmgr and tools like MQ Explorer, RFHUtil.
6) Support the clients in a 24x7 model.
7) Linux and Solaris hands-on knowledge is expected.
8) Knowledge to handle ITIL components such as IM, PM, CM.
9) Knowledge of SSL certificates is a must.
10) Knowledge of MQ migrations and fix pack installation.
11) Hands-on with client and server architecture.
12) Ability to support application teams with their testing and deployments.

Good to Have:
1) Good communication skills.
2) Able to talk to users, understand their requirements and provide solutions.
3) Work experience on MQ administration: defining MQ managers and objects, troubleshooting all MQ issues.
4) Work experience on several MQ tools like the IBM keyman tool, qload, MO71, RFHUtil, MQ Explorer.
5) Strong decision-making and problem-solving skills.
6) Knowledge of Unix/Perl scripting will be considered an advantage too.
7) Knowledge of networking, firewalls, and Unix-based OS; good production support experience.
8) Knowledge of PUB/SUB, HA, clustering, OpenShift, MQ clients on Fabric.
9) Flexible to work on shifts & provide coverage on weekends.
10) Financial domain knowledge.

Essential:
- L2 support activities on IBM MQ
- Implementing client requests and migrating to the latest environments
- Working closely with L3 to implement their ideas and tasks
- Maintaining the prod environment with the latest iFixes and fix packs

Minimum Qualification:
- 15 years of full-time education
- Minimum percentile of 50% in 10th, 12th, UG & PG (if applicable)
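Illustrative only (not TCS tooling): a small Python wrapper around the standard runmqsc utility that pulls queue depths, the kind of check an MQ administrator scripts for monitoring; the queue manager name is hypothetical.

```python
# Illustrative only: feed an MQSC DISPLAY command to runmqsc and print depths.
import subprocess

QMGR = "QM_DEV01"                                   # hypothetical queue manager name
mqsc = "DISPLAY QLOCAL(*) CURDEPTH MAXDEPTH\n"

result = subprocess.run(["runmqsc", QMGR], input=mqsc,
                        capture_output=True, text=True, check=False)
for line in result.stdout.splitlines():
    if "CURDEPTH(" in line:
        print(line.strip())
```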

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Are you ready to make an impact at DTCC? Do you want to work on innovative projects, collaborate with a dynamic and supportive team, and receive investment in your professional development? At DTCC, we are at the forefront of innovation in the financial markets. We are committed to helping our employees grow and succeed. We believe that you have the skills and drive to make a real impact. We foster a growing internal community and are committed to creating a workplace that looks like the world that we serve.
Pay and Benefits:
- Competitive compensation, including base pay and annual incentive
- Comprehensive health and life insurance and well-being benefits, based on location
- Pension / Retirement benefits
- Paid Time Off and Personal/Family Care, and other leaves of absence when needed to support your physical, financial, and emotional well-being
DTCC offers a flexible/hybrid model of 3 days onsite and 2 days remote (onsite Tuesdays, Wednesdays and a third day unique to each team or employee).
The Impact you will have in this role:
The Database Administrator will provide database administrative support for all DTCC environments, including development, QA, client test, and our critical high-availability production environment and DR data centers. The role requires extensive knowledge of all aspects of MSSQL database administration and the ability to support other database platforms, including both Aurora PostgreSQL and Oracle. This DBA will have a high level of impact in the generation of new processes and solutions, while operating under established procedures and processes in a critically important financial services infrastructure environment. The ideal candidate will ensure optimal performance, data security and reliability of our database infrastructure.
What You'll Do:
- Install, configure and maintain SQL Server instances (on-prem and cloud based)
- Implement and manage high-availability solutions, including Always On availability groups and clustering
- Support development, QA, PSE and production environments using the ServiceNow ticketing system
- Review production performance reports for variances from normal operation
- Optimize SQL queries and indexes for better efficiency; analyze queries and recommend tuning strategies
- Maintain database performance by calculating optimum values for database parameters, implementing new releases, completing maintenance requirements, and evaluating operating systems and hardware products
- Implement database backup and recovery strategies using tools like SQL Server backup, log shipping and other technologies
- Provide 3rd-level support for DTCC critical production environments and participate in root cause analysis for database issues
- Prepare users by conducting training, providing information and resolving problems
- Maintain quality service by establishing and enforcing organizational standards
- Set up and maintain database replication and clustering solutions
- Maintain professional and technical knowledge by attending educational workshops, reviewing professional publications, establishing personal networks, benchmarking innovative practices and participating in professional societies
- Share responsibility for off-hour support
- Maintain documentation on database configurations and procedures
- Provide leadership and direction for the architecture, design, maintenance and L1, L2 and L3 level support of a 24x7 global infrastructure
Qualifications:
- Bachelor's degree or equivalent experience
Talents Needed for Success:
- Strong Oracle experience with 19c, 21c and 22c
- A minimum of 4+ years of proven relevant experience in SQL
- Solid understanding of MSSQL Server and Aurora PostgreSQL databases
- Strong knowledge of Python and Angular
- Working knowledge of Oracle’s GoldenGate replication technology
- Demonstrated strong performance tuning and optimization skills in MSSQL, PostgreSQL and Oracle databases
- Good experience with high availability and disaster recovery (HA/DR) options for SQL Server
- Good experience with backup and restore processes
- Proficiency in PowerShell scripting for automation
- Good interpersonal skills and the ability to coordinate with various stakeholders
- Ability to follow standard processes for organizational change, incident management and problem management
- Demonstrated ability to solve complex systems and database environment issues
Actual salary is determined based on the role, location, individual experience, skills, and other considerations. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
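The query and index tuning duties above are easiest to see with a concrete plan comparison. The sketch below uses Python's built-in sqlite3 module purely as a portable stand-in: the same before/after-index idea applies to MSSQL, Aurora PostgreSQL and Oracle through their own plan tools (execution plans, EXPLAIN, etc.). The table and column names are invented for illustration.

```python
# Index-driven query tuning illustrated with the stdlib sqlite3 module as a stand-in.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, account TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO trades (account, amount) VALUES (?, ?)",
    [(f"ACC{i % 100}", i * 1.5) for i in range(10_000)],
)

def plan(sql):
    """Return the query plan rows for a statement."""
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()

query = "SELECT SUM(amount) FROM trades WHERE account = 'ACC42'"
print("without index:", plan(query))   # full table scan
conn.execute("CREATE INDEX idx_trades_account ON trades(account)")
print("with index:   ", plan(query))   # search using idx_trades_account
```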

Posted 3 weeks ago

Apply

0 years

0 Lacs

India

Remote

Data Science Intern (Remote | 3 Months)
Company: INLIGHN TECH
Location: Remote
Duration: 3 Months
Stipend (Top Performers): ₹15,000
Perks: Certificate | Letter of Recommendation | Hands-on Training
About INLIGHN TECH
INLIGHN TECH empowers students and recent graduates through hands-on, project-based internships. Our Data Science Internship is designed to enhance your technical skills while solving real-world data challenges, equipping you for the industry.
Role Overview
As a Data Science Intern, you’ll dive deep into real datasets, apply machine learning techniques, and generate insights that support informed decision-making. This internship provides the perfect launchpad for aspiring data professionals.
Key Responsibilities
- Collect, clean, and preprocess data for analysis
- Apply statistical and machine learning techniques
- Build models for classification, regression, and clustering tasks
- Develop dashboards and visualizations using Python or Power BI
- Present actionable insights to internal stakeholders
- Collaborate with a team of peers on live data projects
Requirements
- Currently pursuing or recently completed a degree in Computer Science, Data Science, Mathematics, or a related field
- Solid understanding of Python and key libraries: Pandas, NumPy, Scikit-learn
- Familiarity with machine learning algorithms and SQL
- Strong analytical and problem-solving abilities
- Eagerness to learn and grow in a fast-paced environment
What You’ll Gain
- Real-world experience with industry-standard tools and datasets
- Internship Completion Certificate
- Letter of Recommendation for outstanding contributors
- Potential for full-time opportunities
- A portfolio of completed data science projects
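To make the clustering portion of the responsibilities above concrete, here is a small, self-contained scikit-learn sketch: synthetic data stands in for a cleaned feature matrix, features are scaled, k-means is fit, and the result is sanity-checked with a silhouette score. The dataset, cluster count and feature semantics are assumptions for illustration only.

```python
# K-means clustering sketch with scikit-learn; synthetic data stands in for real features.
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=500, centers=4, n_features=5, random_state=42)
X_scaled = StandardScaler().fit_transform(X)   # scale features before distance-based clustering

kmeans = KMeans(n_clusters=4, n_init=10, random_state=42).fit(X_scaled)
sizes = [int((kmeans.labels_ == k).sum()) for k in range(4)]
print("cluster sizes:", sizes)
print("silhouette score:", round(silhouette_score(X_scaled, kmeans.labels_), 3))
```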

Posted 3 weeks ago

Apply

0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

Cloud Engineering
Experience: 8-15 years
Location: Bangalore
Notice Period: 30 days
Job Profile: Cloud Engineer - Clustering & Pacemaker Expertise
Position Overview:
We are seeking a highly skilled Cloud Engineer with specialized expertise in clustering technologies, particularly Pacemaker, along with strong knowledge of hyperscaler platforms (Azure, AWS, GCP, IBM Cloud). The ideal candidate will have deep experience working with Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) and the ability to design and implement high-availability solutions using Pacemaker clustering across multiple OS environments. This role is integral to ensuring the stability and availability of cloud-hosted applications, particularly for SAP workloads, and includes leading proof of concept (PoC) projects, as well as collaborating closely with development teams to prepare requirements and documentation.
Technical Skills:
- In-depth knowledge and hands-on experience with Pacemaker clustering, including resource agents, fencing, and quorum management
- Strong understanding of cloud platforms (AWS, Azure, GCP, IBM Cloud) with experience in implementing HA solutions
- Experience in deploying and managing SAP systems (S/4HANA, NetWeaver, HANA) in a high-availability clustered environment
- Familiarity with databases like HANA and DB2 (optional) in a clustered and HA setup
- A plus (optional): experience with automation tools (e.g., Ansible)
Soft Skills:
- Excellent problem-solving and troubleshooting skills with a focus on complex high-availability configurations
- Strong ability to collaborate with cross-functional teams, providing technical leadership in clustered and high-availability solution design
- Fluent communication skills (verbal and written) in business English, with experience in preparing and presenting technical documentation and reports
Preferred Skills:
- Advanced certifications in Pacemaker, Red Hat, SLES and SAP systems
- Experience in designing and implementing multi-cloud high-availability architectures
Experience:
- Ideally multiple years of experience in clustering and high-availability solutions, with a deep focus on Pacemaker and associated technologies
- Hands-on experience in cloud environments (AWS, Azure, GCP, IBM Cloud) with a focus on HA architecture and cloud-native services
- Extensive experience with Linux-based operating systems (RHEL, SLES), including basic knowledge of system administration and OS tuning in clustered environments
Key Responsibilities:
1. Clustering & High Availability Expertise:
- Design and implement Pacemaker clustering solutions across multiple operating systems (RHEL, SLES) and platforms to ensure high availability and fault tolerance of critical applications, databases and services
- Contribute to clustering architecture decisions, ensuring optimal scalability and reliability of business-critical workloads
- Troubleshoot and resolve complex issues related to Pacemaker clusters, providing expert guidance on configuration, failover testing and cluster architecture
2. Hyperscaler Expertise:
- Utilize extensive knowledge of hyperscaler platforms (Azure, AWS, GCP, IBM Cloud) to design, deploy, and maintain cloud high-availability solutions
- Implement and manage cloud-based high availability (HA) solutions in multi-cloud environments, leveraging cloud-native services alongside Pacemaker clustering for mission-critical applications
3. Proof of Concepts & Architecture Documentation:
- Lead and support proof of concept (PoC) exercises to evaluate new clustering architectures and solutions, documenting the setup, configurations, and lessons learned
- Produce detailed technical documentation covering clustering configurations, architecture diagrams, and deployment processes, ensuring knowledge transfer, reproducibility and further automation
4. SAP Solution Basis & Integration:
- Leverage expertise in SAP Solution Basis, specifically SAP S/4HANA, NetWeaver, and HANA databases, to implement and optimize clustering solutions for SAP workloads in the cloud
- Ensure that SAP systems are properly integrated with Pacemaker clustering to meet high-availability requirements
5. Additional Database Knowledge (Optional):
- Support DB2 and other database solutions in the context of clustering and high availability to ensure optimal database performance and uptime
- Collaborate with database teams to implement clustered databases that are tightly integrated with the cloud infrastructure and clustering technologies
6. Collaboration & Requirement Preparation:
- Work closely with development and business teams to define infrastructure requirements for clustered solutions
- Participate in detailed requirement analysis sessions with internal and external stakeholders to ensure clustering and HA solutions meet SAP ECS needs and requirements
7. Communication & Reporting:
- Communicate technical concepts clearly, in fluent business English, to both technical and non-technical stakeholders
- Prepare and deliver technical presentations, status reports, and documentation related to clustering solutions and cloud architectures
Qualifications:
- Education: Bachelor’s degree in Computer Science, Information Technology, or a related field. Advanced certifications in cloud platforms (Azure, AWS, GCP) and clustering technologies (Pacemaker) are a plus.
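For readers new to Pacemaker, day-to-day troubleshooting usually starts from the cluster status report. The hedged sketch below simply shells out to the RHEL-style `pcs status` command and applies a rough text heuristic; a real health check would parse structured output and inspect resources, nodes, fencing and quorum explicitly. It assumes the pcs CLI is installed and the caller has permission to query the cluster.

```python
# Rough Pacemaker health peek by shelling out to `pcs status` (RHEL-style CLI).
# Assumes pcs is installed and the caller may query the cluster.
import subprocess

def cluster_status() -> str:
    result = subprocess.run(["pcs", "status"], capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    status = cluster_status()
    print(status)
    # Crude heuristic only; a production check would verify each resource,
    # node, fencing device and quorum state individually.
    if "Stopped" in status or "OFFLINE" in status:
        print("WARNING: some nodes or resources appear to be down")
```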

Posted 3 weeks ago

Apply

9.0 years

0 Lacs

Kerala, India

Remote

Position: AI Architect (permanent only)
Experience: 9+ years (8 years of relevant experience is a must)
Budget: Up to ₹40–45 LPA
Notice Period: Immediate to 45 days
Key Skills: Python, Data Science (AI/ML), SQL
Location: TVM/Kochi/Remote
Job Purpose
Responsible for consulting with the client to understand their AI/ML and analytics needs and for delivering AI/ML applications to the client.
Job Description / Duties & Responsibilities
- Work closely with internal BUs and business partners (clients) to understand their business problems and translate them into data science problems
- Design intelligent data science solutions that deliver incremental value to the end stakeholders
- Work closely with the data engineering team in identifying relevant data and pre-processing the data for suitable models
- Develop the designed solutions into statistical machine learning models and AI models using suitable tools and frameworks
- Work closely with the business intelligence team to build BI systems and visualizations that deliver the insights of the underlying data science model in the most intuitive ways possible
- Work closely with the application team to deliver AI/ML solutions as microservices
Job Specification / Skills and Competencies
- Master’s/Bachelor’s in Computer Science, Statistics or Economics
- At least 6 years of experience working in the data science field, with a passion for numbers and quantitative problems
- Deep understanding of machine learning models and algorithms
- Experience in analysing complex business problems, translating them into data science problems and modelling data science solutions for the same
- Understanding of and experience in one or more of the following machine learning techniques: regression, time series, logistic regression, Naive Bayes, kNN, SVM, decision trees, random forest, k-means clustering, etc.
- NLP, text mining, LLMs (GPTs)
- Deep learning and reinforcement learning algorithms
- Understanding of and experience in one or more machine learning frameworks: TensorFlow, Caffe, Torch, etc.
- Understanding of and experience in building machine learning models using various packages in one or more programming languages: Python / R
- Knowledge of and experience with SQL, relational databases, NoSQL databases and data warehouse concepts
- Understanding of AWS/Azure cloud architecture
- Understanding of the deployment architectures of AI/ML models (Flask, Azure Functions, AWS Lambda)
- Knowledge of any BI and visualization tools is an add-on (Tableau/Power BI/Qlik/Plotly etc.)
- Adherence to the Information Security Management policies and procedures
Soft Skills Required
- Must be a good team player with good communication skills
- Must have good presentation skills
- Must be a proactive problem solver and a self-driven leader
- Manage and nurture a team of data scientists
- Desire for numbers and patterns
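Since the role calls out delivering AI/ML solutions as microservices (Flask is listed as one deployment option), here is a minimal, hedged Flask sketch that loads a previously trained scikit-learn model and exposes a /predict endpoint. The model file name, input schema and port are assumptions for illustration only; in practice this would sit behind a production WSGI server and an API gateway rather than Flask's development server.

```python
# Minimal model-serving microservice with Flask (pip install flask scikit-learn joblib).
# "model.joblib" and the {"features": [[...], ...]} input schema are hypothetical.
from flask import Flask, jsonify, request
import joblib

app = Flask(__name__)
model = joblib.load("model.joblib")   # pre-trained scikit-learn estimator, saved elsewhere

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    preds = model.predict(payload["features"])
    return jsonify({"predictions": preds.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```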

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Vadodara, Gujarat, India

On-site

Skills
We are looking for a candidate with experience managing and maintaining our organization's database systems, ensuring their optimal performance, security, and reliability. Key responsibilities include database deployment and management, backup and disaster recovery planning, performance tuning, and collaborating with developers to design efficient database structures. Proficiency in SQL and experience with major database management systems like Oracle, SQL Server, or MySQL are required; knowledge of cloud platforms such as AWS or Azure will be an added advantage.
Job Location: Vadodara
Office Hours: 09:30 am to 7 pm
Experience: 6+ Years
Roles and Responsibilities:
- Design, implement, and maintain database systems
- Optimize and tune database performance
- Develop database schemas, tables, and other objects
- Perform database backups and restores
- Implement data replication and clustering for high availability
- Monitor database performance and suggest improvements
- Implement database security measures including user roles, permissions, and encryption
- Ensure compliance with data privacy regulations and standards
- Perform regular audits and maintain security logs
- Diagnose and resolve database issues, such as performance degradation or connectivity problems
- Provide support for database-related queries and troubleshooting
- Apply patches, updates, and upgrades to database systems
- Conduct database health checks and routine maintenance to ensure peak performance
- Coordinate with developers and system administrators on database-related issues
- Implement and test disaster recovery and backup strategies
- Ensure minimal downtime during system upgrades and maintenance
- Work closely with application developers to optimize database-related queries and code
- Document database structures, procedures, and policies for team members and future reference
Requirements
Education/Qualification (certifications, if any): A bachelor's degree in IT, computer science or a related field.
- Proven experience as a DBA or in a similar database management role
- Strong knowledge of database management systems (e.g., SQL Server, Oracle, MySQL, PostgreSQL, etc.)
- Experience with performance tuning, database security, and backup strategies
- Familiarity with cloud databases (e.g., AWS RDS, Azure SQL Database) is a plus
- Strong SQL and database scripting skills
- Proficiency in database administration tasks such as installation, backup, recovery, performance tuning, and user management
- Experience with database monitoring tools and utilities
- Ability to troubleshoot and resolve database-related issues effectively
- Knowledge of database replication, clustering, and high availability setups
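Backup and restore is called out repeatedly above; the sketch below shows the online-backup idea using Python's stdlib sqlite3 as a neutral stand-in, since every engine named in the posting has its own native equivalent (SQL Server backups, Oracle RMAN, pg_dump, and so on). The file names are placeholders.

```python
# Online backup illustrated with the stdlib sqlite3 backup API; file names are placeholders.
import sqlite3

source = sqlite3.connect("app.db")           # "production" database file
target = sqlite3.connect("app_backup.db")    # backup destination
with target:
    source.backup(target)                    # consistent page-by-page online copy
target.close()
source.close()
print("backup completed: app_backup.db")
```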

Posted 3 weeks ago

Apply

7.5 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: SAP Sales and Distribution (SD)
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary:
As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for managing the team and ensuring successful project delivery. Your typical day will involve collaborating with multiple teams, making key decisions, and providing solutions to problems for your immediate team and across multiple teams.
Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for the immediate team and across multiple teams
- Lead the effort to design, build, and configure applications
- Act as the primary point of contact for the project
- Manage the team and ensure successful project delivery
Professional & Technical Skills:
- Must-Have Skills: Proficiency in SAP Sales and Distribution (SD)
- Strong understanding of statistical analysis and machine learning algorithms
- Experience with data visualization tools such as Tableau or Power BI
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity
Additional Information:
- The candidate should have a minimum of 7.5 years of experience in SAP Sales and Distribution (SD)
- This position is based in Mumbai
- A 15 years full-time education is required

Posted 3 weeks ago

Apply

7.5 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: ServiceNow IT Service Management
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary:
As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for managing the team and ensuring successful project delivery. Your typical day will involve collaborating with multiple teams, making key decisions, and providing solutions to problems for your immediate team and across multiple teams.
Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for the immediate team and across multiple teams
- Lead the effort to design, build, and configure applications
- Act as the primary point of contact for the project
- Manage the team and ensure successful project delivery
Professional & Technical Skills:
- Must-Have Skills: Proficiency in ServiceNow IT Service Management
- Strong understanding of statistical analysis and machine learning algorithms
- Experience with data visualization tools such as Tableau or Power BI
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity
Additional Information:
- The candidate should have a minimum of 7.5 years of experience in ServiceNow IT Service Management
- This position is based at our Bengaluru office
- A 15 years full-time education is required

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Project Role: Service Management Practitioner
Project Role Description: Support the delivery of programs, projects or managed services. Coordinate projects through contract management and shared service coordination. Develop and maintain relationships with key stakeholders and sponsors to ensure high levels of commitment and enable the strategic agenda.
Must have skills: Microsoft Power Business Intelligence (BI)
Good to have skills: Microsoft Power Apps
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary:
As a Service Management Practitioner, you will support the delivery of programs, projects, or managed services, coordinating projects through contract management and shared service coordination. You will develop and maintain relationships with key stakeholders and sponsors to ensure high levels of commitment and enable the strategic agenda.
Roles & Responsibilities:
- Expected to perform independently and become an SME
- Active participation/contribution in team discussions is required
- Contribute to providing solutions to work-related problems
- Coordinate the delivery of programs, projects, or managed services
- Develop and maintain relationships with key stakeholders and sponsors
- Ensure high levels of commitment from stakeholders
- Enable the strategic agenda through effective coordination
- Provide regular updates and reports on project progress
Professional & Technical Skills:
- Must-Have Skills: Proficiency in Microsoft Power Business Intelligence (BI)
- Good-to-Have Skills: Experience with Microsoft Power Apps
- Strong understanding of statistical analysis and machine learning algorithms
- Experience with data visualization tools such as Tableau or Power BI
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity
Additional Information:
- The candidate should have a minimum of 3 years of experience in Microsoft Power Business Intelligence (BI)
- This position is based at our Chennai office
- A 15 years full-time education is required

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Project Role: AI / ML Engineer
Project Role Description: Develops applications and systems that utilize AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. Be able to apply GenAI models as part of the solution. Could also include, but is not limited to, deep learning, neural networks, chatbots, and image processing.
Must have skills: Machine Learning Operations
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: BE
Summary:
As an AI / ML Engineer, you will develop applications and systems that utilize AI to improve performance and efficiency, including deep learning, neural networks, chatbots, and natural language processing.
Roles & Responsibilities:
- Expected to perform independently and become an SME
- Active participation/contribution in team discussions is required
- Contribute to providing solutions to work-related problems
- Implement machine learning models for various applications
- Optimize AI algorithms for improved performance
- Collaborate with cross-functional teams to integrate AI solutions
- Stay updated with the latest trends in AI and ML technologies
- Provide technical guidance and mentor junior team members
Professional & Technical Skills:
- Must-Have Skills: Proficiency in Machine Learning Operations
- Strong understanding of statistical analysis and machine learning algorithms
- Experience with data visualization tools such as Tableau or Power BI
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity
Additional Information:
- The candidate should have a minimum of 3 years of experience in Machine Learning Operations
- This position is based at our Kolkata office
- A BE degree is required
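To ground the "implement machine learning models" and MLOps responsibilities above, here is a small, hedged end-to-end sketch: train a classifier on a bundled scikit-learn dataset, evaluate it on a holdout split, and persist the artifact with joblib so a downstream serving or CI pipeline could pick it up. The dataset and output file name are stand-ins.

```python
# Train, evaluate and persist a model artifact; dataset and file name are stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
import joblib

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print("holdout accuracy:", round(accuracy_score(y_test, model.predict(X_test)), 3))

joblib.dump(model, "model.joblib")   # artifact a serving pipeline could load later
```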

Posted 3 weeks ago

Apply
