4.0 - 9.0 years
15 - 30 Lacs
Chennai, Coimbatore, Vellore
Work from Office
We are seeking an AI/ML Engineer to join our fast-growing team. The ideal candidate will be responsible for designing, developing, and deploying machine learning models and AI-driven applications that deliver business value and customer impact.

Key Responsibilities:
- Research and develop machine learning models using supervised, unsupervised, or reinforcement learning techniques.
- Deploy models into production using MLOps best practices (e.g., containerization, CI/CD, model monitoring).
- Work on datasets involving structured and unstructured data (e.g., text, image, video, time-series).
- Collaborate with data engineers, product managers, and other stakeholders to define problem statements and deliver solutions.
- Continuously improve model performance and scalability using experimentation and model evaluation techniques.
- Stay updated with the latest trends and breakthroughs in AI/ML (e.g., LLMs, transformers, generative AI).

Required Qualifications:
- Bachelor's/Master's/Ph.D. in Computer Science, Artificial Intelligence, Data Science, or a related field.
- Strong coding skills in Python (must), with knowledge of libraries such as scikit-learn, PyTorch, TensorFlow, Hugging Face, and OpenCV.
- Experience building and tuning ML models from scratch.
- Solid understanding of the ML lifecycle, from data preprocessing to model deployment.
- Hands-on with SQL, data manipulation tools (Pandas, NumPy), and visualization tools (Matplotlib, Seaborn, Plotly).
- Experience with cloud platforms (AWS/GCP/Azure) and tools like Docker, Kubernetes, and MLflow is a plus.

Preferred Skills:
- Exposure to NLP, LLMs, chatbots, speech processing, or computer vision applications.
- Experience with GenAI models such as GPT, DALL-E, or Stable Diffusion.
- Knowledge of prompt engineering, vector databases (e.g., FAISS, Pinecone), or LangChain is a bonus.
- Publications or participation in AI/ML competitions (e.g., Kaggle, NeurIPS, CVPR) is a strong plus.
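As a purely illustrative aside (not part of the posting): the supervised-learning lifecycle described above — train on labelled data, hold out a test set, evaluate — can be sketched in plain Python with a toy 1-nearest-neighbour classifier. The dataset, split ratio, and seed here are all invented for the sketch.

```python
import random

def euclidean2(a, b):
    # Squared distance is sufficient for nearest-neighbour comparison.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nn_predict(train, point):
    # Classify `point` with the label of its closest training example (1-NN).
    return min(train, key=lambda ex: euclidean2(ex[0], point))[1]

# Toy 2-D dataset: two well-separated Gaussian clusters labelled 0 and 1.
random.seed(0)
data = [((random.gauss(0, 1), random.gauss(0, 1)), 0) for _ in range(50)] + \
       [((random.gauss(5, 1), random.gauss(5, 1)), 1) for _ in range(50)]

# Hold out 20% of the data for evaluation, as in a typical ML lifecycle.
random.shuffle(data)
split = int(len(data) * 0.8)
train, test = data[:split], data[split:]

accuracy = sum(nn_predict(train, x) == y for x, y in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

With clusters this well separated, the held-out accuracy is near 1.0; real datasets are rarely this kind.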
Posted 20 hours ago
8.0 years
23 Lacs
Hyderābād
On-site
Job Title: Java Enterprise Technical Architect
Location: Hyderabad
Notice: Immediate joiners required

We are looking for a highly skilled Java Enterprise Technical Architect with deep expertise in microservices architecture, cloud computing, DevOps, security, database optimization, and high-performance enterprise application design. The ideal candidate will have hands-on experience in fixing VAPT vulnerabilities, suggesting deployment architectures, implementing clustering and scalability solutions, and ensuring robust application and database security. They must also be ready to code when needed, ensuring best practices in software development while leading architecture decisions.

Responsibilities:

Architecture Design & Deployment
- Define and implement scalable, high-performance microservices architecture.
- Design secure and efficient deployment architectures, including clustering, failover, and HA strategies.
- Optimize enterprise applications for Apache HTTP Server, ensuring security and reverse proxy configurations.
- Provide recommendations for cloud-native architectures on AWS, Azure, or GCP.

Security & VAPT Compliance
- Fix all Vulnerability Assessment & Penetration Testing (VAPT) issues and enforce secure coding practices.
- Implement end-to-end security, including API security, identity management (OAuth2, JWT, SAML), and encryption mechanisms.
- Ensure database security (Oracle/PostgreSQL) with encryption (TDE), access control (RBAC/ABAC), and audit logging.
- Deploy DevSecOps pipelines integrating SAST/DAST tools such as SonarQube, OWASP ZAP, or Checkmarx.

Performance Optimization & Scalability
- Fine-tune Oracle and PostgreSQL databases, including indexing, query optimization, caching, and replication.
- Optimize inter-service communication between microservices using Kafka, RabbitMQ, or gRPC.
- Implement load balancing, caching strategies (Redis, Memcached, Hazelcast), and high-availability (HA) solutions.
DevOps & Cloud Enablement
- Implement CI/CD pipelines using Jenkins, GitHub Actions, GitLab CI/CD, or ArgoCD.
- Optimize containerized deployments using Docker, Kubernetes (K8s), and Helm.
- Automate infrastructure as code (IaC) using Terraform or Ansible.
- Ensure observability with the ELK Stack, Prometheus, Grafana, and distributed tracing (Jaeger, Zipkin).

Technical Leadership & Hands-on Development
- Lead architecture decisions while remaining hands-on in coding with Java, Spring Boot, and microservices.
- Review and improve code quality, scalability, and security practices across development teams.
- Mentor developers, conduct training sessions, and ensure adoption of software engineering best practices.
- Define architecture patterns, best practices, and coding standards to ensure high-quality, scalable, and secure applications.
- Collaborate with stakeholders, including business analysts, developers, and project managers, to ensure technical feasibility and alignment with business needs.
- Evaluate and recommend technologies, tools, and frameworks that best meet the project's needs.
- Oversee the integration of diverse technologies, platforms, and applications to ensure smooth interoperability.
- Ensure the security, performance, and reliability of system architecture through design and implementation.
- Review and optimize existing systems and architectures, identifying areas for improvement and implementing enhancements.
- Stay updated with emerging technologies, trends, and industry best practices to drive innovation.
- Conduct technical reviews, audits, and assessments of systems to ensure alignment with architectural and organizational standards.

Experience:
- 8+ years of hands-on experience in Java full-stack, Spring Boot, J2EE, and microservices.
- 5+ years of expertise in designing enterprise-grade deployment architectures.
- Strong security background, with experience fixing VAPT issues and implementing security controls.
- Network design and implementation.
- Deep knowledge of application servers and Apache HTTP Server, including reverse proxy, SSL, and load balancing.
- Proven experience in database performance tuning, indexing, and security (Oracle and PostgreSQL).
- Strong DevOps and cloud experience, with knowledge of Kubernetes, CI/CD, and automation.
- Strong knowledge of cloud platforms (AWS, Azure, GCP) and containerization technologies (Docker, Kubernetes).
- Hands-on experience with microservices architecture, APIs, and distributed systems.
- Solid understanding of DevOps practices and CI/CD pipelines.
- Excellent problem-solving and analytical skills, with the ability to navigate complex technical challenges.
- Experience with databases (SQL and NoSQL) and data modelling.
- Effective communication and collaboration skills, with the ability to work closely with both technical and non-technical stakeholders.
- Ability to balance technical depth with an understanding of business requirements and project timelines.

Education: Bachelor's or master's degree in Computer Science, Information Technology, or a related field
Job Type: Full-time
Pay: Up to ₹2,300,000.00 per year
Benefits: Provident Fund
Location Type: In-person
Schedule: Day shift, Monday to Friday, weekend availability
Application Question(s): Are you an immediate joiner? What is your CCTC and ECTC?
Experience: Java full-stack: 8 years (Required)
Work Location: In person
Posted 20 hours ago
7.0 - 9.0 years
15 - 22 Lacs
Hyderābād
On-site
Role Description: As a Technical Lead - Data Analysis at Incedo, you will be responsible for analyzing and interpreting large and complex datasets to extract insights and identify patterns. You will work with data analysts and data scientists to understand business requirements and provide data-driven solutions. You will be skilled in data analysis tools such as Excel or Tableau and have experience in programming languages such as Python or R. You will be responsible for ensuring that data analysis is accurate, efficient, and scalable.

Roles & Responsibilities:
- Analyzing and interpreting complex data sets using statistical and data analysis tools
- Developing and implementing data-driven insights and solutions
- Creating and presenting reports and dashboards to stakeholders
- Collaborating with other teams to ensure the consistency and integrity of data
- Providing guidance and mentorship to junior data analysts

Technical Skills Requirements:
- Proficiency in data wrangling and data cleaning techniques using tools such as Python, R, or SQL.
- Knowledge of statistical analysis techniques such as hypothesis testing, regression analysis, or time-series analysis.
- Familiarity with data visualization and reporting tools such as Tableau, Power BI, or Looker.
- Understanding of data exploration and discovery techniques such as clustering, anomaly detection, or text analytics.
- Excellent communication skills, with the ability to convey complex technical information to non-technical stakeholders clearly and concisely.
- Alignment with the company's long-term vision, openness to new ideas, and willingness to learn and develop new skills.
- Ability to work well under pressure and manage multiple tasks and priorities.

Qualifications:
- 7-9 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university
Computer science background is preferred.
Job Types: Full-time, Permanent
Pay: ₹1,500,000.00 - ₹2,200,000.00 per year
Schedule: Day shift
Experience: Python & SQL: 10 years (Preferred); Statistical analysis: 10 years (Preferred); Tableau: 10 years (Preferred); Power BI: 10 years (Preferred)
Work Location: In person
Posted 20 hours ago
0 years
0 Lacs
Pune
On-site
- Install, configure, maintain, and harden Linux servers (primarily RHEL/SLES) supporting SAP landscapes.
- Perform day-to-day OS administration tasks, including patching, tuning, disk/storage management, and backup coordination, specifically for SAP systems.
- Work closely with the SAP Basis team to support SAP kernel upgrades, system copies, and performance optimization activities.
- Manage Linux clusters and HA configurations for SAP HANA, Application Server (ASCS/ERS), and Central Services.
- Monitor and troubleshoot SAP-related infrastructure issues, including OS-level logs, I/O bottlenecks, and memory and CPU usage.
- Coordinate with network, storage, and backup teams for end-to-end infrastructure support.
- Support virtualization (VMware/KVM) and cloud infrastructure (AWS/Azure/GCP) hosting SAP workloads.
- Automate routine tasks and deployments using scripting (Bash, Python) or tools like Ansible.
- Maintain system documentation, inventory, and SOPs relevant to SAP infrastructure.
- Ensure compliance with IT security, audit, and data protection policies (especially for regulated industries).

Requirements:
- Experience with Commvault Backup.
- Experience with SUSE Linux Enterprise Server (SLES) and/or Red Hat Enterprise Linux (RHEL).
- Familiarity with SAP Notes, the PAM (Product Availability Matrix), and OS-level SAP tuning guidelines.
- Strong scripting skills (Bash/Python) and experience with automation/configuration tools (Ansible, Puppet).
- Knowledge of Linux-based HA clustering solutions for SAP (e.g., Pacemaker, SUSE HA).

Job Type: Full-time
Pay: ₹1,000.00 - ₹1,500.00 per month
Schedule: Day shift
Work Location: In person
Posted 20 hours ago
3.0 years
2 - 5 Lacs
Mumbai
On-site
Ankura is a team of excellence founded on innovation and growth.

eDiscovery
- Manage the end-to-end eDiscovery lifecycle: data collection, processing, hosting, review, and production in accordance with the EDRM.
- Perform Relativity administration tasks, including workspace setup, user management, permissions, batching, and analytics configuration.
- Coordinate with internal teams and clients on search strategies, TAR, deduplication, clustering, and email threading workflows.
- Generate production sets for litigation or regulatory submissions in required formats (PDF, TIFF, load files, etc.).

Fraud & Forensic Investigations
- Conduct financial fraud investigations, misconduct inquiries, and transactional reviews using structured and unstructured data.
- Analyze accounting records, bank transactions, emails, and digital logs to uncover red flags, patterns, or anomalies.
- Work closely with forensic accountants and lawyers to triangulate findings and support legal arguments.
- Assist in preparing chronologies, link charts, and evidence packs for client reports and potential legal proceedings.

Digital Forensics Support
- Collaborate with the DFIR team on disk imaging, mobile extractions, metadata preservation, and chain-of-custody protocols.
- Help interpret forensic data logs (e.g., USB activity, internet history, file access patterns) to support investigations.

Project Management & Reporting
- Independently manage projects or workstreams with minimal supervision and multiple stakeholders.
- Prepare concise, well-documented memos, reports, and PowerPoint deliverables suitable for legal audiences and senior management.
- Support business development efforts, including proposals, capability decks, and client meetings.

Education:
- Bachelor's degree in Accounting, Information Technology, Law, or a related field.
- Preferred: Master's degree in Forensics or Cybersecurity, or an MBA with a focus on Risk, Technology, or Legal domains.
Certifications (preferred but not mandatory):
- Relativity Certified Administrator (RCA) or Relativity Review Specialist
- Certified Fraud Examiner (CFE)
- EnCE, ACE, or any forensics certification is a plus

Experience:
- 3 to 6 years of hands-on experience in eDiscovery and forensic/fraud investigations.
- Experience handling large datasets, complex search strategies, and case structuring on Relativity or similar platforms.
- Familiarity with forensic tools such as Nuix, FTK, EnCase, Cellebrite, or log analysis platforms.
- Exposure to working with law firms, audit firms, or legal consulting companies in investigation or dispute resolution contexts.

Technical & Analytical Skills:
- Proficient in Microsoft Excel and PowerPoint, and in handling CSV/PST/MSG formats.
- Comfort with SQL, Power BI, Python (optional), or working with structured datasets is a plus.
- Strong ability to connect financial, digital, and behavioral data to uncover fraud.

Soft Skills:
- High level of attention to detail and strong documentation skills.
- Excellent verbal and written communication, especially when summarizing findings for legal teams or clients.
- Strong organizational skills and the ability to multi-task across concurrent engagements.
- A collaborative, curious, problem-solving mindset with the ability to adapt to changing demands.

Ankura is an Affirmative Action and Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or protected veteran status, and will not be discriminated against based on disability. If you have a disability and believe you need a reasonable accommodation to search for a job opening, submit an online application, or participate in an interview/assessment, please email accommodations@ankura.com or call toll-free +1.312.583.2122.
This email and phone number are created exclusively to assist disabled job seekers whose disability prevents them from being able to apply online. Only messages left for this purpose will be returned. Messages left for other purposes, such as following up on an application or technical issues unrelated to a disability, will not receive a response.
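Stepping back to the deduplication step in the eDiscovery workflow described in this posting: exact-duplicate detection is commonly done by hashing document contents. The sketch below is illustrative only (filenames and contents are invented, and real platforms such as Relativity handle this internally), using Python's standard-library `hashlib`.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    # Content hash: identical bytes always produce an identical digest.
    return hashlib.sha256(data).hexdigest()

# Hypothetical collected documents; two are byte-for-byte duplicates.
docs = {
    "mail_001.msg": b"Re: invoice 4411 -- please approve",
    "mail_002.msg": b"Quarterly numbers attached",
    "mail_003.msg": b"Re: invoice 4411 -- please approve",  # duplicate of mail_001
}

seen, uniques = {}, []
for name, content in docs.items():
    digest = sha256_of(content)
    if digest in seen:
        print(f"{name} is a duplicate of {seen[digest]}")
    else:
        seen[digest] = name
        uniques.append(name)

print(uniques)  # ['mail_001.msg', 'mail_002.msg']
```

Hash-based deduplication only catches exact copies; near-duplicate and email-thread detection need similarity techniques rather than hashing.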
Posted 20 hours ago
6.0 - 9.0 years
25 - 30 Lacs
Pune
On-site
Role: ETL Data Engineer
Experience: 6-9 Years
Location: Noida/Pune/Chennai
Notice Period: 0-15 Days

Job Description:
- Collaborate with analysts and data architects to develop and test ETL pipelines using SQL and Python in Google BigQuery.
- Perform related data quality checks and implement validation frameworks.
- Optimize BigQuery queries for performance and cost-efficiency.
- Available for full-time engagement.

Technical Requirements:
- SQL (advanced level): Strong command of complex SQL logic, including window functions, CTEs, and pivot/unpivot, and proficiency in stored procedure/SQL script development. Experience writing maintainable SQL for transformations.
- Python for ETL: Ability to write modular and reusable ETL logic using Python. Familiarity with JSON manipulation and API consumption.
- Google BigQuery: Hands-on experience developing within the BigQuery environment. Understanding of partitioning, clustering, and performance tuning.
- ETL Pipeline Development: Experience developing ETL/ELT pipelines, data profiling, validation, quality/health checks, error handling, logging, notifications, etc.

Nice-to-Have Skills:
- Experience with the Google BigQuery platform.
- Knowledge of CI/CD practices for data workflows.

Job Types: Full-time, Permanent
Pay: ₹2,500,000.00 - ₹3,000,000.00 per year
Benefits: Cell phone reimbursement, health insurance, paid time off, Provident Fund
Schedule: Day shift
Work Location: In person
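For illustration only: the CTE and window-function skills listed above can be shown in a small self-contained example. This uses an in-memory SQLite database rather than BigQuery (the syntax shown is portable to BigQuery's Standard SQL), and the table and column names are invented.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (customer TEXT, amount REAL);
INSERT INTO orders VALUES
  ('a', 10), ('a', 30), ('b', 5), ('b', 25), ('b', 20);
""")

# A CTE computes per-customer totals; a window function then ranks them.
rows = con.execute("""
WITH totals AS (
  SELECT customer, SUM(amount) AS total
  FROM orders
  GROUP BY customer
)
SELECT customer,
       total,
       RANK() OVER (ORDER BY total DESC) AS spend_rank
FROM totals
ORDER BY spend_rank
""").fetchall()

print(rows)  # [('b', 50.0, 1), ('a', 40.0, 2)]
```

Window functions require SQLite 3.25+; the same query runs in BigQuery against a real table with only the connection code changed.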
Posted 20 hours ago
0 years
0 Lacs
India
On-site
Job Title: Technical Internship – Programmer (AI/ML Focus)
Location: Coimbatore, Tamil Nadu
Company: Angler Technologies
Duration: 3 to 6 months (with potential for full-time conversion)
Eligibility: Recent graduates or final-year students from Engineering (B.E/B.Tech) or Arts & Science (B.Sc, BCA, M.Sc, MCA) with specialization in Artificial Intelligence and Machine Learning

Job Description: We are looking for enthusiastic and innovative AI/ML interns to join our technical team at Angler Technologies. This is a unique opportunity for fresh graduates to work on real-world AI/ML projects, build intelligent systems, and gain hands-on experience in programming, data science, and modern AI frameworks.

Key Responsibilities:
- Assist in the development, training, and deployment of machine learning and AI models
- Work on data collection, preprocessing, and annotation tasks
- Collaborate with the product and software teams to integrate AI/ML features into applications
- Support documentation, testing, and debugging of code modules
- Participate in code reviews and knowledge-sharing sessions

Required Skills & Qualifications:
- Strong foundation in programming (Python, ASP.NET, HTML, CSS, JavaScript preferred)
- Understanding of basic AI/ML concepts (classification, regression, clustering, NLP, etc.)
- Analytical thinking and a problem-solving mindset
- Good communication and teamwork skills
- Eagerness to learn and apply new technologies

Preferred Background:
- B.E/B.Tech (AI/ML)
- B.Sc/BCA/M.Sc/MCA with specialization in AI/ML/Data Science
- Academic or hobby projects in AI/ML are a bonus

Perks & Benefits:
- Stipend based on performance and project contributions
- Hands-on training and mentorship
- Opportunity to work on live projects
- Certificate of Internship and Letter of Recommendation
- Potential for full-time employment after the internship

Job Type: Internship
Benefits: Flexible schedule, paid sick time, paid time off, Provident Fund
Schedule: Day shift
Work Location: In person
Posted 20 hours ago
10.0 years
4 - 5 Lacs
Bengaluru
On-site
Job Title: Senior Database Administrator

Job Description: We are seeking a Senior Database Administrator with deep expertise in AWS cloud data services and strong experience supporting healthcare-grade systems. This role is responsible for managing both on-premises and cloud-based database environments, ensuring high availability, data security, regulatory compliance, and optimal performance. The ideal candidate is self-driven, technically skilled, and collaborative, with a proven track record of supporting mission-critical data environments in regulated industries.

Your role:

Database Management
- Administer AWS RDS, Aurora, DynamoDB, and other AWS-managed database services for a 24x7 production environment.
- Maintain and support legacy on-premises SQL Server environments and coordinate migrations to newer versions and cloud platforms.
- Monitor and manage SQL Agent jobs, troubleshoot job failures, and maintain operational continuity.
- Perform regular maintenance tasks, including backups, patching, schema updates, and deployments.
- Automate administrative and monitoring tasks through scripts and infrastructure-as-code solutions.

Data Security, Availability & Compliance
- Implement and maintain database security policies, access controls, encryption, and auditing to support healthcare data compliance (e.g., HIPAA).
- Design and support disaster recovery and high-availability solutions, ensuring alignment with business continuity plans and SLAs.
- Enforce robust change management and security standards across development, staging, and production environments.
- Ensure ongoing compliance with healthcare data regulations, including data retention and protection requirements.

Troubleshooting & Operational Support
- Diagnose and resolve database performance and connectivity issues proactively.
- Provide incident support and root cause analysis for database-related service disruptions.
- Collaborate with DevOps, application, and infrastructure teams to support and improve end-to-end performance.

Database Design & Optimization
- Participate in the design, normalization, and optimization of database schemas, indexes, and stored procedures.
- Implement and manage replication, clustering, and failover configurations for high availability and scalability.
- Conduct capacity planning and make strategic recommendations to ensure system performance under growing workloads.
- Support development teams with guidance on database best practices during architecture and review phases.

You're the right fit if you have:
- A Bachelor's degree in Computer Science, Information Systems, or a related discipline.
- 10+ years of hands-on database administration experience, including 3+ years with AWS database services.
- AWS certifications such as AWS Certified Database – Specialty, SysOps Administrator, or Cloud Practitioner (preferred).
- Expert knowledge of SQL Server 2008 R2 through 2019+, with experience migrating to the latest platforms.
- Proficiency in AWS RDS, Aurora, DynamoDB, and the AWS shared responsibility model.
- Strong expertise in T-SQL, query tuning, stored procedure development, and optimization.
- Proven experience in SQL Server replication (transactional and merge), clustering, and availability groups.
- Familiarity with VMware and running SQL Server in virtualized environments.
- Hands-on experience with database performance monitoring tools (e.g., SQL Sentry, Datadog).
- Exposure to BI/data warehousing tools and techniques (a plus).
- Experience supporting data systems in regulated industries (healthcare, life sciences, etc.), with working knowledge of HIPAA compliance.
- Excellent communication, collaboration, and documentation skills.
- A highly motivated, self-starting attitude, with a strong ability to multitask in a fast-paced environment.

How we work together: We believe that we are better together than apart; this means working in person at least 3 days per week.
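As an illustrative aside (not from the posting): the effect of indexing on query tuning, which the requirements above emphasize, can be demonstrated in a few lines. This sketch uses SQLite's `EXPLAIN QUERY PLAN` as a stand-in for SQL Server's execution plans; the table, column, and index names are invented.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, mrn TEXT, name TEXT)")
con.executemany(
    "INSERT INTO patients (mrn, name) VALUES (?, ?)",
    [(f"MRN{i:06d}", f"patient {i}") for i in range(10_000)],
)

def plan(sql: str) -> str:
    # EXPLAIN QUERY PLAN reports how SQLite intends to execute a statement.
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT name FROM patients WHERE mrn = 'MRN001234'"
before = plan(query)          # without an index: a full table scan
con.execute("CREATE INDEX idx_patients_mrn ON patients (mrn)")
after = plan(query)           # with the index: a direct index lookup
print(before)
print(after)
```

The same before/after comparison, done with `SET STATISTICS IO` or actual execution plans, is the bread and butter of T-SQL query tuning.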
About Philips: We are a health technology company. We built our entire company around the belief that every human matters, and we won't stop until everybody everywhere has access to the quality healthcare that we all deserve. Do the work of your life to help the lives of others. If you're interested in this role and have many, but not all, of the experiences needed, we encourage you to apply. You may still be the right candidate for this or other opportunities at Philips.
Posted 20 hours ago
5.0 years
20 - 28 Lacs
Bengaluru
On-site
Job Purpose: We are seeking a dynamic and skilled Data Scientist to join our analytics team. The ideal candidate will work collaboratively with data scientists, business stakeholders, and subject matter experts to deliver end-to-end advanced analytics projects that support strategic decision-making. This role demands technical expertise, strong problem-solving abilities, and the ability to translate business needs into impactful analytical solutions.

Key Responsibilities:
- Collaborate with internal stakeholders to design and deliver advanced analytics projects.
- Independently manage project workstreams with minimal supervision.
- Identify opportunities where analytics can support or improve business decision-making processes.
- Provide innovative solutions beyond traditional analytics methodologies.
- Apply strong domain knowledge and technical expertise to develop conceptually sound models and tools.
- Mentor and guide junior team members in their professional development.
- Communicate analytical findings effectively through clear and impactful presentations.

Desired Skills & Experience:
- Relevant experience: 5+ years of analytics experience in Financial Services (Universal Bank/NBFC/Insurance), rating agencies, e-commerce, retail, or consulting. Exposure to areas such as customer analytics, retail analytics, collections & recovery, credit risk ratings, etc.
- Statistical and modeling expertise: Hands-on experience with techniques such as logistic and linear regression, Bayesian modeling, classification, clustering, neural networks, non-parametric methods, and multivariate analysis.
- Tools and languages: Proficiency in R, S-Plus, SAS, or STATA. Exposure to Python and SPSS is a plus.
- Data handling: Experience with relational databases and intermediate SQL skills. Comfortable working with large datasets using tools like Hadoop, Hive, and MapReduce.
- Analytical thinking: Ability to derive actionable insights from structured and unstructured data.
- A strong problem-solving mindset and the ability to align analytics with business objectives.
- Communication: Excellent verbal and written communication skills to articulate findings and influence stakeholders.
- Learning orientation: Eagerness to learn new techniques and apply creative thinking to solve real-world business problems.

Job Types: Full-time, Permanent
Pay: ₹2,030,998.06 - ₹2,822,692.49 per year
Benefits: Health insurance, Provident Fund
Schedule: Morning shift
Supplemental Pay: Performance bonus, yearly bonus
Application Question(s): How many years of experience do you have in the NBFC/BFSI domain? What is your notice period?
Experience: Data science: 5 years (Required)
Work Location: In person
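As a purely illustrative aside (not from the posting): clustering, one of the modeling techniques named above, can be sketched in plain Python with a minimal k-means for two clusters. The data, initialisation strategy, and iteration count are invented for the sketch.

```python
import random

def kmeans2(points, iters=20):
    """Plain-Python k-means for k=2 with a deterministic initialisation:
    the first point, and the point farthest from it."""
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    centroids = [points[0], max(points, key=lambda p: d2(p, points[0]))]
    for _ in range(iters):
        clusters = ([], [])
        for p in points:
            # Assign each point to its nearest centroid...
            clusters[0 if d2(p, centroids[0]) <= d2(p, centroids[1]) else 1].append(p)
        # ...then move each centroid to the mean of its assigned points.
        centroids = [
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            for c in clusters
        ]
    return sorted(centroids)

rng = random.Random(0)
pts = [(rng.gauss(0, 0.5), rng.gauss(0, 0.5)) for _ in range(100)] + \
      [(rng.gauss(8, 0.5), rng.gauss(8, 0.5)) for _ in range(100)]
centers = kmeans2(pts)
print(centers)  # one centroid near (0, 0), the other near (8, 8)
```

Production work would use a library implementation (e.g. scikit-learn's KMeans) with multiple restarts and a chosen k; the loop above just makes the assign-then-recompute idea concrete.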
Posted 20 hours ago
1.0 - 2.0 years
2 - 4 Lacs
Bengaluru
On-site
Job Location: Bangalore (in-person, with travel to partner schools within the city)
Job Type: Full-time, Permanent
Schedule: Day shift

About the Role: We are looking for an enthusiastic and knowledgeable AI & Python Educator to join our STEM education team. This role involves delivering structured, interactive lessons in Artificial Intelligence, Python programming, and machine learning fundamentals for students from Grades 3 to 10. The ideal candidate should have a solid background in Python, an interest in AI/ML, and a passion for teaching school students in an engaging, project-based format. Minimal electronics exposure (only basic awareness for AI-integrated projects like AIoT) is desirable but not mandatory.

Key Responsibilities:
- Curriculum delivery: Teach structured, interactive lessons on Python programming (basics to intermediate), AI concepts, machine learning fundamentals, and real-world AI applications tailored for school students.
- Hands-on AI projects: Guide students through practical AI projects such as image classification, object detection, chatbots, text-to-speech systems, face recognition, and AI games using Python and AI tools like Teachable Machine, PictoBlox AI, OpenCV, and scikit-learn.
- Concept simplification: Break down complex AI/ML concepts such as data classification, regression, clustering, and neural networks into age-appropriate, relatable classroom activities.
- Classroom management: Conduct in-person classes at partner schools, ensuring a positive, engaging, and disciplined learning environment.
- Student mentoring: Motivate students to think logically, solve problems creatively, and build confidence in coding and AI-based projects.
- Progress assessment: Track student progress, maintain performance reports, and share timely, constructive feedback.
- Technical troubleshooting: Assist students in debugging Python code, handling AI tools, and resolving software-related queries during class.
- Stakeholder coordination: Collaborate with school management and internal academic teams on lesson planning, scheduling, and AI project showcases.
- Continuous learning: Stay updated on the latest developments in AI, Python, and education technology through training and workshops.

Qualifications:
- Education: Diploma / BCA / B.Sc / BE / MCA / M.Sc / M.Tech in Computer Science, AI/ML, Data Science, or related fields
- Experience: 1 to 2 years of teaching or EdTech experience in AI/ML, Python, or STEM education preferred; freshers with a strong Python portfolio and AI project experience are encouraged to apply

Skills & Competencies:
- Strong knowledge of Python programming (syntax, data structures, file handling, functions, OOP)
- Practical understanding of AI and machine learning basics
- Hands-on experience with AI tools like Teachable Machine, the PictoBlox AI extension, OpenCV, and scikit-learn
- Excellent communication, explanation, and classroom management skills
- Student-friendly, organized, and enthusiastic about AI education
- Basic knowledge of electronics (optional) for AIoT integrations (e.g., using sensors with AI models)

Perks & Benefits:
- Structured in-house AI & Python training programs
- Opportunity to work with reputed schools across Bangalore
- Career growth in the AI & STEM education sector

Salary & Employment Details:
Salary: ₹20,000.00 - ₹35,000.00 per month (based on experience and performance)
Job Types: Full-time, Permanent
Schedule: Day shift (Monday to Saturday)
Experience: total work: 1 year (Preferred)
Work Location: In person, with travel to schools within Bangalore city
Posted 20 hours ago
5.0 years
4 - 6 Lacs
Bengaluru
On-site
Job Title: Senior AI Engineer
Location: Bengaluru, India (Hybrid)

At Reltio®, we believe data should fuel business success. Reltio's AI-powered data unification and management capabilities—encompassing entity resolution, multi-domain master data management (MDM), and data products—transform siloed data from disparate sources into unified, trusted, and interoperable data. Reltio Data Cloud™ delivers interoperable data where and when it's needed, empowering data and analytics leaders with unparalleled business responsiveness. Leading enterprise brands—across multiple industries around the globe—rely on our award-winning data unification and cloud-native MDM capabilities to improve efficiency, manage risk, and drive growth.

At Reltio, our values guide everything we do. With an unyielding commitment to prioritizing our "Customer First", we strive to ensure their success. We embrace our differences and are "Better Together" as One Reltio. We are always looking to "Simplify and Share" our knowledge when we collaborate to remove obstacles for each other. We hold ourselves accountable for our actions and outcomes and strive for excellence. We "Own It". Every day, we innovate and evolve, so that today is "Always Better Than Yesterday". If you share and embody these values, we invite you to join our team at Reltio and contribute to our mission of excellence.

Reltio has earned numerous awards and top rankings for our technology, our culture, and our people. Reltio was founded on a distributed workforce and offers flexible work arrangements to help our people manage their personal and professional lives. If you're ready to work on unrivaled technology where your desire to be part of a collaborative team is met with a laser-focused mission to enable digital transformation with connected data, let's talk!
Job Summary:
As a Senior AI Engineer at Reltio, you will be a core part of the team responsible for building intelligent systems that enhance data quality, automate decision-making, and drive entity resolution at scale. You will work with cross-functional teams to design and deploy advanced AI/ML solutions that are production-ready, scalable, and embedded into our flagship data platform. This is a high-impact engineering role with exposure to cutting-edge problems in entity resolution, deduplication, identity stitching, record linking, and metadata enrichment.
Job Duties and Responsibilities:
Design, implement, and optimize state-of-the-art AI/ML models for solving real-world data management challenges such as entity resolution, classification, similarity matching, and anomaly detection.
Work with structured, semi-structured, and unstructured data to extract signals and engineer intelligent features for large-scale ML pipelines.
Develop scalable ML workflows using Spark, MLlib, PyTorch, TensorFlow, or MLflow, with seamless integration into production systems.
Translate business needs into technical design and collaborate with data scientists, product managers, and platform engineers to operationalize models.
Continuously monitor and improve model performance using feedback loops, A/B testing, drift detection, and retraining strategies.
Conduct deep dives into customer data challenges and apply innovative machine learning algorithms to address accuracy, speed, and bias.
Actively contribute to research and experimentation efforts, staying updated with the latest AI trends in graph learning, NLP, probabilistic modeling, etc.
Document designs and present outcomes to both technical and non-technical stakeholders, fostering transparency and knowledge sharing.
Skills You Must Have:
Bachelor's or Master's degree in Computer Science, Machine Learning, Artificial Intelligence, or a related field. PhD is a plus.
5+ years of hands-on experience in developing and deploying machine learning models in production environments.
Proficiency in Python (NumPy, scikit-learn, pandas, PyTorch/TensorFlow) and experience with large-scale data processing tools (Spark, Kafka, Airflow).
Strong understanding of ML fundamentals, including classification, clustering, feature selection, hyperparameter tuning, and evaluation metrics.
Demonstrated experience working with entity resolution, identity graphs, or data deduplication.
Familiarity with containerized environments (Docker, Kubernetes) and cloud platforms (AWS, GCP, Azure).
Strong debugging, analytical, and communication skills with a focus on delivery and impact.
Attention to detail, ability to work independently, and a passion for staying updated with the latest advancements in the field of data science.
Skills Good to Have:
Experience with knowledge graphs, graph-based ML, or embedding techniques.
Exposure to deep learning applications in data quality, record matching, or information retrieval.
Experience building explainable AI solutions in regulated domains.
Prior work in SaaS, B2B enterprise platforms, or data infrastructure companies.
Why Join Reltio?
Health & Wellness: Comprehensive group medical insurance, including your parents, with additional top-up options. Accidental insurance. Life insurance. Free online unlimited doctor consultations. An Employee Assistance Program (EAP).
Work-Life Balance: 36 annual leaves, which include 18 sick leaves and 18 earned leaves. 26 weeks of maternity leave, 15 days of paternity leave. Unique to Reltio: one week of additional time off as a recharge week every year, globally.
Support for home office setup: Home office setup allowance.
Stay Connected, Work Flexibly: Mobile & internet reimbursement. No need to pack a lunch; we've got you covered with a free meal. And many more…
Reltio is proud to be an equal opportunity workplace.
We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. Reltio is committed to working with and providing reasonable accommodation to applicants with physical and mental disabilities.
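As an illustrative aside, the entity-resolution and similarity-matching work described in this role can be sketched minimally in plain Python. This is a toy sketch, not Reltio's actual method or API; the record fields, threshold, and function names are all hypothetical:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized edit-based similarity between two field values (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def match_records(rec_a: dict, rec_b: dict, threshold: float = 0.85) -> bool:
    """Declare two records a match when their average field similarity clears a threshold."""
    fields = rec_a.keys() & rec_b.keys()
    score = sum(similarity(str(rec_a[f]), str(rec_b[f])) for f in fields) / len(fields)
    return score >= threshold

# Near-duplicate customer records that exact string equality would miss:
a = {"name": "Jon Smith", "city": "Bengaluru"}
b = {"name": "John Smith", "city": "bengaluru"}
print(match_records(a, b))  # prints True: minor spelling/case differences still match
```

Production entity resolution replaces the naive pairwise comparison with blocking, learned similarity models, and graph-based identity stitching, but the core idea of scoring field-level similarity against a threshold is the same.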
Posted 20 hours ago
2.0 - 4.0 years
3 - 10 Lacs
Bengaluru
On-site
About the role
Enable data-driven decision making across the Tesco business globally by developing analytics solutions using a combination of math, tech and business knowledge.
You will be responsible for
Understanding business needs and developing an in-depth understanding of Tesco processes
Building on Tesco processes and knowledge by applying CI tools and techniques
Completing tasks and transactions within agreed KPIs
Solving problems by analyzing solution alternatives
Engaging with business & functional partners to understand business priorities, ask relevant questions and scope the same into an analytical solution document, calling out how the application of data science will improve decision making
Developing an in-depth understanding of techniques to prepare the analytical data set, leveraging multiple complex data sources
Building statistical models and ML algorithms with practitioner-level competency
Writing structured, modularized & codified algorithms using Continuous Improvement principles (development of knowledge assets and reusable modules on GitHub, Wiki, etc.) with expert competency
Building an easy visualization layer on top of the algorithms to empower end-users to take decisions; this could be on a visualization platform (Tableau / Python) or through a recommendation set delivered through PPTs
Working with the line manager to ensure application / consumption and proactively identifying opportunities to help the larger Tesco business with areas of improvement
Keeping up to date with the latest in data science and retail analytics and disseminating the knowledge among colleagues
You will need
2 - 4 years' experience in data science application in Retail or CPG
Preferred functional experience: Marketing, Supply Chain, Customer, Merchandising, Operations, Finance or Digital
Applied Math: Applied Statistics, Design of Experiments, Regression, Decision Trees, Forecasting, Optimization algorithms, Clustering, NLP
Tech: SQL, Hadoop, Spark, Python, Tableau, MS Excel, MS
PowerPoint, GitHub
Business: Basic understanding of the Retail domain
Soft Skills: Analytical thinking & problem solving, storyboarding, articulate communication
What's in it for you?
At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on current industry practices, for all the work they put into serving our customers, communities and planet a little better every day. Our Tesco Rewards framework consists of pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco is determined by four principles - simple, fair, competitive, and sustainable.
Salary - Your fixed pay is the guaranteed pay as per your contract of employment.
Performance Bonus - Opportunity to earn an additional compensation bonus based on performance, paid annually.
Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company's policy.
Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF.
Health is Wealth - Tesco promotes programmes that support a culture of health and wellness, including insurance for colleagues and their family. Our medical insurance provides coverage for dependents, including parents or in-laws.
Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents.
Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request.
Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan.
Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle. About Us Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues. Tesco Business Solutions: Established in 2017, Tesco Business Solutions (TBS) has evolved from a single entity traditional shared services in Bengaluru, India (from 2004) to a global, purpose-driven solutions-focused organisation. TBS is committed to driving scale at speed and delivering value to the Tesco Group through the power of decision science. With over 4,400 highly skilled colleagues globally, TBS supports markets and business units across four locations in the UK, India, Hungary, and the Republic of Ireland. The organisation underpins everything that the Tesco Group does, bringing innovation, a solutions mindset, and agility to its operations and support functions, building winning partnerships across the business. TBS's focus is on adding value and creating impactful outcomes that shape the future of the business. TBS creates a sustainable competitive advantage for the Tesco Group by becoming the partner of choice for talent, transformation, and value creation
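As a generic illustration of the clustering techniques listed under Applied Math in this role (a toy sketch, not Tesco's methodology; the data and function name are hypothetical), a one-dimensional k-means can be written in plain Python:

```python
def kmeans_1d(xs, k=2, iters=20):
    """Tiny k-means on scalars: assign each point to its nearest centroid, then recompute means."""
    cents = sorted(xs)[::max(1, len(xs) // k)][:k]  # spread initial centroids across the sorted data
    for _ in range(iters):
        clusters = [[] for _ in cents]
        for x in xs:
            nearest = min(range(len(cents)), key=lambda j: abs(x - cents[j]))
            clusters[nearest].append(x)
        # Recompute each centroid as its cluster mean; keep the old one if the cluster emptied.
        cents = [sum(c) / len(c) if c else cents[i] for i, c in enumerate(clusters)]
    return sorted(cents)

# Weekly spend of two obvious customer segments: low spenders vs high spenders.
print(kmeans_1d([1, 2, 3, 40, 42, 44]))  # prints [2.0, 42.0]
```

Real retail segmentation would run in scikit-learn or Spark MLlib over many features, but the assign-then-recompute loop is the same algorithm.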
Posted 20 hours ago
0 years
5 - 9 Lacs
Bengaluru
Remote
Tasks
At Daimler Truck, we change today’s transportation and create real impact together. We take responsibility around the globe and work together on making our vision become reality: Leading Sustainable Transportation. As one global team, we drive our progress and success together – everyone at Daimler Truck makes the difference. Together, we want to achieve sustainable transportation, reduce our carbon footprint, increase safety on and off the track, and develop smarter technology and attractive financial solutions. All essential, to fulfill our purpose - for all who keep the world moving. Become part of our global team: You make the difference - YOU MAKE US
This team is at the core of the Data & AI department at Daimler Truck, helping develop world-class AI platforms on various clouds (AWS, Azure) to support building analytics solutions, dashboards, ML models and Gen AI solutions across the globe.
Key Responsibilities:
Design, develop, and maintain scalable data pipelines using Snowflake and other cloud-based tools.
Implement data ingestion, transformation, and integration processes from various sources (e.g., APIs, flat files, databases).
Optimize Snowflake performance through clustering, partitioning, and query tuning.
Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements.
Ensure data quality, integrity, and security across all data pipelines and storage.
Develop and maintain documentation related to data architecture, processes, and best practices.
Monitor and troubleshoot data pipeline issues and ensure timely resolution.
Working experience with the medallion architecture and tools like Matillion, dbt models, and SNP Glue is highly recommended.
WHAT WE OFFER YOU
Note: Fixed benefits that apply to Daimler Truck, Daimler Buses, and Daimler Truck Financial Services.
Among other things, the following benefits await you with us: Attractive compensation package Company pension plan Remote working Flexible working models, that adapt to individual life phases Health offers Individual development opportunities through our own Learning Academy as well as free access to LinkedIn Learning + two individual benefits Job number: 3690 Publication period: 06/23/2025 - 06/24/2025 Location: Bangalore Organization: Daimler Truck Innovation Center India Private Limited Job Category: Finance/Controlling Working hours: Full time Benefits Inhouse Doctor Good public transport Parking Canteen-Cafeteria Barrier-free workplace To Location: Bengaluru, Daimler Truck Innovation Center India Private Limited Contact Sandip Kumar Mohanty Email: sandip.mohanty@daimlertruck.com
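As a loose illustration of the medallion-style pipeline work described in this role (not Daimler Truck's actual stack; the record shapes and function names are hypothetical), the bronze → silver → gold flow can be sketched in plain Python:

```python
# Minimal medallion-style flow: raw (bronze) -> cleaned (silver) -> aggregated (gold).
raw_events = [  # bronze: data exactly as ingested, duplicates and bad records included
    {"truck_id": "T1", "km": "120"},
    {"truck_id": "T1", "km": "120"},   # exact duplicate
    {"truck_id": "T2", "km": None},    # fails a quality check
    {"truck_id": "T2", "km": "80"},
]

def to_silver(rows):
    """Deduplicate and type-cast; drop records failing quality checks."""
    seen, clean = set(), []
    for r in rows:
        key = (r["truck_id"], r["km"])
        if r["km"] is None or key in seen:
            continue
        seen.add(key)
        clean.append({"truck_id": r["truck_id"], "km": int(r["km"])})
    return clean

def to_gold(rows):
    """Aggregate per truck for dashboard consumption."""
    totals = {}
    for r in rows:
        totals[r["truck_id"]] = totals.get(r["truck_id"], 0) + r["km"]
    return totals

print(to_gold(to_silver(raw_events)))  # prints {'T1': 120, 'T2': 80}
```

In Snowflake or dbt, each layer would be a table or model rather than a Python function, but the responsibility split (ingest as-is, clean and validate, then aggregate for consumers) is the same.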
Posted 20 hours ago
6.0 years
0 Lacs
Vadodara
On-site
Skills
We are looking for a candidate with experience managing and maintaining our organization's database systems, ensuring their optimal performance, security, and reliability. Key responsibilities include database deployment and management, backup and disaster recovery planning, performance tuning, and collaborating with developers to design efficient database structures. Proficiency in SQL, experience with major database management systems like Oracle, SQL Server, or MySQL, and knowledge of cloud platforms such as AWS or Azure will be an added advantage.
Job Location: Vadodara
Office Hours: 09:30 am to 7 pm
Experience: 6+ Years
Roles and Responsibilities:
Design, implement, and maintain database systems.
Optimize and tune database performance.
Develop database schemas, tables, and other objects.
Perform database backups and restores.
Implement data replication and clustering for high availability.
Monitor database performance and suggest improvements.
Implement database security measures including user roles, permissions, and encryption.
Ensure compliance with data privacy regulations and standards.
Perform regular audits and maintain security logs.
Diagnose and resolve database issues, such as performance degradation or connectivity problems.
Provide support for database-related queries and troubleshooting.
Apply patches, updates, and upgrades to database systems.
Conduct database health checks and routine maintenance to ensure peak performance.
Coordinate with developers and system administrators for database-related issues.
Implement and test disaster recovery and backup strategies.
Ensure minimal downtime during system upgrades and maintenance.
Work closely with application developers to optimize database-related queries and code.
Document database structures, procedures, and policies for team members and future reference.
Requirements
Education/Qualification (if any certification): A bachelor's degree in IT, computer science or a related field.
Requirements:
Proven experience as a DBA or in a similar database management role.
Strong knowledge of database management systems (e.g., SQL Server, Oracle, MySQL, PostgreSQL, etc.).
Experience with performance tuning, database security, and backup strategies.
Familiarity with cloud databases (e.g., AWS RDS, Azure SQL Database) is a plus.
Strong SQL and database scripting skills.
Proficiency in database administration tasks such as installation, backup, recovery, performance tuning, and user management.
Experience with database monitoring tools and utilities.
Ability to troubleshoot and resolve database-related issues effectively.
Knowledge of database replication, clustering, and high availability setups.
Posted 20 hours ago
12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Reference # 312700BR
Job Type: Full Time
Your role
This is a role for an experienced SRE proficient in Windows infrastructure and platforms, with sound technical skills and hands-on experience in maintaining and improving all aspects of the Windows server environment across all flavors of Windows from 2012 through to Windows 2022. You will lead a team in an enterprise environment. The infrastructure environment is a hybrid cloud environment that includes an interdependent combination of Linux and Windows servers, Windows desktops, and high-performance data and storage networks.
You will:
work with the SRE and infrastructure engineering teams to improve the firm’s hybrid cloud infrastructure
be involved in engineering project work and operational support to increase the overall supportability and reliability of the firm’s enterprise technology environment
primarily focus on Windows Server, home-grown orchestration and automation tools powered by PowerShell, Python and Azure Pipelines, and virtualization and enterprise storage technologies, with significant opportunities to work with the wider set of technology platforms in use at the firm
In addition, you will:
understand business priorities and prioritize work accordingly to meet project objectives
drive improvements and implement them at regional or global scale
communicate and collaborate with other internal partners for planning and coordination of implementation to ensure work is completed in a timely manner
be expected to drive execution
Your team
The hosting multi-compute team is a global organization within the Technology Services team providing technology infrastructure platforms to underpin our partners' business applications. You will be part of the Windows hosting team, which has a global footprint and works with clients and wider team members spread across the world.
Together we drive consistency across business divisions and optimize operations and support costs. You'll be working with the global Windows SRE team. As an SRE Lead, you will be instrumental in maintaining and supporting the hybrid cloud Windows Server estate across the globe and will continuously look for ways to improve the management of these servers through automation.
Your expertise
degree with 12+ years of experience in supporting and managing large-scale Windows Server deployments (2022, 2019, 2016, 2012)
good knowledge of Windows Server, Azure integration services, and related technologies (e.g., Active Directory, RBAC, Azure Policy, failover clustering) is required, as is a working knowledge of networking concepts
hands-on experience with Windows DHCP server administration and scope migrations, Windows DFS (both standalone and Active Directory), SMB, file servers, and share and NTFS permission structures
experience supporting virtualization platforms (e.g., Hyper-V and VMware), enterprise storage platforms, database platforms (e.g., Microsoft SQL Server), and integration with cloud service providers (e.g., AWS, Azure, GCP) is also beneficial
understanding of PowerShell, including an understanding of its underlying design, is required; experience in Python and C# is preferred
experience creating and maintaining CD pipelines and IaC (Infrastructure as Code) deployments using Azure DevOps or GitLab
excellent verbal and written communication skills in English
experience working in an agile setup and familiarity with the Agile Manifesto
basic understanding of the Banking and Finance industries with previous job experience a plus; exposure to an enterprise environment
ITIL-based process know-how, issue management and effective escalation management, global user support exposure
familiarity with Agile and SRE practices
About Us
UBS is the world’s largest and the only truly global wealth manager.
We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.
Join us
At UBS, we embrace flexible ways of working when the role permits. We offer different working arrangements like part-time, job-sharing and hybrid (office and home) working. Our purpose-led culture and global infrastructure help us connect, collaborate, and work together in agile ways to meet all our business needs. From gaining new experiences in different roles to acquiring fresh knowledge and skills, we know that great work is never done alone. We know that it's our people, with their unique backgrounds, skills, experience levels and interests, who drive our ongoing success. Together we’re more than ourselves. Ready to be part of #teamUBS and make an impact?
Disclaimer / Policy Statements
UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
Posted 20 hours ago
3.0 - 5.0 years
8 - 10 Lacs
Noida
On-site
About Us
Attentive.ai is a leading provider of landscape and property management software powered by cutting-edge Artificial Intelligence (AI). Our software is designed to optimize workflows and help businesses scale up effortlessly in the outdoor services industry. Our Automeasure software caters to landscaping, snow removal, paving maintenance, and facilities maintenance businesses. We are also building Beam AI, an advanced AI engine focused on automating construction take-off and estimation workflows through deep AI. Beam AI is designed to extract intelligence from complex construction drawings, helping teams save time, reduce errors, and increase bid efficiency. Trusted by top US and Canadian sales teams, we are backed by renowned investors such as Sequoia Surge and InfoEdge Ventures.
Position Description:
As a Research Engineer-II, you will be an integral part of our AI research team focused on transforming the construction industry through cutting-edge deep learning, computer vision and NLP technologies. You will contribute to the development of intelligent systems for automated construction take-off and estimation by working with unstructured data such as blueprints, drawings (including SVGs), and PDF documents. In this role, you will support the end-to-end lifecycle of AI-based solutions, from prototyping and experimentation to deployment in production. Your contributions will directly impact the scalability, accuracy, and efficiency of our products.
Roles & Responsibilities
Contribute to research and development initiatives focused on Computer Vision, Image Processing, and Deep Learning applied to construction-related data.
Build and optimize models for extracting insights from documents such as blueprints, scanned PDFs, and SVG files.
Contribute to the development of multi-modal models that integrate vision with language-based features (NLP/LLMs).
Follow best data science and machine learning practices, including data-centric development, experiment tracking, model validation, and reproducibility.
Collaborate with cross-functional teams including software engineers, ML researchers, and product teams to convert research ideas into real-world applications.
Write clean, scalable, and production-ready code using Python and frameworks like PyTorch, TensorFlow, or HuggingFace.
Stay updated with the latest research in computer vision and machine learning and evaluate its applicability to construction industry challenges.
Skills & Requirements
3-5 years of experience in applied AI/ML and research with a strong focus on Computer Vision and Deep Learning.
Solid understanding of image processing, visual document understanding, and feature extraction from visual data.
Familiarity with SVG graphics, NLP, or LLM-based architectures is a plus.
Deep understanding of unsupervised learning techniques like clustering, dimensionality reduction, and representation learning.
Proficiency in Python and ML frameworks such as PyTorch, OpenCV, TensorFlow, and HuggingFace Transformers.
Hands-on experience with model optimization techniques (e.g., quantization, pruning, knowledge distillation).
Good to have: Experience with version control systems (e.g., Git), project tracking tools (e.g., JIRA), and cloud environments (GCP, AWS, or Azure). Familiarity with Docker, Kubernetes, and containerized ML deployment pipelines.
Strong analytical and problem-solving skills with a passion for building innovative solutions; ability to rapidly prototype and iterate.
Comfortable working in a fast-paced, agile, startup-like environment with excellent communication and collaboration skills.
Why Work With Us?
Be part of a visionary team building a first-of-its-kind AI solution for the construction industry.
Exposure to real-world AI deployment and cutting-edge research in vision and multimodal learning.
Culture that encourages ownership, innovation, and growth. Opportunities for fast learning, mentorship, and career progression.
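For illustration, the post-training quantization mentioned among the model optimization techniques in this role can be sketched at its simplest: mapping float weights to 8-bit integers and back. This is a toy affine scheme, not Attentive.ai's implementation; the function names are hypothetical:

```python
def quantize(weights, bits=8):
    """Affine-quantize floats to unsigned ints; return the ints plus (scale, offset)."""
    lo, hi = min(weights), max(weights)
    levels = 2 ** bits - 1            # 255 representable steps for 8 bits
    scale = (hi - lo) / levels or 1.0  # guard against an all-equal weight list
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize(q, scale, lo):
    """Map the quantized ints back to approximate floats."""
    return [v * scale + lo for v in q]

w = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, scale, lo = quantize(w)
restored = dequantize(q, scale, lo)
print(max(abs(a - b) for a, b in zip(w, restored)))  # small reconstruction error, bounded by ~scale/2
```

Frameworks like PyTorch apply the same idea per tensor or per channel, trading this bounded reconstruction error for roughly 4x smaller models and faster integer inference.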
Posted 20 hours ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description: Responsibilities as Tableau Administrator
Configure and maintain the Tableau Server software layer.
System administration (includes site creation, server maintenance/upgrades/patches).
Change management, including software and hardware upgrades and patches.
Monitor server activity/usage statistics to identify possible performance issues/enhancements.
Partner with the business to design Tableau KPI scorecards and dashboards.
Performance tuning / server management of the Tableau Server environment (clustering, load balancing).
Create/manage groups, workbooks and projects, database views, data sources and data connections.
Proactively communicate with the customer/stakeholders to resolve issues and get work done.
Set up a governance process around Tableau dashboard processes.
Create and host Tableau extension APIs.
Location: This position can be based in any of the following locations: Chennai
Current Guardian Colleagues: Please apply through the internal Jobs Hub in Workday
Posted 20 hours ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
TCS has been a great pioneer in feeding the fire of young techies like you. We are a global leader in the technology arena and there's nothing that can stop us from growing together.
TCS Hiring for Network Data
Experience Range: 6 to 8 years
Job Locations: Hyderabad & Kolkata
Job Description
1. Experience in designing, supporting and implementing IP-based networks for large enterprises.
2. Strong knowledge and experience with Cisco Nexus switches, ASRs, ISR and 9000 series, 4500, Catalyst switches, etc.
3. Strong knowledge and experience with Cisco routing and switching protocols (e.g., BGP, EIGRP, MPLS, QoS, STP, VTP, etc.)
4. Configuring Cisco Wireless Access Points & controllers and Cisco ISE.
5. Hands-on experience with Cisco firewalls and clustering.
6. Experience with analyzing traffic and utilizing packet sniffer utilities (e.g., Wireshark, NetScout)
7. Familiarity with management tools such as SolarWinds, ITNM, etc.
8. Expertise in LAN and WAN technologies to provide advanced troubleshooting and escalation support
9. Strong documentation skills and ability to create high-level and low-level designs that meet business requirements.
10. Switching and routing on Cisco products
11. Configuring and troubleshooting client-to-site and site-to-site VPNs.
12. SDN technologies like ACI, NSX, SD-WAN
Cisco Network Engineer Responsibilities:
Analysing existing hardware, software, and networking systems.
Creating and implementing scalable Cisco networks according to client specifications.
Testing and troubleshooting installed Cisco systems.
Resolving technical issues with networks, hardware, and software.
Performing speed and security tests on installed networks.
Applying network security upgrades.
Upgrading/replacing hardware and software systems when required.
Creating and presenting networking reports.
Training end-users on installed Cisco networking products.
Cisco Network Engineer Requirements:
Bachelor's degree in computer science, networking administration, information technology, or a similar field.
CCNA, CCNP certification.
At least 5 years' experience as a network engineer.
Detailed knowledge of Cisco networking systems.
Experience with storage engineering, wide-area networking, and network virtualization.
Advanced troubleshooting skills.
Ability to identify, deploy, and manage complex networking systems.
Good communication and interpersonal skills.
Experience with end-user training.
Minimum Qualification: 15 years of full-time education
Disclaimer: We encourage you to register at www.tcs.com/careers for exploring an exciting career with TCS. At the time of your application to TCS, the personal data contained in your application and resume will be collected by TCS and processed for the purpose of TCS's recruitment-related activities. Your personal data will be retained by TCS for as long as TCS determines it is necessary to evaluate your application for employment as per our retention policy. You have the right to request TCS for temporary/permanent exclusion of your candidature from any recruitment-related communication. For any such request you may write to careers.tcs.com.
Posted 21 hours ago
6.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
JD – AI/ML Engineer
This is a full-time position with D Square Consulting Services Pvt Ltd.
Required Experience: 6-7 years
Location: Bangalore
Work mode: Onsite
Candidates who can join within 30 days are preferred.
Job Summary
The AI/ML Engineer will lead the end-to-end design, development, and deployment of advanced, value-driven AI/ML solutions for digital marketing and analytics; drive innovation; leverage cutting-edge techniques; and standardize multi-cloud model deployment through collaboration, delivering profound data insights.
Required Qualifications
Bachelor’s degree (or higher preferred) in Computer Science, Data Science, ML, Mathematics, Statistics, Economics, or related fields with emphasis on quantitative methods.
6-7 years’ experience in software engineering with deep, hands-on expertise in the full lifecycle of ML model development, deployment, and operationalization.
Demonstrated ability to write highly robust, efficient, and scalable Python, Java, Spark, and SQL code, adhering to industry best practices.
Extensive experience with major ML frameworks (TensorFlow, PyTorch, Scikit-learn) and advanced deep learning libraries, including optimization.
Strong, in-depth understanding of diverse ML algorithms (e.g., advanced regression, classification, clustering, RNNs, CNNs, transformers, time series, reinforcement learning), sophisticated data structures, and enterprise software design.
Significant experience deploying and managing AI/ML models on major cloud platforms (Azure, AWS, GCP).
Proven experience with LLMs and generative AI (fine-tuning, prompt engineering, deployment) is highly desirable.
Exceptional problem-solving skills in a fast-paced, collaborative remote environment.
Excellent communication and interpersonal skills, with the ability to effectively collaborate with and influence diverse global teams remotely.
Experience with classification, time series forecasting, customer lifetime value models, LLMs, and generative AI, preferably from the Retail, e-commerce, or CPG industry.
Responsibilities
Lead cross-functional teams to deliver and scale complex AI/ML solutions (DL, NLP, optimization).
Architect, design, develop, train, and evaluate high-performance, production-ready AI/ML models, ensuring scalability and robustness.
Drive implementation, deployment, and maintenance of AI/ML solutions, optimizing inference and documenting processes.
Oversee data exploration, advanced preprocessing, complex feature engineering, and robust data pipeline development.
Establish strategies for continuous testing, validation, and monitoring of deployed AI/ML models to ensure accuracy and reliability.
Partner with senior stakeholders to translate business requirements into scalable AI solutions that deliver measurable value.
Act as a primary SME on AI/ML model development, MLOps, and deployment, influencing global data science platforms.
Continuously research and champion the adoption of the latest AI/ML technologies, algorithms, and best practices to maximize business value.
Foster innovative thinking and continuous improvement, seeking superior ways of working for teams and partners.
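As a minimal illustration of the time series forecasting experience this role asks for (a naive baseline sketch, not a production model; the function name and data are hypothetical), a moving-average forecast fits in a few lines of Python:

```python
def moving_average_forecast(series, window=3, horizon=2):
    """Naive forecast: repeatedly extend the series with the mean of its last `window` points."""
    out = list(series)
    for _ in range(horizon):
        out.append(sum(out[-window:]) / window)
    return out[len(series):]  # return only the forecast horizon

# Two-step forecast over a short demand series:
print(moving_average_forecast([10, 12, 11, 13, 12]))  # prints [12.0, 12.333...]
```

Baselines like this are the usual yardstick: a production model (ARIMA, Prophet, or a transformer) earns its complexity only by beating them on held-out data.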
Posted 21 hours ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Are you ready to make an impact at DTCC? Do you want to work on innovative projects, collaborate with a dynamic and supportive team, and receive investment in your professional development? At DTCC, we are at the forefront of innovation in the financial markets. We are committed to helping our employees grow and succeed. We believe that you have the skills and drive to make a real impact. We foster a growing internal community and are committed to creating a workplace that looks like the world that we serve.

Pay and Benefits:
- Competitive compensation, including base pay and annual incentive
- Comprehensive health and life insurance and well-being benefits, based on location
- Pension / Retirement benefits
- Paid Time Off and Personal/Family Care, and other leaves of absence when needed to support your physical, financial, and emotional well-being
- DTCC offers a flexible/hybrid model of 3 days onsite and 2 days remote (onsite Tuesdays, Wednesdays, and a third day unique to each team or employee)

The Impact you will have in this role:
The Database Administrator will provide database administrative support for all DTCC environments, including Development, QA, client test, and our critical high-availability production environment and DR data centers. The role requires extensive knowledge of all aspects of MSSQL database administration and the ability to support other database platforms, including both Aurora PostgreSQL and Oracle. This DBA will have a high level of impact in the generation of new processes and solutions, while operating under established procedures and processes in a critically important financial services infrastructure environment. The ideal candidate will ensure optimal performance, data security, and reliability of our database infrastructure.

What You'll Do:
- Install, configure, and maintain Oracle server instances.
- Implement and manage high-availability solutions, including Always On availability groups and clustering.
- Support development, QA, PSE, and production environments using the ServiceNow ticketing system.
- Review production performance reports for variances from normal operation.
- Optimize SQL queries and indexes for better efficiency; analyze queries and recommend tuning strategies.
- Maintain database performance by calculating optimum values for database parameters, implementing new releases, completing maintenance requirements, and evaluating operating systems and hardware products.
- Implement database backup and recovery strategies using tools such as native backups, log shipping, and other technologies.
- Provide third-level support for DTCC's critical production environments and participate in root cause analysis for database issues.
- Prepare users by conducting training, providing information, and resolving problems.
- Maintain quality service by establishing and enforcing organizational standards.
- Set up and maintain database replication and clustering solutions.
- Maintain professional and technical knowledge by attending educational workshops, reviewing professional publications, establishing personal networks, benchmarking innovative practices, and participating in professional societies.
- Share responsibility for off-hours support.
- Maintain documentation on database configurations and procedures.
- Provide leadership and direction for the architecture, design, maintenance, and L1, L2, and L3 support of a 24x7 global infrastructure.

Qualifications:
- Bachelor's degree or equivalent experience

Talents Needed for Success:
- Strong Oracle experience with 19c, 21c, and 22c.
- A minimum of 4+ years of proven relevant experience in Oracle.
- Solid experience in Oracle database administration.
- Strong knowledge of Python and Angular.
- Working knowledge of Oracle GoldenGate replication technology.
- Strong performance tuning and optimization skills in MSSQL, PostgreSQL, and Oracle databases.
- Good experience with high availability and disaster recovery (HA/DR) options for SQL Server.
- Good experience with backup and restore processes.
- Proficiency in PowerShell scripting for automation.
- Good interpersonal skills and the ability to coordinate with various stakeholders.
- Follow standard processes for organizational change, incident management, and problem management.
- Demonstrated ability to solve complex systems and database environment issues.

Actual salary is determined based on the role, location, individual experience, skills, and other considerations. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 21 hours ago
5.0 years
8 - 45 Lacs
Hyderabad, Telangana, India
On-site
Industry & Sector: Enterprise IT Infrastructure & Cloud Services in India. A fast-growing managed services provider delivers secure, high-availability virtualization platforms for Fortune 500 and digital-native businesses.

About The Opportunity
As a VMware Platform Engineer, you will design, deploy, and operate mission-critical virtualization estates on-site at our client facilities, ensuring performance, security, and scalability across private and hybrid clouds.

Role & Responsibilities
- Engineer and harden vSphere, ESXi, and vSAN clusters to deliver 99.99% uptime.
- Automate build, patching, and configuration tasks using PowerCLI, Ansible, and REST APIs.
- Monitor capacity, performance, and logs via vRealize Operations and generate improvement plans.
- Lead migrations, upgrades, and disaster-recovery drills, documenting runbooks and rollback paths.
- Collaborate with network, storage, and security teams to enforce compliance and zero-trust policies.
- Provide L3 support, root-cause analysis, and mentoring to junior administrators.

Skills & Qualifications
Must-Have
- 5+ years hands-on with VMware vSphere 6.x/7.x in production.
- Expertise in ESXi host deployment, clustering, vMotion, DRS, and HA.
- Strong scripting with PowerCLI or Python for automation.
- Solid grasp of Linux server administration and TCP/IP networking.
- Experience with backup, replication, and DR tooling (Veeam, SRM, etc.).

Preferred
- Exposure to vRealize Suite, NSX-T, or vCloud Director.
- Knowledge of container platforms (Tanzu, Kubernetes) and CI/CD pipelines.
- VMware Certified Professional (VCP-DCV) or higher.

Benefits & Culture
- On-site, enterprise-scale environments offering complex engineering challenges.
- Continuous learning budget for VMware and cloud certifications.
- Collaborative, performance-driven culture with clear growth paths.
Workplace Type: On-Site | Location: India
Skills: automation, REST APIs, VMware, VMware vSphere, Ansible, backup and replication tools (Veeam, SRM), vSAN, Linux server administration, disaster recovery, PowerCLI, TCP/IP networking, scripting, vRealize Operations, VMware vSphere 6.x/7.x, platform engineers (VMware), ESXi
Posted 21 hours ago
6.0 years
0 Lacs
Kochi, Kerala, India
On-site
Responsibilities
▪ Work closely with internal BUs and business partners (clients) to understand their business problems and translate them into data science problems.
▪ Design intelligent data science solutions that deliver incremental value to the end stakeholders.
▪ Work closely with the data engineering team to identify relevant data and pre-process it into a form suitable for models.
▪ Develop the designed solutions into statistical machine learning and AI models using suitable tools and frameworks.
▪ Work closely with the business intelligence team to build BI systems and visualizations that deliver the insights of the underlying data science model in the most intuitive ways possible.
▪ Work closely with the application team to deliver AI/ML solutions as modular offerings.

Skills/Specification
▪ Master's/Bachelor's in Computer Science, Statistics, or Economics.
▪ At least 6 years of experience in the data science field and a passion for numbers and quantitative problems.
▪ Deep understanding of machine learning models and algorithms.
▪ Experience in analysing complex business problems, translating them into data science problems, and modelling data science solutions for them.
▪ Understanding of and experience in one or more of the following machine learning areas:
- Regression, time series, logistic regression, Naive Bayes, kNN, SVM, decision trees, random forest, k-means clustering, etc.
- NLP, text mining
- LLMs (GPTs): OpenAI, Azure OpenAI, AWS Bedrock, Gemini, Llama, DeepSeek, etc. (knowledge of fine-tuning/custom-training GPTs is an added advantage)
- Deep learning, reinforcement learning algorithms
▪ Understanding of and experience with one or more machine learning frameworks: TensorFlow, Caffe, Torch, etc.
▪ Understanding of and experience building machine learning models using various Python packages.
▪ Knowledge of and experience with SQL, relational databases, NoSQL databases, and data warehouse concepts.
▪ Understanding of AWS/Azure cloud architecture.
▪ Understanding of deployment architectures for AI/ML models (Flask, Azure Functions, AWS Lambda).
▪ Knowledge of any BI and visualization tools is an add-on (Tableau/Power BI/Qlik/Plotly, etc.).
▪ Adhere to the Information Security Management policies and procedures.

Soft Skills Required
▪ Must be a good team player with good communication skills.
▪ Must have good presentation skills.
▪ Must be a proactive problem solver and a self-driven leader.
▪ Manage and nurture a team of data scientists.
▪ A desire for numbers and patterns.
Posted 21 hours ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Arctera
Arctera keeps the world's IT systems working. We can trust that our credit cards will work at the store, that power will be routed to our homes, and that factories will produce our medications because those companies themselves trust Arctera. Arctera is behind the scenes making sure that many of the biggest organizations in the world – and many of the smallest too – can face down ransomware attacks, natural disasters, and compliance challenges without missing a beat. We do this through the power of data and our flagship products, Insight, InfoScale and Backup Exec. Illuminating data also helps our customers maintain personal privacy, reduce the environmental impact of data storage, and defend against illegal or immoral use of information.

It's a task that continues to get more complex as data volumes surge. Every day, the world produces more data than it ever has before. And global digital transformation – and the arrival of the age of AI – has set the course for a new explosion in data creation. Joining the Arctera team, you'll be part of a group innovating to harness the opportunity of the latest technologies to protect the world's critical infrastructure and to keep all our data safe.

This position is with the InfoScale (Data Resiliency) offering of Arctera, a software-defined storage and availability solution that helps organizations manage information resiliency and protection across physical, virtual, and cloud environments. It provides high availability and disaster recovery for mission-critical applications.

Responsibilities
We are looking for candidates who have experience with storage and cloud technology for data resiliency solutions. You should also have an eye for great design and a knack for pushing projects from conception all the way to customers. In this role, you will design and develop data protection solutions using the latest technologies. You will own product quality and the overall customer experience. You will also propose technical solutions to product/service problems while refining, designing, and implementing software components in line with technical requirements. The Sr. Software Engineer will work productively in a highly collaborative agile team, coach junior team members, and actively participate in knowledge sharing, all while communicating across teams in a multinational environment.

Minimum Required Skills Include
- MS/BS in Computer Science/Computer Engineering or a related field of study with 5+ years of relevant experience.
- Full understanding of storage and cloud technologies, emerging standards, and engineering best practices.
- Strong communication skills, both oral and written.
- Hands-on experience developing enterprise products with any of C/C++/Python/Go and RESTful APIs.
- Hands-on experience developing Kubernetes custom controllers/operators and working knowledge of k8s orchestration platforms: OpenShift/EKS/AKS.
- Designs, develops, and maintains high-quality code for product components, focusing on implementation.
- Solid knowledge of algorithms and design patterns.
- Solid knowledge of clustering (HA-DR) concepts and systems programming.
- Strong focus on knowledge and application of industry-standard SDLC processes, including design, coding, debugging, and testing practices for large enterprise-grade products, is an absolute must.

Desired Skills Include
- Knowledge of operating systems (Linux/UNIX), object-oriented languages, and Agile processes.
- Experience in CNI/CSI plugin/driver development.
- Experience with DevOps and tools related to container technology (Prometheus, EFK, Helm, Red Hat Registry, Tiller, etc.).
- Experience in Agile development methodologies, including unit testing and TDD (test-driven development).
- Extra credit for open-source contributions: active participation in CNCF SIGs, upstream contributions to K8s.
- Ability to communicate and collaborate among cross-functional teams in a multinational environment.
Posted 21 hours ago
12.0 years
0 Lacs
Pune, Maharashtra, India
Remote
TCS has been a great pioneer in feeding the fire of young techies like you. We are a global leader in the technology arena and there's nothing that can stop us from growing together.

What we are looking for
Role: MQ Admin
Experience Range: 8 – 12 Years
Location: Pune/Bengaluru

Must Have:
1) Administer WebSphere MQ v7.x, 8.x, 9.x.
2) Build non-prod and prod queue managers (qmgrs) per client requirements.
3) Ability to run MQSC commands and perform remote MQ administration.
4) Very good knowledge of distributed queuing and clustering.
5) Good knowledge of IBM MQ utilities such as qload, runmqdlq, and saveqmgr, and tools such as MQ Explorer and RFHUtil.
6) Support clients in a 24x7 model.
7) Hands-on knowledge of Linux and Solaris.
8) Knowledge of ITIL components such as IM, PM, CM.
9) Knowledge of SSL certificates is a must.
10) Knowledge of MQ migrations and fix pack installation.
11) Hands-on knowledge of client and server architecture.
12) Ability to support application teams with their testing and deployments.

Good to Have:
1) Good communication skills to talk to users, understand their requirements, and provide solutions.
2) Work experience in MQ administration: defining queue managers and objects and troubleshooting MQ issues.
3) Work experience with MQ tools such as the IBM iKeyman tool, qload, MO71, RFHUtil, and MQ Explorer.
4) Strong decision-making and problem-solving skills.
5) Knowledge of Unix/Perl scripting is an advantage.
6) Knowledge of networking, firewalls, and Unix-based OSes; good production support experience.
7) Knowledge of PUB/SUB, HA, clustering, OpenShift, and MQ clients on Fabric.
8) Flexibility to work shifts and provide weekend coverage.
9) Financial domain knowledge.

Essential:
- L2 support activities on IBM MQ.
- Implementing client requests and migrating to the latest environments.
- Working closely with L3 to implement their ideas and tasks.
- Maintaining the prod environment with the latest iFixes and fix packs.

Minimum Qualification:
- 15 years of full-time education
- Minimum percentile of 50% in 10th, 12th, UG & PG (if applicable)
Posted 21 hours ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Are you ready to make an impact at DTCC? Do you want to work on innovative projects, collaborate with a dynamic and supportive team, and receive investment in your professional development? At DTCC, we are at the forefront of innovation in the financial markets. We are committed to helping our employees grow and succeed. We believe that you have the skills and drive to make a real impact. We foster a growing internal community and are committed to creating a workplace that looks like the world that we serve.

Pay and Benefits:
- Competitive compensation, including base pay and annual incentive
- Comprehensive health and life insurance and well-being benefits, based on location
- Pension / Retirement benefits
- Paid Time Off and Personal/Family Care, and other leaves of absence when needed to support your physical, financial, and emotional well-being
- DTCC offers a flexible/hybrid model of 3 days onsite and 2 days remote (onsite Tuesdays, Wednesdays, and a third day unique to each team or employee)

The Impact you will have in this role:
The Database Administrator will provide database administrative support for all DTCC environments, including Development, QA, client test, and our critical high-availability production environment and DR data centers. The role requires extensive knowledge of all aspects of MSSQL database administration and the ability to support other database platforms, including both Aurora PostgreSQL and Oracle. This DBA will have a high level of impact in the generation of new processes and solutions, while operating under established procedures and processes in a critically important financial services infrastructure environment. The ideal candidate will ensure optimal performance, data security, and reliability of our database infrastructure.

What You'll Do:
- Install, configure, and maintain SQL Server instances (on-prem and cloud-based).
- Implement and manage high-availability solutions, including Always On availability groups and clustering.
- Support development, QA, PSE, and production environments using the ServiceNow ticketing system.
- Review production performance reports for variances from normal operation.
- Optimize SQL queries and indexes for better efficiency; analyze queries and recommend tuning strategies.
- Maintain database performance by calculating optimum values for database parameters, implementing new releases, completing maintenance requirements, and evaluating operating systems and hardware products.
- Implement database backup and recovery strategies using tools such as SQL Server backup, log shipping, and other technologies.
- Provide third-level support for DTCC's critical production environments and participate in root cause analysis for database issues.
- Prepare users by conducting training, providing information, and resolving problems.
- Maintain quality service by establishing and enforcing organizational standards.
- Set up and maintain database replication and clustering solutions.
- Maintain professional and technical knowledge by attending educational workshops, reviewing professional publications, establishing personal networks, benchmarking innovative practices, and participating in professional societies.
- Share responsibility for off-hours support.
- Maintain documentation on database configurations and procedures.
- Provide leadership and direction for the architecture, design, maintenance, and L1, L2, and L3 support of a 24x7 global infrastructure.

Qualifications:
- Bachelor's degree or equivalent experience

Talents Needed for Success:
- Strong Oracle experience with 19c, 21c, and 22c.
- A minimum of 4+ years of proven relevant experience in SQL.
- Solid understanding of MSSQL Server and Aurora PostgreSQL databases.
- Strong knowledge of Python and Angular.
- Working knowledge of Oracle GoldenGate replication technology.
- Strong performance tuning and optimization skills in MSSQL, PostgreSQL, and Oracle databases.
- Good experience with high availability and disaster recovery (HA/DR) options for SQL Server.
- Good experience with backup and restore processes.
- Proficiency in PowerShell scripting for automation.
- Good interpersonal skills and the ability to coordinate with various stakeholders.
- Follow standard processes for organizational change, incident management, and problem management.
- Demonstrated ability to solve complex systems and database environment issues.

Actual salary is determined based on the role, location, individual experience, skills, and other considerations. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 22 hours ago
The job market for clustering roles in India is thriving, with numerous opportunities available for job seekers with expertise in this area. Clustering professionals are in high demand across various industries, including IT, data science, and research. If you are considering a career in clustering, this article will provide you with valuable insights into the job market in India.
Here are 5 major cities in India actively hiring for clustering roles:
1. Bangalore
2. Pune
3. Hyderabad
4. Mumbai
5. Delhi
The average salary range for clustering professionals in India varies based on experience levels. Entry-level positions may start at around INR 3-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-20 lakhs per annum.
In the field of clustering, a typical career path may look like:
- Junior Data Analyst
- Data Scientist
- Senior Data Scientist
- Tech Lead
Apart from expertise in clustering, professionals in this field are often expected to have skills in:
- Machine Learning
- Data Analysis
- Python/R programming
- Statistics
Here are 25 interview questions for clustering roles:
- What is clustering and how does it differ from classification? (basic)
- Explain the K-means clustering algorithm. (medium)
- What are the different types of distance metrics used in clustering? (medium)
- How do you determine the optimal number of clusters in K-means clustering? (medium)
- What is the Elbow method in clustering? (basic)
- Define hierarchical clustering. (medium)
- What is the purpose of clustering in machine learning? (basic)
- Can you explain the difference between supervised and unsupervised learning? (basic)
- What are the advantages of hierarchical clustering over K-means clustering? (advanced)
- How does the DBSCAN clustering algorithm work? (medium)
- What is the curse of dimensionality in clustering? (advanced)
- Explain the concept of silhouette score in clustering. (medium)
- How do you handle missing values in clustering algorithms? (medium)
- What is the difference between agglomerative and divisive clustering? (advanced)
- How would you handle outliers in clustering analysis? (medium)
- Can you explain the concept of cluster centroids? (basic)
- What are the limitations of K-means clustering? (medium)
- How do you evaluate the performance of a clustering algorithm? (medium)
- What is the role of inertia in K-means clustering? (basic)
- Describe the process of feature scaling in clustering. (basic)
- How does the GMM algorithm differ from K-means clustering? (advanced)
- What is the importance of feature selection in clustering? (medium)
- How can you assess the quality of clustering results? (medium)
- Explain the concept of cluster density in DBSCAN. (advanced)
- How do you handle high-dimensional data in clustering? (medium)
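Several of the questions above (K-means, cluster centroids, inertia, the Elbow method) can be grounded with a small worked example. The sketch below is a minimal, dependency-free illustration of Lloyd's K-means algorithm in Python; the function names (`kmeans`, `inertia`, `centroid`) are illustrative, not from any particular library — in practice you would reach for `sklearn.cluster.KMeans`.

```python
import math
import random

def euclidean(a, b):
    """Euclidean distance, the most common distance metric in K-means."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def centroid(points):
    """Cluster centroid: the coordinate-wise mean of the cluster's points."""
    n = len(points)
    return tuple(sum(coord) / n for coord in zip(*points))

def kmeans(points, k, iters=100, seed=0):
    """Lloyd's algorithm: alternate between assigning each point to its
    nearest centroid and recomputing centroids, until assignments settle."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # naive init (k-means++ is better)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: euclidean(p, centroids[i]))
            clusters[nearest].append(p)
        new = [centroid(c) if c else centroids[i] for i, c in enumerate(clusters)]
        if new == centroids:                   # converged
            break
        centroids = new
    return centroids, clusters

def inertia(centroids, clusters):
    """Within-cluster sum of squared distances -- the quantity the Elbow
    method plots against k to pick the number of clusters."""
    return sum(euclidean(p, c) ** 2
               for c, cluster in zip(centroids, clusters) for p in cluster)

# Two well-separated 2-D blobs: k=2 recovers them exactly.
data = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0),
        (10.0, 10.0), (10.0, 11.0), (11.0, 10.0)]
cents, groups = kmeans(data, k=2)
print(sorted(len(g) for g in groups))   # [3, 3]
print(round(inertia(cents, groups), 2))  # 2.67
```

On real data, one would run this for a range of k values and plot inertia against k; the "elbow" where the curve stops dropping sharply suggests a reasonable k, and silhouette score gives a complementary, per-point quality measure.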
As you venture into the world of clustering jobs in India, remember to stay updated with the latest trends and technologies in the field. Equip yourself with the necessary skills and knowledge to stand out in interviews and excel in your career. Good luck on your job search journey!