8.0 - 10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
FICO (NYSE: FICO) is a leading global analytics software company, helping businesses in 100+ countries make better decisions. Join our world-class team today and fulfill your career potential! The Opportunity - VP, Software Engineering. What You'll Contribute: Work closely with product managers to understand priorities and usage scenarios of product features. Collaborate with user experience personnel to understand personas within usage scenarios. Work with architects to drive the design for your software platform capability. Collaborate within working groups of software engineers to follow software engineering standards, guidance, and processes. Continuously improve engineering practices for the software platform to support efficiency, reliability, and serviceability goals. Assist with research, case studies, and prototypes of technologies to ensure the software platform remains the leading analytic decisioning platform. Coach other software engineers on creating their domain designs. Collaborate with QA engineers to design and implement non-functional tests. What We're Seeking: Bachelor's/Master's degree in Computer Science or a related discipline. 8+ years of experience in designing, building, deploying, and operating commercial software that integrates sophisticated, stateful AI & ML algorithms executing in low milliseconds. Experience with commercial software that covers the entire life cycle of intelligence execution, from authoring to execution to observation. 8+ years of experience in building sophisticated runtimes in C++. Ability to define and drive design transformation to an end state based on simplicity, modern software design patterns, open-source software, and cloud environments. Technical expertise across all deployment models on public cloud, private cloud, and on-premises infrastructure. Experience creating, documenting, and communicating software designs for complex products. Skilled in domain-driven, event-driven, and microservice architectures.
Proficient in building, tracking, and communicating plans within agile processes. Experience supporting production software deployments. Proficient with commercial software product processes. Experience with multiple public cloud technologies is a plus, e.g., AWS, Google Cloud, Azure. Experience with Kubernetes, including its control plane and ecosystem, and with Docker is a plus. CMake experience is beneficial. Experience using artificial intelligence and machine learning technologies is preferred. Our Offer to You: An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers, and Earn the Respect of Others. The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences. Highly competitive compensation, benefits, and rewards programs that encourage you to bring your best every day and be recognized for doing so. An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie. Why Make a Move to FICO: At FICO, you can develop your career with a leading organization in one of the fastest-growing fields in technology today - Big Data analytics. You'll play a part in our commitment to help businesses use data to improve every choice they make, using advances in artificial intelligence, machine learning, optimization, and much more. FICO makes a real difference in the way businesses operate worldwide: Credit Scoring - FICO Scores are used by 90 of the top 100 US lenders. Fraud Detection and Security - 4 billion payment cards globally are protected by FICO fraud systems. Lending - 3/4 of US mortgages are approved using the FICO Score. Global trends toward digital transformation have created tremendous demand for FICO's solutions, placing us among the world's top 100 software companies by revenue.
We help many of the world's largest banks, insurers, retailers, telecommunications providers and other firms reach a new level of success. Our success is dependent on really talented people - just like you - who thrive on the collaboration and innovation that's nurtured by a diverse and inclusive environment. We'll provide the support you need, while ensuring you have the freedom to develop your skills and grow your career. Join FICO and help change the way business thinks! Learn more about how you can fulfil your potential at FICO. FICO promotes a culture of inclusion and seeks to attract a diverse set of candidates for each job opportunity. We are an equal employment opportunity employer and we're proud to offer employment and advancement opportunities to all candidates without regard to race, color, ancestry, religion, sex, national origin, pregnancy, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. Research has shown that women and candidates from underrepresented communities may not apply for an opportunity if they don't meet all stated qualifications. While our qualifications are clearly related to role success, each candidate's profile is unique and strengths in certain skill and/or experience areas can be equally effective. If you believe you have many, but not necessarily all, of the stated qualifications we encourage you to apply. Information submitted with your application is subject to the FICO Privacy policy.
Posted 2 days ago
2.0 - 4.0 years
2 - 4 Lacs
Hyderabad, Telangana, India
On-site
Let's do this. Let's change the world. This role supports the day-to-day operation and maintenance of server systems and infrastructure. The engineer will perform tasks such as system monitoring, security patching, automation, and troubleshooting under guidance from senior team members, contributing to uptime and compliance. The ideal candidate will have a consistent record in Compute (WINTEL) Infrastructure Operations and a passion for fostering innovation and excellence in the biotechnology industry. Additionally, collaboration with multi-functional and global teams is required to ensure seamless integration and operational excellence. The ideal candidate will have a solid background in Windows Server service delivery and operations. This role demands the ability to drive and deliver against key organizational initiatives, foster a collaborative environment, and deliver high-quality results in a matrixed organizational structure. Please note, this is an on-site role based in Hyderabad. Responsibilities: Administer, monitor, and support Windows Server. Maintain and monitor server security and compliance standards. Monitor server health and generate routine reports. Perform patching, updates, and backups to ensure compliance. Support incident resolution and troubleshooting. Document changes and standard operating procedures. What we expect of you: We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications and Experience: Bachelor's degree with 2-4 years of experience OR Master's degree with 1-2 years of experience OR Diploma with 5+ years of relevant experience. Functional Skills: Must-Have Skills: Experience with Windows Server operations. Experience with virtualization (VMware, Acropolis). Basic scripting knowledge (PowerShell, Ansible, Python). Familiarity with ITIL or incident management workflows. Change management expertise. Excellent troubleshooting and problem-solving skills. Good-to-Have Skills: Exposure to cloud environments (AWS, Azure). Interest in automation tools and infrastructure-as-code. Exposure to monitoring tools (Dynatrace). Knowledge of configuration management tools (Ansible, SSM, or MECM). Professional Certifications: CompTIA Server+ or Microsoft Fundamentals (preferred). Soft Skills: Meticulous and organized. Effective communicator. Ability to follow procedures accurately. Willingness to learn and grow.
Posted 4 days ago
0.0 - 5.0 years
0 - 3 Lacs
Chennai
Work from Office
This is an urgent and fast-filling position - need immediate joiners OR less than 1 month notice period. We are looking for: 1) Junior AI/ML Engineer - 2 positions open. 2) Mid-level AI/ML Engineer - 1 position open. 3) Lead AI/ML Engineer - 1 position open. Location: Ambattur, Chennai. Full-time position. Job Summary: We are looking for an AI/ML Engineer to develop, optimize, and deploy machine learning models for real-world applications. You will work on end-to-end ML pipelines, collaborate with cross-functional teams, and apply AI techniques such as NLP, Computer Vision, and Time-Series Forecasting. This role offers opportunities to work on cutting-edge AI solutions while growing your expertise in model deployment and optimization. Key Responsibilities: Design, build, and optimize machine learning models for various business applications. Develop and maintain ML pipelines, including data preprocessing, feature engineering, and model training. Work with TensorFlow, PyTorch, Scikit-learn, and Keras for model development. Deploy ML models in cloud environments (AWS, Azure, GCP) and work with Docker/Kubernetes for containerization. Perform model evaluation, hyperparameter tuning, and performance optimization. Collaborate with data scientists, engineers, and product teams to deliver AI-driven solutions. Stay up to date with the latest advancements in AI/ML and implement best practices. Write clean, scalable, and well-documented code in Python or R. Technical Skills: Programming Languages: Proficiency in languages like Python. Python is particularly popular for developing ML models and AI algorithms due to its simplicity and extensive libraries like NumPy, Pandas, and Scikit-learn. Machine Learning Algorithms: Should have a deep understanding of supervised learning (linear regression, decision trees, SVM), unsupervised learning, and reinforcement learning. Data Management and Analysis: Skills in data cleaning, feature engineering, and data transformation are crucial.
Deep Learning: Familiarity with neural networks, CNNs, RNNs, and other architectures is important. Machine Learning Frameworks and Libraries: Experience with TensorFlow, PyTorch, Keras, or Scikit-learn is valuable. Natural Language Processing (NLP): Familiarity with NLP techniques like word2vec, sentiment analysis, and summarization can be beneficial. Cloud Computing: Experience with cloud-based services like AWS SageMaker, Google Cloud AI Platform, or Microsoft Azure Machine Learning. Data Preprocessing: Skills in handling missing data, data normalization, feature scaling, and data transformation. Feature Engineering: Ability to create new features from existing data to improve model performance. Data Visualization: Familiarity with visualization tools like Matplotlib, Seaborn, Plotly, or Tableau. Containerization: Knowledge of containerization tools like Docker and Kubernetes. Databases: Understanding of relational databases (e.g., MySQL) and NoSQL databases (e.g., MongoDB). Data Warehousing: Familiarity with data warehousing concepts and tools like Amazon Redshift or Google BigQuery. Computer Vision: Understanding of computer vision concepts and techniques like object detection, segmentation, and image classification. Reinforcement Learning: Knowledge of reinforcement learning concepts and techniques like Q-learning and policy gradients.
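The pipeline responsibilities above (data preprocessing, feature engineering, model training) can be sketched end-to-end even without a framework. A minimal, illustrative Python example using only the standard library; the min-max scaling step and closed-form least-squares fit stand in for what Scikit-learn or TensorFlow would do at scale, and the toy data is invented for illustration:

```python
# Minimal sketch of an ML pipeline: preprocessing (min-max scaling),
# then training a one-feature linear model via closed-form least squares.
# Stdlib only; data and names are illustrative, not a production pipeline.

def min_max_scale(values):
    """Scale a list of numbers into [0, 1]."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero on constant features
    return [(v - lo) / span for v in values]

def fit_linear(xs, ys):
    """Return (slope, intercept) minimising squared error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    slope = cov / var
    return slope, my - slope * mx

# Toy "pipeline": scale the raw feature, then fit the model.
raw_x = [10.0, 20.0, 30.0, 40.0]
y = [1.0, 2.0, 3.0, 4.0]
x = min_max_scale(raw_x)             # preprocessing step
slope, intercept = fit_linear(x, y)  # training step
```

On the toy data the scaled feature relates to the target as y = 3x + 1, so the fitted slope and intercept recover 3 and 1; real pipelines replace each step with library components but keep the same shape.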
Posted 6 days ago
2.0 - 5.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Job Description: Education: Bachelor's/Master's degree in Computer Science or equivalent. Mandatory Skills: 2-5 years of hands-on software engineering experience. Excellent hands-on experience in Java (17+, 21 preferred). Experience in developing Spring Boot and REST services. Experience in unit test frameworks. Ability to provide solutions based on the business requirements. Ability to collaborate with cross-functional teams. Ability to work with global teams and a flexible work schedule. Must have excellent problem-solving skills and be customer-centric. Excellent communication skills. Preferred Skills: Experience with Microservices, CI/CD, Event-Oriented Architectures, and Distributed Systems. Experience with cloud environments (e.g., Google Cloud Platform, Azure, Amazon Web Services, etc.). Familiarity with web technologies (e.g., JavaScript, HTML, CSS), data manipulation (e.g., SQL), and version control systems (e.g., GitHub/GitLab). Familiarity with DevOps practices/principles, Agile/Scrum methodologies, CI/CD pipelines, and the product development lifecycle. Familiarity with modern web APIs and full-stack frameworks. Experience with Kubernetes, Kafka, PostgreSQL. Experience developing eCommerce systems - especially B2B eCommerce - is a plus.
Posted 1 week ago
3.0 - 4.0 years
3 - 4 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
AI Model Deployment & Integration: Deploy and manage AI/ML models, including traditional machine learning and GenAI solutions (e.g., LLMs, RAG systems). Implement automated CI/CD pipelines for seamless deployment and scaling of AI models. Ensure efficient model integration into existing enterprise applications and workflows in collaboration with AI Engineers. Optimize AI infrastructure for performance and cost efficiency in cloud environments (AWS, Azure, GCP). Monitoring & Performance Management: Develop and implement monitoring solutions to track model performance, latency, drift, and cost metrics. Set up alerts and automated workflows to manage performance degradation and retraining triggers. Ensure responsible AI by monitoring for issues such as bias, hallucinations, and security vulnerabilities in GenAI outputs. Collaborate with Data Scientists to establish feedback loops for continuous model improvement. Automation & MLOps Best Practices: Establish scalable MLOps practices to support the continuous deployment and maintenance of AI models. Automate model retraining, versioning, and rollback strategies to ensure reliability and compliance. Utilize infrastructure-as-code (Terraform, CloudFormation) to manage AI pipelines. Security & Compliance: Implement security measures to prevent prompt injections, data leakage, and unauthorized model access. Work closely with compliance teams to ensure AI solutions adhere to privacy and regulatory standards (HIPAA, GDPR). Regularly audit AI pipelines for ethical AI practices and data governance. Collaboration & Process Improvement: Work closely with AI Engineers, Product Managers, and IT teams to align AI operational processes with business needs. Contribute to the development of AI Ops documentation, playbooks, and best practices. Continuously evaluate emerging GenAI operational tools and processes to drive innovation. 
Qualifications & Skills: Education: Bachelor's or Master's degree in Computer Science, Data Engineering, AI, or a related field. Relevant certifications in cloud platforms (AWS, Azure, GCP) or MLOps frameworks are a plus. Experience: 3+ years of experience in AI/ML operations, MLOps, or DevOps for AI-driven solutions. Hands-on experience deploying and managing AI models, including LLMs and GenAI solutions, in production environments. Experience working with cloud AI platforms such as Azure AI, AWS SageMaker, or Google Vertex AI. Technical Skills: Proficiency in MLOps tools and frameworks such as MLflow, Kubeflow, or Airflow. Hands-on experience with monitoring tools (Prometheus, Grafana, ELK Stack) for AI performance tracking. Experience with containerization and orchestration tools (Docker, Kubernetes) to support AI workloads. Familiarity with automation scripting using Python, Bash, or PowerShell. Understanding of GenAI-specific operational challenges such as response monitoring, token management, and prompt optimization. Knowledge of CI/CD pipelines (Jenkins, GitHub Actions) for AI model deployment. Strong understanding of AI security principles, including data privacy and governance considerations.
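Among the monitoring responsibilities listed above, model drift detection is often implemented by comparing a live feature distribution against a training-time baseline. A hedged sketch using the Population Stability Index (PSI); the four-bucket histogram and the 0.2 alert threshold are common rules of thumb assumed for illustration, not values taken from this posting:

```python
import math

def psi(expected, actual, buckets=4):
    """Population Stability Index between two samples of one feature.
    Rule of thumb (assumed here): PSI > 0.2 signals significant drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = ((hi - lo) / buckets) or 1.0

    def bucket_fractions(sample):
        counts = [0] * buckets
        for v in sample:
            i = min(int((v - lo) / width), buckets - 1)
            counts[i] += 1
        # A small floor keeps log() finite for empty buckets.
        return [max(c / len(sample), 1e-6) for c in counts]

    e = bucket_fractions(expected)
    a = bucket_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Illustrative data: a training baseline vs. two live windows.
baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
live_ok  = [0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.8]
live_bad = [0.7, 0.75, 0.8, 0.8, 0.85, 0.9, 0.9, 0.95]

drifted = psi(baseline, live_bad) > 0.2  # would trigger a retraining alert
```

In an MLOps stack the same comparison would run on a schedule, with the PSI value exported to a metrics system (e.g. Prometheus) and the threshold breach wired to the retraining trigger described above.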
Posted 1 week ago
2.0 - 5.0 years
2 - 5 Lacs
Ahmedabad, Gujarat, India
On-site
Roles and Responsibilities: Technical Skills: Good knowledge of Linux and Cloud environments (AWS, Azure). Able to write scripts in Python, Shell, Ruby, etc. Should be aware of Docker, Kubernetes, Terraform, or similar technologies (nice to have). Ability to use version control tools (Git, GitLab). Knowledge of monitoring tools like Nginx, Instana, AppDynamics, etc. Able to document every action so findings turn into repeatable actions and then into automation. Keep the production environment up and running. Design, build, and maintain core infrastructure pieces. Able to debug and fix issues in case of failures within SLA. Passionate about learning new technologies as per project requirements. Excellent thinking and problem-solving skills. Work closely with customers and internal teams to follow the processes. Ready to work in a 24x7 operational environment (rotational basis). Qualification: B.E., B.Tech, MCA, or Diploma in Computer/IT. Azure/AWS/Linux certification (nice to have).
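The posting's emphasis on documenting findings so they "turn into repeatable actions and then into automation" can be illustrated by turning a manual runbook check into a script. A small Python sketch of a disk-usage health check; the 90% threshold and the checked path are assumptions for illustration, and a real version would feed a monitoring or alerting system rather than return a list:

```python
import shutil

def disk_usage_report(paths, threshold=0.9):
    """Return ({path: fraction_used}, [paths over threshold]).
    A tiny example of a manual runbook check made repeatable as code.
    The 0.9 alert threshold is an illustrative assumption."""
    report = {}
    for p in paths:
        total, used, _free = shutil.disk_usage(p)
        report[p] = used / total
    alerts = [p for p, frac in report.items() if frac > threshold]
    return report, alerts

# Check the root filesystem; in practice the path list would come
# from configuration and alerts would page the on-call engineer.
report, alerts = disk_usage_report(["/"])
```

The same pattern (measure, compare to threshold, emit alerts) extends naturally to service health checks, certificate expiry, or patch-compliance scans.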
Posted 2 weeks ago
4.0 - 7.0 years
1 - 2 Lacs
Chennai
Work from Office
This is an urgent and fast-filling position - need immediate joiners OR less than 1 month notice period. AI/ML Engineer. Location: Chennai. Job Summary: We are looking for a Senior AI/ML Engineer to develop, optimize, and deploy machine learning models for real-world applications. You will work on end-to-end ML pipelines, collaborate with cross-functional teams, and apply AI techniques such as NLP, Computer Vision, and Time-Series Forecasting. This role offers opportunities to work on cutting-edge AI solutions while growing your expertise in model deployment and optimization. Role & Responsibilities: Design, build, and optimize machine learning models for various business applications. Develop and maintain ML pipelines, including data preprocessing, feature engineering, and model training. Work with TensorFlow, PyTorch, Scikit-learn, and Keras for model development. Deploy ML models in cloud environments (AWS, Azure, GCP) and work with Docker/Kubernetes for containerization. Perform model evaluation, hyperparameter tuning, and performance optimization. Collaborate with data scientists, engineers, and product teams to deliver AI-driven solutions. Stay up to date with the latest advancements in AI/ML and implement best practices. Write clean, scalable, and well-documented code in Python or R. Technical Skills: Programming Languages: Proficiency in languages like Python. Python is particularly popular for developing ML models and AI algorithms due to its simplicity and extensive libraries like NumPy, Pandas, and Scikit-learn. Machine Learning Algorithms: Should have a deep understanding of supervised learning (linear regression, decision trees, SVM), unsupervised learning, and reinforcement learning. Data Management and Analysis: Skills in data cleaning, feature engineering, and data transformation are crucial. Deep Learning: Familiarity with neural networks, CNNs, RNNs, and other architectures is important.
Machine Learning Frameworks and Libraries: Experience with TensorFlow, PyTorch, Keras, or Scikit-learn is valuable. Natural Language Processing (NLP): Familiarity with NLP techniques like word2vec, sentiment analysis, and summarization can be beneficial. Cloud Computing: Experience with cloud-based services like AWS SageMaker, Google Cloud AI Platform, or Microsoft Azure Machine Learning. Data Preprocessing: Skills in handling missing data, data normalization, feature scaling, and data transformation. Feature Engineering: Ability to create new features from existing data to improve model performance. Data Visualization: Familiarity with visualization tools like Matplotlib, Seaborn, Plotly, or Tableau. Containerization: Knowledge of containerization tools like Docker and Kubernetes. Databases: Understanding of relational databases (e.g., MySQL) and NoSQL databases (e.g., MongoDB). Data Warehousing: Familiarity with data warehousing concepts and tools like Amazon Redshift or Google BigQuery. Computer Vision: Understanding of computer vision concepts and techniques like object detection, segmentation, and image classification. Reinforcement Learning: Knowledge of reinforcement learning concepts and techniques like Q-learning and policy gradients.
Posted 3 weeks ago
8.0 - 12.0 years
8 - 12 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Job Summary: What will you enjoy in this role? You will focus on designing, developing, and supporting all our online data solutions, working closely with business managers to design and build innovative solutions. What you'll do: We seek Software Engineers with experience building and scaling services in on-premises and cloud environments. As a Lead Engineer in the Epsilon Attribution/Forecasting Product Development team, you will play a key role in implementing and optimizing advanced data processing solutions using Scala, Spark, and Hadoop. You will collaborate with cross-functional teams to deploy scalable big data solutions on our on-premises and cloud infrastructure. Your responsibilities will include building, scheduling, and maintaining complex workflows, as well as performing data integration and transformation tasks. You will troubleshoot issues, document processes, and communicate technical concepts clearly to both technical and non-technical stakeholders. Additionally, you will focus on continuously enhancing our attribution and forecasting engines, ensuring they effectively meet evolving business needs. Strong written and verbal communication skills (in English) are required to facilitate work across multiple countries and time zones. Good understanding of Agile methodologies - SCRUM. Qualifications: Over 8 years of strong experience in Scala programming and extensive use of Apache Spark for developing and maintaining scalable big data solutions in both on-premises and cloud environments, particularly AWS and GCP. Proficient in performance tuning of Spark jobs, optimizing resource usage, shuffling, partitioning, and caching for maximum efficiency. Skilled in implementing scalable, fault-tolerant data pipelines with comprehensive monitoring and alerting. Hands-on experience with Python for developing infrastructure modules. Deep understanding of the Hadoop ecosystem, including HDFS, YARN, and MapReduce. Proficient in writing efficient SQL queries for handling large volumes of data in various database systems. Experienced in building, scheduling, and maintaining DAG workflows. Familiar with data warehousing concepts and technologies. Capable of taking end-to-end ownership in defining, developing, and documenting software objectives and requirements in collaboration with stakeholders. Experienced with Git or equivalent source control systems. Proficient in developing and implementing unit test cases to ensure code quality and reliability, and experienced in utilizing integration testing frameworks to validate system interactions. Effective collaborator with stakeholders and teams to understand requirements and develop solutions. Ability to work within tight deadlines, prioritize tasks effectively, and perform under pressure. Experience in mentoring junior staff. Advantageous to have experience with the below: Hands-on with Databricks for unified data analytics, including Databricks Notebooks, Delta Lake, and Catalogs. Proficiency in using the ELK (Elasticsearch, Logstash, Kibana) stack for real-time search, log analysis, and visualization. Strong background in analytics, including the ability to derive actionable insights from large datasets and support data-driven decision-making. Experience with data visualization tools like Tableau, Power BI, or Grafana. Familiarity with Docker for containerization and Kubernetes for orchestration.
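The "building, scheduling, and maintaining DAG workflows" qualification comes down to executing dependent tasks in topological order. A minimal Python sketch using Kahn's algorithm; the task names are illustrative, and a real deployment would delegate scheduling to a workflow engine rather than hand-roll it:

```python
from collections import deque

def topo_order(deps):
    """Kahn's algorithm: return tasks in an order that respects `deps`,
    a mapping of task -> list of upstream tasks it depends on.
    Raises ValueError if the workflow graph contains a cycle."""
    tasks = set(deps) | {d for ds in deps.values() for d in ds}
    indegree = {t: 0 for t in tasks}
    dependents = {t: [] for t in tasks}
    for task, ds in deps.items():
        for d in ds:
            indegree[task] += 1
            dependents[d].append(task)
    # Start from tasks with no unmet dependencies; sort for determinism.
    ready = deque(sorted(t for t in tasks if indegree[t] == 0))
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for nxt in sorted(dependents[t]):
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(tasks):
        raise ValueError("cycle detected in workflow DAG")
    return order

# Illustrative attribution-pipeline tasks and their upstream dependencies.
pipeline = {
    "extract": [],
    "transform": ["extract"],
    "train_model": ["transform"],
    "report": ["train_model", "transform"],
}
order = topo_order(pipeline)
```

The cycle check matters in practice: a mis-declared dependency should fail the workflow definition at build time, not hang the scheduler at run time.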
Posted 3 weeks ago
3.0 - 4.0 years
22 - 25 Lacs
Bengaluru
Work from Office
Key Responsibilities AI Model Deployment & Integration: Deploy and manage AI/ML models, including traditional machine learning and GenAI solutions (e.g., LLMs, RAG systems). Implement automated CI/CD pipelines for seamless deployment and scaling of AI models. Ensure efficient model integration into existing enterprise applications and workflows in collaboration with AI Engineers. Optimize AI infrastructure for performance and cost efficiency in cloud environments (AWS, Azure, GCP). Monitoring & Performance Management: Develop and implement monitoring solutions to track model performance, latency, drift, and cost metrics. Set up alerts and automated workflows to manage performance degradation and retraining triggers. Ensure responsible AI by monitoring for issues such as bias, hallucinations, and security vulnerabilities in GenAI outputs. Collaborate with Data Scientists to establish feedback loops for continuous model improvement. Automation & MLOps Best Practices: Establish scalable MLOps practices to support the continuous deployment and maintenance of AI models. Automate model retraining, versioning, and rollback strategies to ensure reliability and compliance. Utilize infrastructure-as-code (Terraform, CloudFormation) to manage AI pipelines. Security & Compliance: Implement security measures to prevent prompt injections, data leakage, and unauthorized model access. Work closely with compliance teams to ensure AI solutions adhere to privacy and regulatory standards (HIPAA, GDPR). Regularly audit AI pipelines for ethical AI practices and data governance. Collaboration & Process Improvement: Work closely with AI Engineers, Product Managers, and IT teams to align AI operational processes with business needs. Contribute to the development of AI Ops documentation, playbooks, and best practices. Continuously evaluate emerging GenAI operational tools and processes to drive innovation. 
Qualifications & Skills Education: Bachelor's or Master's degree in Computer Science, Data Engineering, AI, or a related field. Relevant certifications in cloud platforms (AWS, Azure, GCP) or MLOps frameworks are a plus. Experience: 3+ years of experience in AI/ML operations, MLOps, or DevOps for AI-driven solutions. Hands-on experience deploying and managing AI models, including LLMs and GenAI solutions, in production environments. Experience working with cloud AI platforms such as Azure AI, AWS SageMaker, or Google Vertex AI. Technical Skills: Proficiency in MLOps tools and frameworks such as MLflow, Kubeflow, or Airflow. Hands-on experience with monitoring tools (Prometheus, Grafana, ELK Stack) for AI performance tracking. Experience with containerization and orchestration tools (Docker, Kubernetes) to support AI workloads. Familiarity with automation scripting using Python, Bash, or PowerShell. Understanding of GenAI-specific operational challenges such as response monitoring, token management, and prompt optimization. Knowledge of CI/CD pipelines (Jenkins, GitHub Actions) for AI model deployment. Strong understanding of AI security principles, including data privacy and governance considerations.
Posted 4 weeks ago
8.0 - 10.0 years
30 - 35 Lacs
Pune
Work from Office
Are you an expert at optimizing Cassandra for speed, reliability, and resilience? Do you excel at designing and scaling high-performance distributed databases? We're seeking a seasoned architect to play a key role in designing and maintaining robust database environments and leading integration initiatives across platforms. In this role, you'll be at the forefront of building high-performing, scalable, and secure systems that support seamless data exchange, leveraging APIs, event-driven flows, and real-time messaging. Your contributions will power automation, performance, and data-driven decision-making across the enterprise. What You'll Do: Database Architecture & Operations: Architect, deploy, and monitor distributed database clusters (MySQL, Cassandra, MongoDB, PostgreSQL), ensuring high availability and performance. Perform advanced tuning, query optimization, and proactive issue resolution. Design and implement strategies for database backup, replication, disaster recovery, and data lifecycle management. Support MySQL systems for operational scalability and enterprise-level performance. Enterprise Integration & EDI Framework: Lead and support a robust EDI/EI framework to integrate backend systems through real-time APIs and asynchronous data pipelines. Standardize data contracts, routing logic, transformation processes, and messaging patterns across systems and partners. Replace legacy batch processing with scalable, event-based integrations using messaging frameworks (e.g., Kafka). API & Event-Driven Architecture: Design and optimize RESTful APIs and event-based services to enable real-time data exchange across internal and third-party systems. Build and manage ETL data flows into microservices or message queues, abstracting from underlying source technologies. Data Pipeline Optimization & Automation: Identify bottlenecks in data pipelines and drive performance improvements through modeling and intelligent automation.
Use AI/ML where applicable to structure unstructured data and enhance automation in transformation, reporting, and alerting workflows. Cloud & DevOps Enablement: Architect and manage cloud-native database deployments on AWS, Azure, or GCP using automation tools (Terraform, Ansible). Collaborate with DevOps teams to integrate database schema/version control and data integration into CI/CD pipelines. Ensure secure, compliant, and cost-optimized cloud architecture. Enterprise Programs & Collaboration: Contribute to strategic initiatives such as the NexGen Program, ensuring that database and integration designs align with performance, scalability, and long-term architectural goals. Work closely with application architects, developers, and QA teams to co-design and review data-driven solutions. What You'll Need: Bachelor's degree in Computer Science or a related field. 8–10 years of experience in database architecture, administration, and integrations. Proven hands-on experience with Cassandra DB, MongoDB, MySQL, PostgreSQL. Certifications in database management or NoSQL databases are preferred. Expertise in database setup, performance tuning, and issue resolution. Strong knowledge of SQL/CQL, data modeling, and distributed systems architecture. Hands-on experience with multi-data-center deployments and monitoring tools such as New Relic. Experience in scripting and automation (Python, Java, Bash, Ansible). Experience managing Cassandra in cloud environments (AWS, GCP, or Azure). Familiarity with DevOps practices, containerization tools (Docker/Kubernetes), and data streaming platforms (Kafka, Spark). Experience standardizing enterprise integration approaches across business units. Solid understanding of data privacy, security, and regulatory compliance. Fluency in English (written and spoken) is required, as it is the corporate language across our global teams. Here's What We Offer: At Scan-IT, we pride ourselves on our vibrant and supportive culture.
Join our dynamic, international team and take on meaningful responsibilities from day one. Innovative Environment : Explore new technologies in the transportation and logistics industry. Collaborative Culture : Work with some of the industry’s best in an open and creative environment. Professional Growth : Benefit from continuous learning, mentorship, and career advancement. Impactful Work : Enhance efficiency and drive global success. Inclusive Workplace : Thrive in a diverse and supportive environment. Competitive Compensation : Receive a salary that reflects your expertise. Growth Opportunities : Achieve your full potential with ample professional and personal development opportunities. Join Scan-IT and be part of a team that’s shaping the future of the transportation and logistics industry. Visit www.scan-it.com.sg and follow us on LinkedIn, Facebook and X.
Posted 1 month ago
7 - 12 years
1 - 2 Lacs
Chennai
Work from Office
This is an urgent and fast-filling position - need immediate joiners OR less than 1 month notice period. Senior AI/ML Engineer. Location: Chennai. Experience: 7+ years. Job Summary: We are looking for a Senior AI/ML Engineer to develop, optimize, and deploy machine learning models for real-world applications. You will work on end-to-end ML pipelines, collaborate with cross-functional teams, and apply AI techniques such as NLP, Computer Vision, and Time-Series Forecasting. This role offers opportunities to work on cutting-edge AI solutions while growing your expertise in model deployment and optimization. Role & Responsibilities: Design, build, and optimize machine learning models for various business applications. Develop and maintain ML pipelines, including data preprocessing, feature engineering, and model training. Work with TensorFlow, PyTorch, Scikit-learn, and Keras for model development. Deploy ML models in cloud environments (AWS, Azure, GCP) and work with Docker/Kubernetes for containerization. Perform model evaluation, hyperparameter tuning, and performance optimization. Collaborate with data scientists, engineers, and product teams to deliver AI-driven solutions. Stay up to date with the latest advancements in AI/ML and implement best practices. Write clean, scalable, and well-documented code in Python or R.
Posted 1 month ago