Get alerts for new jobs matching your selected skills, preferred locations, and experience range. Manage Job Alerts
6.0 years
0 Lacs
Raipur, Chhattisgarh, India
Remote
Experience : 6.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Netskope) What do you need for this opportunity? Must have skills required: Airflow, LLMs, MLOps, Generative AI, Python Netskope is Looking for: About The Role Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault tolerant solutions to their ever growing data needs. This is a hands-on, impactful role that will help lead development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments. What's In It For You You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics Your contributions will have a major impact on our global customer-base and across the industry through our market-leading products You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills. What You Will Be Doing Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security. Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments. Apply MLOps best practices to deploy and monitor machine learning models in production. Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP. Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications. Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats. Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards. Drive innovation by integrating the latest AI/ML techniques into security products and services. Mentor junior engineers and provide technical leadership across projects. Required Skills And Experience AI/ML Expertise Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection. Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn. Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow). Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases. Data Engineering Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing. Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink). Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery. Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems. Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake. Cloud and Security Knowledge Strong understanding of cloud platforms (AWS, Azure, GCP) and their services. Experience with network security concepts, extended detection and response, and threat modeling. Software Engineering Proficiency in Python, Java, or Scala for data and ML solution development. Expertise in scalable system design and performance optimization for high-throughput applications. Leadership and Collaboration Proven ability to lead cross-functional teams and mentor engineers. Strong communication skills to present complex technical concepts to stakeholders. Education BSCS Or Equivalent Required, MSCS Or Equivalent Strongly Preferred How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 4 days ago
6.0 years
0 Lacs
Amritsar, Punjab, India
Remote
Experience : 6.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Netskope) What do you need for this opportunity? Must have skills required: Airflow, LLMs, MLOps, Generative AI, Python Netskope is Looking for: About The Role Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault tolerant solutions to their ever growing data needs. This is a hands-on, impactful role that will help lead development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments. What's In It For You You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics Your contributions will have a major impact on our global customer-base and across the industry through our market-leading products You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills. What You Will Be Doing Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security. Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments. Apply MLOps best practices to deploy and monitor machine learning models in production. Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP. Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications. Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats. Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards. Drive innovation by integrating the latest AI/ML techniques into security products and services. Mentor junior engineers and provide technical leadership across projects. Required Skills And Experience AI/ML Expertise Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection. Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn. Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow). Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases. Data Engineering Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing. Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink). Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery. Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems. Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake. Cloud and Security Knowledge Strong understanding of cloud platforms (AWS, Azure, GCP) and their services. Experience with network security concepts, extended detection and response, and threat modeling. Software Engineering Proficiency in Python, Java, or Scala for data and ML solution development. Expertise in scalable system design and performance optimization for high-throughput applications. Leadership and Collaboration Proven ability to lead cross-functional teams and mentor engineers. Strong communication skills to present complex technical concepts to stakeholders. Education BSCS Or Equivalent Required, MSCS Or Equivalent Strongly Preferred How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 4 days ago
6.0 years
0 Lacs
Jaipur, Rajasthan, India
Remote
Experience : 6.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Netskope) What do you need for this opportunity? Must have skills required: Airflow, LLMs, MLOps, Generative AI, Python Netskope is Looking for: About The Role Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault tolerant solutions to their ever growing data needs. This is a hands-on, impactful role that will help lead development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments. What's In It For You You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics Your contributions will have a major impact on our global customer-base and across the industry through our market-leading products You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills. What You Will Be Doing Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security. Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments. Apply MLOps best practices to deploy and monitor machine learning models in production. Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP. Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications. Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats. Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards. Drive innovation by integrating the latest AI/ML techniques into security products and services. Mentor junior engineers and provide technical leadership across projects. Required Skills And Experience AI/ML Expertise Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection. Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn. Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow). Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases. Data Engineering Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing. Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink). Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery. Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems. Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake. Cloud and Security Knowledge Strong understanding of cloud platforms (AWS, Azure, GCP) and their services. Experience with network security concepts, extended detection and response, and threat modeling. Software Engineering Proficiency in Python, Java, or Scala for data and ML solution development. Expertise in scalable system design and performance optimization for high-throughput applications. Leadership and Collaboration Proven ability to lead cross-functional teams and mentor engineers. Strong communication skills to present complex technical concepts to stakeholders. Education BSCS Or Equivalent Required, MSCS Or Equivalent Strongly Preferred How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 4 days ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
You will be joining a company that values innovation and maintains an open, friendly culture while benefiting from the support of a well-established parent company with a strong ethical reputation. The company is dedicated to guiding customers towards the future by leveraging the potential of their data and applications to address digital challenges, ultimately delivering positive outcomes for both business and society. As an Infor M3 Support professional, you will play a crucial role in providing technical and functional support for the Infor M3 Cloud platform. Your responsibilities will include expertise in M3 integrations, data engineering, analytics, and cloud technologies. You will be primarily involved in L2/L3 technical support, minor enhancements, API management, and data pipeline optimizations to ensure smooth business operations across various functions such as Manufacturing, Supply Chain, Procurement, Sales, and Finance within the Food & Beverage industry. Key Requirements: - Proficiency in API Services & Integration Management, including platforms such as Azure, AWS, Kafka, and EDI - Ability to design and maintain data pipelines using tools like Azure Synapse, Databricks, or AWS Glue - Experience in supporting the personalization of M3 UI elements such as Homepages, Smart Office, Enterprise Search, and XtendM3 If you have a minimum of 8 years of experience and can join immediately, this role based in PAN India (Work From Office) could be the next step in your career.,
Posted 4 days ago
6.0 years
0 Lacs
Greater Lucknow Area
Remote
Experience : 6.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Netskope) What do you need for this opportunity? Must have skills required: Airflow, LLMs, MLOps, Generative AI, Python Netskope is Looking for: About The Role Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault tolerant solutions to their ever growing data needs. This is a hands-on, impactful role that will help lead development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments. What's In It For You You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics Your contributions will have a major impact on our global customer-base and across the industry through our market-leading products You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills. What You Will Be Doing Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security. Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments. Apply MLOps best practices to deploy and monitor machine learning models in production. Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP. Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications. Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats. Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards. Drive innovation by integrating the latest AI/ML techniques into security products and services. Mentor junior engineers and provide technical leadership across projects. Required Skills And Experience AI/ML Expertise Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection. Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn. Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow). Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases. Data Engineering Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing. Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink). Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery. Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems. Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake. Cloud and Security Knowledge Strong understanding of cloud platforms (AWS, Azure, GCP) and their services. Experience with network security concepts, extended detection and response, and threat modeling. Software Engineering Proficiency in Python, Java, or Scala for data and ML solution development. Expertise in scalable system design and performance optimization for high-throughput applications. Leadership and Collaboration Proven ability to lead cross-functional teams and mentor engineers. Strong communication skills to present complex technical concepts to stakeholders. Education BSCS Or Equivalent Required, MSCS Or Equivalent Strongly Preferred How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 4 days ago
6.0 years
0 Lacs
Thane, Maharashtra, India
Remote
Experience : 6.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Netskope) What do you need for this opportunity? Must have skills required: Airflow, LLMs, MLOps, Generative AI, Python Netskope is Looking for: About The Role Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault tolerant solutions to their ever growing data needs. This is a hands-on, impactful role that will help lead development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments. What's In It For You You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics Your contributions will have a major impact on our global customer-base and across the industry through our market-leading products You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills. What You Will Be Doing Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security. Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments. Apply MLOps best practices to deploy and monitor machine learning models in production. Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP. Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications. Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats. Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards. Drive innovation by integrating the latest AI/ML techniques into security products and services. Mentor junior engineers and provide technical leadership across projects. Required Skills And Experience AI/ML Expertise Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection. Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn. Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow). Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases. Data Engineering Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing. Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink). Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery. Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems. Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake. Cloud and Security Knowledge Strong understanding of cloud platforms (AWS, Azure, GCP) and their services. Experience with network security concepts, extended detection and response, and threat modeling. Software Engineering Proficiency in Python, Java, or Scala for data and ML solution development. Expertise in scalable system design and performance optimization for high-throughput applications. Leadership and Collaboration Proven ability to lead cross-functional teams and mentor engineers. Strong communication skills to present complex technical concepts to stakeholders. Education BSCS Or Equivalent Required, MSCS Or Equivalent Strongly Preferred How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 4 days ago
6.0 years
0 Lacs
Nagpur, Maharashtra, India
Remote
Experience : 6.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Netskope) What do you need for this opportunity? Must have skills required: Airflow, LLMs, MLOps, Generative AI, Python Netskope is Looking for: About The Role Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault tolerant solutions to their ever growing data needs. This is a hands-on, impactful role that will help lead development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments. What's In It For You You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics Your contributions will have a major impact on our global customer-base and across the industry through our market-leading products You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills. What You Will Be Doing Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security. Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments. Apply MLOps best practices to deploy and monitor machine learning models in production. Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP. Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications. Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats. Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards. Drive innovation by integrating the latest AI/ML techniques into security products and services. Mentor junior engineers and provide technical leadership across projects. Required Skills And Experience AI/ML Expertise Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection. Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn. Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow). Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases. Data Engineering Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing. Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink). Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery. Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems. Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake. Cloud and Security Knowledge Strong understanding of cloud platforms (AWS, Azure, GCP) and their services. Experience with network security concepts, extended detection and response, and threat modeling. Software Engineering Proficiency in Python, Java, or Scala for data and ML solution development. Expertise in scalable system design and performance optimization for high-throughput applications. Leadership and Collaboration Proven ability to lead cross-functional teams and mentor engineers. Strong communication skills to present complex technical concepts to stakeholders. Education BSCS Or Equivalent Required, MSCS Or Equivalent Strongly Preferred How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 4 days ago
6.0 years
0 Lacs
Nashik, Maharashtra, India
Remote
Experience : 6.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Netskope) What do you need for this opportunity? Must have skills required: Airflow, LLMs, MLOps, Generative AI, Python Netskope is Looking for: About The Role Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault tolerant solutions to their ever growing data needs. This is a hands-on, impactful role that will help lead development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments. What's In It For You You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics Your contributions will have a major impact on our global customer-base and across the industry through our market-leading products You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills. What You Will Be Doing Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security. Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments. Apply MLOps best practices to deploy and monitor machine learning models in production. Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP. Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications. Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats. Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards. Drive innovation by integrating the latest AI/ML techniques into security products and services. Mentor junior engineers and provide technical leadership across projects. Required Skills And Experience AI/ML Expertise Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection. Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn. Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow). Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases. Data Engineering Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing. Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink). Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery. Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems. Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake. Cloud and Security Knowledge Strong understanding of cloud platforms (AWS, Azure, GCP) and their services. Experience with network security concepts, extended detection and response, and threat modeling. Software Engineering Proficiency in Python, Java, or Scala for data and ML solution development. Expertise in scalable system design and performance optimization for high-throughput applications. Leadership and Collaboration Proven ability to lead cross-functional teams and mentor engineers. Strong communication skills to present complex technical concepts to stakeholders. Education BSCS Or Equivalent Required, MSCS Or Equivalent Strongly Preferred How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 4 days ago
6.0 years
0 Lacs
Kochi, Kerala, India
Remote
Experience : 6.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Netskope) What do you need for this opportunity? Must have skills required: Airflow, LLMs, MLOps, Generative AI, Python Netskope is Looking for: About The Role Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault tolerant solutions to their ever growing data needs. This is a hands-on, impactful role that will help lead development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments. What's In It For You You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics Your contributions will have a major impact on our global customer-base and across the industry through our market-leading products You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills. What You Will Be Doing Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security. Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments. Apply MLOps best practices to deploy and monitor machine learning models in production. Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP. Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications. Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats. Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards. Drive innovation by integrating the latest AI/ML techniques into security products and services. Mentor junior engineers and provide technical leadership across projects. Required Skills And Experience AI/ML Expertise Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection. Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn. Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow). Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases. Data Engineering Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing. Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink). Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery. Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems. Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake. Cloud and Security Knowledge Strong understanding of cloud platforms (AWS, Azure, GCP) and their services. Experience with network security concepts, extended detection and response, and threat modeling. Software Engineering Proficiency in Python, Java, or Scala for data and ML solution development. Expertise in scalable system design and performance optimization for high-throughput applications. Leadership and Collaboration Proven ability to lead cross-functional teams and mentor engineers. Strong communication skills to present complex technical concepts to stakeholders. Education BSCS Or Equivalent Required, MSCS Or Equivalent Strongly Preferred How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 4 days ago
6.0 years
0 Lacs
Greater Bhopal Area
Remote
Experience : 6.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Netskope) What do you need for this opportunity? Must have skills required: Airflow, LLMs, MLOps, Generative AI, Python Netskope is Looking for: About The Role Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault tolerant solutions to their ever growing data needs. This is a hands-on, impactful role that will help lead development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments. What's In It For You You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics Your contributions will have a major impact on our global customer-base and across the industry through our market-leading products You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills. What You Will Be Doing Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security. Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments. Apply MLOps best practices to deploy and monitor machine learning models in production. Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP. Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications. Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats. Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards. Drive innovation by integrating the latest AI/ML techniques into security products and services. Mentor junior engineers and provide technical leadership across projects. Required Skills And Experience AI/ML Expertise Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection. Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn. Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow). Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases. Data Engineering Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing. Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink). Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery. Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems. Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake. Cloud and Security Knowledge Strong understanding of cloud platforms (AWS, Azure, GCP) and their services. Experience with network security concepts, extended detection and response, and threat modeling. Software Engineering Proficiency in Python, Java, or Scala for data and ML solution development. Expertise in scalable system design and performance optimization for high-throughput applications. Leadership and Collaboration Proven ability to lead cross-functional teams and mentor engineers. Strong communication skills to present complex technical concepts to stakeholders. Education BSCS Or Equivalent Required, MSCS Or Equivalent Strongly Preferred How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 4 days ago
6.0 years
0 Lacs
Visakhapatnam, Andhra Pradesh, India
Remote
Experience : 6.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Netskope) What do you need for this opportunity? Must have skills required: Airflow, LLMs, MLOps, Generative AI, Python Netskope is Looking for: About The Role Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault tolerant solutions to their ever growing data needs. This is a hands-on, impactful role that will help lead development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments. What's In It For You You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics Your contributions will have a major impact on our global customer-base and across the industry through our market-leading products You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills. What You Will Be Doing Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security. Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments. Apply MLOps best practices to deploy and monitor machine learning models in production. Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP. Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications. Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats. Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards. Drive innovation by integrating the latest AI/ML techniques into security products and services. Mentor junior engineers and provide technical leadership across projects. Required Skills And Experience AI/ML Expertise Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection. Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn. Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow). Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases. Data Engineering Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing. Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink). Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery. Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems. Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake. Cloud and Security Knowledge Strong understanding of cloud platforms (AWS, Azure, GCP) and their services. Experience with network security concepts, extended detection and response, and threat modeling. Software Engineering Proficiency in Python, Java, or Scala for data and ML solution development. Expertise in scalable system design and performance optimization for high-throughput applications. Leadership and Collaboration Proven ability to lead cross-functional teams and mentor engineers. Strong communication skills to present complex technical concepts to stakeholders. Education BSCS Or Equivalent Required, MSCS Or Equivalent Strongly Preferred How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 4 days ago
8.0 - 12.0 years
10 - 20 Lacs
Chennai
Work from Office
About role : We're seeking a skilled Manager/Senior Manager of Data & Cloud Platform Operations to lead our teams and promote architectural excellence. This key role involves driving the strategy and execution of Data Engineering on the Cloud. An ideal candidate would have 8-12 years of experience in IT consulting focused on AI & Analytics on Microsoft Azure. Responsibilities Lead and manage teams responsible for design, development, and operational excellence of scalable data pipelines and data warehouses. Drive the maturity and adoption of advanced DevOps practices, including CI/CD, Infrastructure as Code (IaC), automation, comprehensive monitoring, and Site Reliability Engineering (SRE) for all cloud services. Establish and enforce stringent Security and Compliance standards across all cloud infrastructure, data operations, and applications, ensuring adherence to industry regulations and best practices. Oversee the strategic planning, deployment, and lifecycle management of Azure Infrastructure, optimizing for performance, cost, resilience, and high availability. Collaborate closely with software engineering and data science teams to ensure seamless integration and operational readiness of AI & Analytics solutions. Mentor, coach, and develop a high-performing team of DevOps engineers, data engineers, and cloud architects. Qualifications Bachelor's or Master's degree in Computer Science, or a related field. 8-12 years of experience in IT operations, data engineering, or cloud infrastructure roles, with at least 3-5 years in a Manager/Architect role. Extensive hands-on experience with Microsoft Azure services, including IaaS, PaaS, networking, security, and data services. Proven expertise in designing and managing Data Engineering Pipelines and Data Warehouses , with strong practical experience using PySpark and Kafka . Proven ability to design High-Level Architecture and create Low-Level System Designs Deep understanding and practical experience with DevOps methodologies and tools (e.g., Azure DevOps, Terraform, Ansible, Kubernetes). Strong knowledge of Security and Compliance frameworks (e.g., ISO 27001, GDPR, HIPAA) and their implementation in cloud environments. Demonstrable experience in leading platform Scaling and Cloud Cost Optimization initiatives. Advanced proficiency in SQL and Python for operational and data engineering tasks. Azure certifications (e.g., Azure Solutions Architect Expert) are highly desirable. Exceptional analytical, problem-solving, and communication skills.
Posted 4 days ago
2.0 years
0 Lacs
Delhi, India
Remote
About xAI xAI's mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. Our team is small, highly motivated, and focused on engineering excellence. This organization is for individuals who appreciate challenging themselves and thrive on curiosity. We operate with a flat organizational structure. All employees are expected to be hands-on and to contribute directly to the company's mission. Leadership is given to those who show initiative and consistently deliver excellence. Work ethic and strong prioritization skills are important. All engineers and researchers are expected to have strong communication skills. They should be able to concisely and accurately share knowledge with their teammates. About the Role As a Software Engineering Specialist on the Human Data team, you will be responsible for creating cutting-edge data to facilitate the training of large language models. Collaborating closely with technical staff, you will contribute to datasets for model training, benchmarking, and overall advancement. The Software Engineering Specialist - Human Data role is a full-time remote position. Part-time may be offered on a case-by-case basis but full-time is strongly preferred (please see the bottom of this job description for more details). Responsibilities AI model training initiatives by curating code examples, offering precise solutions, and meticulous corrections in Python, JavaScript (including ReactJS), C/C++, and Java. Evaluate and refine AI-generated code, ensuring it adheres to industry standards for efficiency, scalability, and reliability. Collaborate with cross-functional teams to enhance AI-driven coding solutions, ensuring they meet enterprise-level quality and performance benchmarks. Key Qualifications Advanced proficiency in English, both verbal and written. Strong experience in either Python or JavaScript, with a solid foundation in software development practices. Please note that for those with experience in only JavaScript, experience with ReactJS is preferred but not required. Knowledge of other languages is a strong plus.Strong grasp of computer science fundamentals like data structures, algorithms, and debugging skills. A minimum of 2 years of hands-on industry experience with a proven track record in software development and/or public proof of work (such as on GitHub). Extensive experience with a wide array of tools and systems such as Databases, SQL, Kubernetes, Spark, Kafka, gRPC, and AWS. Preferred Qualifications The ideal candidate for this role is adaptable, possesses strong logical reasoning skills, is detail-oriented, and thrives in a fast-paced work environment. Evidence of meaningful contributions to open source projects or high reputation on platforms like Stack Overflow or evidence of strong performance in programming competitions. Enthusiasm to collaboratively build the best truth-seeking AI out there! Additional Requirements Demonstrates a strong capacity to quickly adapt by learning new skills and unlearning outdated ones, thriving in dynamic and changing environments. For those who will be working from a personal device, please note your computer must be capable of running Windows 10 or macOS BigSur 11.0 or later. Location, Hourly, and Other Expectations This position is fully remote. We are unable to provide visa sponsorship. If you are based in the US, please note we are unable to hire in the states of Wyoming and Illinois at this time. You must own and have reliable access to a smartphone. Please indicate your interest in either full-time, part-time, or either in the application. Note that: Full-Time (40 hours per week): Full-time schedules are 9-5:30pm in your local time zone. The first week will be 9-5:30pm PST for onboarding. Part-Time (20-29 hours per week): While hours are flexible around your schedule, you must be committed to working at least 20 hours per week (with at least 10 of these hours worked on weekdays) and no more than 29 hours per week. Compensation and Benefits The pay for this role may range from $55/hour - $65/hour. Your actual pay will be determined on a case-by-case basis and may vary based on the following considerations: job-related knowledge and skills, education, and experience. For full-time roles, specific benefits vary by country, depending on your country of residence you may have access to medical benefits. We do not offer benefits for part-time roles. xAI is an equal opportunity employer and does not unlawfully discriminate based on race, color, religion, ethnicity, ancestry, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, age, disability, medical conditions, genetic information, marital status, military or veteran status, or any other applicable legally protected characteristics. Qualified applicants with arrest or conviction records will be considered for employment in accordance with all applicable federal, state, and local laws, including the San Francisco Fair Chance Ordinance, Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For Los Angeles County (unincorporated) Candidates: xAI reasonably believes that criminal history may have a direct, adverse and negative relationship on the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: Access to information technology systems and confidential information, including proprietary and trade secret information, and/or user data; Interacting with internal and/or external clients and colleagues; and Exercising sound judgment. California Consumer Privacy Act (CCPA) Notice
Posted 4 days ago
2.0 years
0 Lacs
Telangana, India
Remote
About xAI xAI's mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. Our team is small, highly motivated, and focused on engineering excellence. This organization is for individuals who appreciate challenging themselves and thrive on curiosity. We operate with a flat organizational structure. All employees are expected to be hands-on and to contribute directly to the company's mission. Leadership is given to those who show initiative and consistently deliver excellence. Work ethic and strong prioritization skills are important. All engineers and researchers are expected to have strong communication skills. They should be able to concisely and accurately share knowledge with their teammates. About the Role As a Software Engineering Specialist on the Human Data team, you will be responsible for creating cutting-edge data to facilitate the training of large language models. Collaborating closely with technical staff, you will contribute to datasets for model training, benchmarking, and overall advancement. The Software Engineering Specialist - Human Data role is a full-time remote position. Part-time may be offered on a case-by-case basis but full-time is strongly preferred (please see the bottom of this job description for more details). Responsibilities AI model training initiatives by curating code examples, offering precise solutions, and meticulous corrections in Python, JavaScript (including ReactJS), C/C++, and Java. Evaluate and refine AI-generated code, ensuring it adheres to industry standards for efficiency, scalability, and reliability. Collaborate with cross-functional teams to enhance AI-driven coding solutions, ensuring they meet enterprise-level quality and performance benchmarks. Key Qualifications Advanced proficiency in English, both verbal and written. Strong experience in either Python or JavaScript, with a solid foundation in software development practices. Please note that for those with experience in only JavaScript, experience with ReactJS is preferred but not required. Knowledge of other languages is a strong plus.Strong grasp of computer science fundamentals like data structures, algorithms, and debugging skills. A minimum of 2 years of hands-on industry experience with a proven track record in software development and/or public proof of work (such as on GitHub). Extensive experience with a wide array of tools and systems such as Databases, SQL, Kubernetes, Spark, Kafka, gRPC, and AWS. Preferred Qualifications The ideal candidate for this role is adaptable, possesses strong logical reasoning skills, is detail-oriented, and thrives in a fast-paced work environment. Evidence of meaningful contributions to open source projects or high reputation on platforms like Stack Overflow or evidence of strong performance in programming competitions. Enthusiasm to collaboratively build the best truth-seeking AI out there! Additional Requirements Demonstrates a strong capacity to quickly adapt by learning new skills and unlearning outdated ones, thriving in dynamic and changing environments. For those who will be working from a personal device, please note your computer must be capable of running Windows 10 or macOS BigSur 11.0 or later. Location, Hourly, and Other Expectations This position is fully remote. We are unable to provide visa sponsorship. If you are based in the US, please note we are unable to hire in the states of Wyoming and Illinois at this time. You must own and have reliable access to a smartphone. Please indicate your interest in either full-time, part-time, or either in the application. Note that: Full-Time (40 hours per week): Full-time schedules are 9-5:30pm in your local time zone. The first week will be 9-5:30pm PST for onboarding. Part-Time (20-29 hours per week): While hours are flexible around your schedule, you must be committed to working at least 20 hours per week (with at least 10 of these hours worked on weekdays) and no more than 29 hours per week. Compensation and Benefits The pay for this role may range from $55/hour - $65/hour. Your actual pay will be determined on a case-by-case basis and may vary based on the following considerations: job-related knowledge and skills, education, and experience. For full-time roles, specific benefits vary by country, depending on your country of residence you may have access to medical benefits. We do not offer benefits for part-time roles. xAI is an equal opportunity employer and does not unlawfully discriminate based on race, color, religion, ethnicity, ancestry, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, age, disability, medical conditions, genetic information, marital status, military or veteran status, or any other applicable legally protected characteristics. Qualified applicants with arrest or conviction records will be considered for employment in accordance with all applicable federal, state, and local laws, including the San Francisco Fair Chance Ordinance, Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For Los Angeles County (unincorporated) Candidates: xAI reasonably believes that criminal history may have a direct, adverse and negative relationship on the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: Access to information technology systems and confidential information, including proprietary and trade secret information, and/or user data; Interacting with internal and/or external clients and colleagues; and Exercising sound judgment. California Consumer Privacy Act (CCPA) Notice
Posted 4 days ago
10.0 years
0 Lacs
Telangana, India
On-site
VP Product Engineering – India Products leader Our company: At Teradata, we believe that people thrive when empowered with better information. That’s why we built the most complete cloud analytics and data platform for AI. By delivering harmonized data, trusted AI, and faster innovation, we uplift and empower our customers and our customers’ customers to make better, more confident decisions. The world’s top companies across every major industry trust Teradata to improve business performance, enrich customer experiences, and fully integrate data across the enterprise. What You Will Do: As the Vice President of Engineering, you will lead Teradata’s India-based software development organization for our AI Platform Group (Analytics, AI, Apps, Agents and UX) through its next phase of transformation. You will own the execution of our product roadmap across key technologies including Vector Store, Agent platform, Apps, user experience, and the enablement of AI/ML-driven use-cases at scale. Success in this role means building a world-class engineering culture, attracting and retaining top technical talent, accelerating hybrid cloud-first product delivery, and fostering innovation that drives measurable value for our customers. You will work at the pace of a startup and build as well as lead a team of 150+ engineers focused on the key objective of helping customers achieve outcomes with Data and AI. You will partner closely with Product Management, Product Operations, Security, Customer Success, and Executive Leadership. Who You Will Work With: You will lead a high-impact regional team of up to 500 people, comprised of software development, cloud engineering, DevOps, engineering operations, and architecture teams. You will partner closely with Product Management, Product Operations, Security, Customer Success, and Executive Leadership. This position will work closely with the various other regional and global leaders. What Makes You a Qualified Candidate: 10+ years of proven senior leadership in product development, engineering, or technology leadership within enterprise software product companies. 3+ years in a VP Product or equivalent role managing large-scale, distributed technical teams in a growth market. Proven experience leading the development of agentic AI and driving AI at scale in a hybrid cloud environment. Demonstrated success in leading teams through transformation with measurable impacts. Demonstrated success implementing and scaling Agile and DevSecOps methodologies. Progressive leadership experience building and leading large technical teams managing multiple high visibility priorities. What You Will Bring: Understanding of cloud platforms, data harmonization, and data analytics for AI. Deep understanding of cloud platforms, Kubernetes, containerization, and microservices-based architectures. Experience delivering SaaS-based data and analytics platforms, ideally involving hybrid and multi-cloud deployments. Knowledge of modern data stack technologies such as Apache Spark, Kafka, Presto/Trino, Delta Lake, or Iceberg. Familiarity with AI/ML infrastructure, model lifecycle management, and integration with data platforms. Strong background in enterprise security, data governance, performance engineering, and API-first design. Experience modernizing legacy architectures into modern, service-based systems using CI/CD and automation. Passion for open-source collaboration and building extensible, developer-friendly ecosystems. Track record of building high-performing engineering cultures and inclusive leadership teams. Ability to inspire, influence, and collaborate with internal and external stakeholders at the highest levels. Master’s degree in engineering, Computer Science, or MBA preferred Why We Think You’ll Love Teradata We prioritize a people-first culture because we know our people are at the very heart of our success. We embrace a flexible work model because we trust our people to make decisions about how, when, and where they work. We focus on well-being because we care about our people and their ability to thrive both personally and professionally. We are an anti-racist company because our dedication to Diversity, Equity, and Inclusion is more than a statement. It is a deep commitment to doing the work to foster an equitable environment that celebrates people for all of who they are. Teradata invites all identities and backgrounds in the workplace. We work with deliberation and intent to ensure we are cultivating collaboration and inclusivity across our global organization. We are proud to be an equal opportunity and affirmative action employer. We do not discriminate based upon race, color, ancestry, religion, creed, sex (including pregnancy, childbirth, breastfeeding, or related conditions), national origin, sexual orientation, age, citizenship, marital status, disability, medical condition, genetic information, gender identity or expression, military and veteran status, or any other legally protected status.
Posted 4 days ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
JOB DESCRIPTION Roles & responsibilities Here are some of the key responsibilities of a Software Engineer (A2) : Design and develop AI-driven data ingestion frameworks and real-time processing solutions that enhance data analysis and machine learning capabilities across the full technology stack. Deploy, maintain, and support application codes and machine learning models in production environments, ensuring seamless integration with front-end and back-end systems. Create and enhance AI solutions that facilitate seamless integration and flow of data across the data ecosystem, enabling advanced analytics and insights for end users. Conduct business analysis to gather requirements and develop ETL processes, scripts, and machine learning pipelines that meet technical specifications and business needs, utilizing both server-side and client-side technologies. Develop real-time data ingestion and stream-analytic solutions utilizing technologies such as Kafka, Apache Spark (SQL, Scala, Java), Python, and cloud platforms to support AI applications. Utilize multiple programming languages and tools, including Python, Spark, Hive, Presto, Java, and JavaScript frameworks (e.g., React, Angular) to build prototypes for AI models and evaluate their effectiveness and feasibility. Develop application systems that adhere to standard software development methodologies, ensuring robust design, programming, backup, and recovery processes to deliver high-performance AI solutions across the full stack. Provide system support as part of a team rotation, collaborating with other engineers to resolve issues and enhance system performance, including both front-end and back-end components. Operationalize open-source AI and data-analytic tools for enterprise-scale applications, ensuring they align with organizational needs and user interfaces. Ensure compliance with data governance policies by implementing and validating data lineage, quality checks, and data classification in AI projects. Understand and follow the company’s software development lifecycle to effectively develop, deploy, and deliver AI solutions. Design and develop AI frameworks leveraging open-source tools and advanced data processing frameworks, integrating them with user-facing applications. Lead the design and execution of complex AI projects, ensuring alignment with ethical guidelines and principles under the guidance of senior team members. Mandatory technical & functional skills Technical Skills: Strong proficiency in on Python, Java, C++ and , as well as familiarity with machine learning frameworks (e.g., TensorFlow, PyTorch). In depth knowledge on ML, Deep Learning and NLP algorithms. Strong programming skills hands on experience in building backend services with frameworks like FastAPI, Flask, Django, etc. Full-Stack Development: Proficiency in front-end and back-end technologies, including JavaScript frameworks (e.g., React, Angular), to build and integrate user interfaces with AI models and data solutions. Data Integration: Develop and maintain data pipelines for AI applications, ensuring efficient data extraction, transformation, and loading (ETL) processes Strong oral and written communication skills with the ability to communicate technical and non-technical concepts to peers and stakeholders Preferred Technical & Functional Skills Big Data Processing: Utilize big data technologies such as Azure Databricks and Apache Spark to handle, analyze, and process large datasets for machine learning and AI applications. Develop real-time data ingestion and stream-analytic solutions leveraging technologies such as Kafka, Apache Spark (SQL, Scala, Java), Python and Hadoop Platform and any Cloud Data Platform. Certifications: Relevant certifications such as Microsoft Certified: Azure Data Engineer Associate, Azure AI Engineer or any other cloud certification are a plus. Key behavioral attributes/requirements Collaborative Learning: Open to learning and working with others. Project Responsibility: Able to manage project components beyond individual tasks. Business Acumen: Strive to understand business objectives driving data needs. RESPONSIBILITIES Roles & responsibilities Here are some of the key responsibilities of a Software Engineer (A2) : Design and develop AI-driven data ingestion frameworks and real-time processing solutions that enhance data analysis and machine learning capabilities across the full technology stack. Deploy, maintain, and support application codes and machine learning models in production environments, ensuring seamless integration with front-end and back-end systems. Create and enhance AI solutions that facilitate seamless integration and flow of data across the data ecosystem, enabling advanced analytics and insights for end users. Conduct business analysis to gather requirements and develop ETL processes, scripts, and machine learning pipelines that meet technical specifications and business needs, utilizing both server-side and client-side technologies. Develop real-time data ingestion and stream-analytic solutions utilizing technologies such as Kafka, Apache Spark (SQL, Scala, Java), Python, and cloud platforms to support AI applications. Utilize multiple programming languages and tools, including Python, Spark, Hive, Presto, Java, and JavaScript frameworks (e.g., React, Angular) to build prototypes for AI models and evaluate their effectiveness and feasibility. Develop application systems that adhere to standard software development methodologies, ensuring robust design, programming, backup, and recovery processes to deliver high-performance AI solutions across the full stack. Provide system support as part of a team rotation, collaborating with other engineers to resolve issues and enhance system performance, including both front-end and back-end components. Operationalize open-source AI and data-analytic tools for enterprise-scale applications, ensuring they align with organizational needs and user interfaces. Ensure compliance with data governance policies by implementing and validating data lineage, quality checks, and data classification in AI projects. Understand and follow the company’s software development lifecycle to effectively develop, deploy, and deliver AI solutions. Design and develop AI frameworks leveraging open-source tools and advanced data processing frameworks, integrating them with user-facing applications. Lead the design and execution of complex AI projects, ensuring alignment with ethical guidelines and principles under the guidance of senior team members. Mandatory technical & functional skills Technical Skills: Strong proficiency in on Python, Java, C++ and , as well as familiarity with machine learning frameworks (e.g., TensorFlow, PyTorch). In depth knowledge on ML, Deep Learning and NLP algorithms. Strong programming skills hands on experience in building backend services with frameworks like FastAPI, Flask, Django, etc. Full-Stack Development: Proficiency in front-end and back-end technologies, including JavaScript frameworks (e.g., React, Angular), to build and integrate user interfaces with AI models and data solutions. Data Integration: Develop and maintain data pipelines for AI applications, ensuring efficient data extraction, transformation, and loading (ETL) processes Strong oral and written communication skills with the ability to communicate technical and non-technical concepts to peers and stakeholders Preferred Technical & Functional Skills Big Data Processing: Utilize big data technologies such as Azure Databricks and Apache Spark to handle, analyze, and process large datasets for machine learning and AI applications. Develop real-time data ingestion and stream-analytic solutions leveraging technologies such as Kafka, Apache Spark (SQL, Scala, Java), Python and Hadoop Platform and any Cloud Data Platform. Certifications: Relevant certifications such as Microsoft Certified: Azure Data Engineer Associate, Azure AI Engineer or any other cloud certification are a plus. Key behavioral attributes/requirements Collaborative Learning: Open to learning and working with others. Project Responsibility: Able to manage project components beyond individual tasks. Business Acumen: Strive to understand business objectives driving data needs. #KGS QUALIFICATIONS This role is for you if you have the below Educational Qualifications Bachelor’s / Master’s degree in Computer Science Work Experience 2 to 4 years of Software Engineering experience
Posted 4 days ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
VP Product Engineering – India Products leader Our company: At Teradata, we believe that people thrive when empowered with better information. That’s why we built the most complete cloud analytics and data platform for AI. By delivering harmonized data, trusted AI, and faster innovation, we uplift and empower our customers and our customers’ customers to make better, more confident decisions. The world’s top companies across every major industry trust Teradata to improve business performance, enrich customer experiences, and fully integrate data across the enterprise. What You Will Do: As the Vice President of Engineering, you will lead Teradata’s India-based software development organization for our AI Platform Group (Analytics, AI, Apps, Agents and UX) through its next phase of transformation. You will own the execution of our product roadmap across key technologies including Vector Store, Agent platform, Apps, user experience, and the enablement of AI/ML-driven use-cases at scale. Success in this role means building a world-class engineering culture, attracting and retaining top technical talent, accelerating hybrid cloud-first product delivery, and fostering innovation that drives measurable value for our customers. You will work at the pace of a startup and build as well as lead a team of 150+ engineers focused on the key objective of helping customers achieve outcomes with Data and AI. You will partner closely with Product Management, Product Operations, Security, Customer Success, and Executive Leadership. Who You Will Work With: You will lead a high-impact regional team of up to 500 people, comprised of software development, cloud engineering, DevOps, engineering operations, and architecture teams. You will partner closely with Product Management, Product Operations, Security, Customer Success, and Executive Leadership. This position will work closely with the various other regional and global leaders. What Makes You a Qualified Candidate: 10+ years of proven senior leadership in product development, engineering, or technology leadership within enterprise software product companies. 3+ years in a VP Product or equivalent role managing large-scale, distributed technical teams in a growth market. Proven experience leading the development of agentic AI and driving AI at scale in a hybrid cloud environment. Demonstrated success in leading teams through transformation with measurable impacts. Demonstrated success implementing and scaling Agile and DevSecOps methodologies. Progressive leadership experience building and leading large technical teams managing multiple high visibility priorities. What You Will Bring: Understanding of cloud platforms, data harmonization, and data analytics for AI. Deep understanding of cloud platforms, Kubernetes, containerization, and microservices-based architectures. Experience delivering SaaS-based data and analytics platforms, ideally involving hybrid and multi-cloud deployments. Knowledge of modern data stack technologies such as Apache Spark, Kafka, Presto/Trino, Delta Lake, or Iceberg. Familiarity with AI/ML infrastructure, model lifecycle management, and integration with data platforms. Strong background in enterprise security, data governance, performance engineering, and API-first design. Experience modernizing legacy architectures into modern, service-based systems using CI/CD and automation. Passion for open-source collaboration and building extensible, developer-friendly ecosystems. Track record of building high-performing engineering cultures and inclusive leadership teams. Ability to inspire, influence, and collaborate with internal and external stakeholders at the highest levels. Master’s degree in engineering, Computer Science, or MBA preferred Why We Think You’ll Love Teradata We prioritize a people-first culture because we know our people are at the very heart of our success. We embrace a flexible work model because we trust our people to make decisions about how, when, and where they work. We focus on well-being because we care about our people and their ability to thrive both personally and professionally. We are an anti-racist company because our dedication to Diversity, Equity, and Inclusion is more than a statement. It is a deep commitment to doing the work to foster an equitable environment that celebrates people for all of who they are. Teradata invites all identities and backgrounds in the workplace. We work with deliberation and intent to ensure we are cultivating collaboration and inclusivity across our global organization. We are proud to be an equal opportunity and affirmative action employer. We do not discriminate based upon race, color, ancestry, religion, creed, sex (including pregnancy, childbirth, breastfeeding, or related conditions), national origin, sexual orientation, age, citizenship, marital status, disability, medical condition, genetic information, gender identity or expression, military and veteran status, or any other legally protected status.
Posted 4 days ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Location: Pune (Work From Office) Experience Required: 4–8 Years Notice Period: Immediate to 15 Days Only Employment Type: Contract About The Role We’re hiring a Principal Scala Developer with strong expertise in Akka or LAGOM , and practical experience building real-time, distributed systems . This role demands a deep understanding of microservices architecture , containerized environments, and tools like Apache Pulsar , ElasticSearch , and Kubernetes . You'll work on building scalable backend systems that power data-intensive applications, collaborating with a team that values high performance, innovation, and clean code. Key Responsibilities Develop and maintain scalable microservices using Scala, Akka, and/or LAGOM. Build containerized applications using Docker and orchestrate them with Kubernetes (K8s). Manage real-time messaging with Apache Pulsar. Integrate with databases using the Slick Connector and PostgreSQL. Enable search and analytics features using ElasticSearch. Work with GitLab CI/CD pipelines to streamline deployment workflows. Collaborate across teams and write clean, well-structured, and maintainable code. Must-Have Skills 4–8 years of development experience with Scala. Expertise in Akka or LAGOM frameworks. Strong knowledge of microservice architecture and distributed systems. Proficiency with Docker and Kubernetes. Hands-on experience with Apache Pulsar, PostgreSQL, and ElasticSearch. Familiarity with GitLab, CI/CD pipelines, and deployment processes. Strong software engineering and documentation skills. Good to Have Experience with Kafka or RabbitMQ. Exposure to monitoring and logging tools (Prometheus, Grafana, ELK stack). Basic understanding of frontend frameworks like React or Angular. Familiarity with cloud platforms (AWS, GCP, or Azure). Prior experience in domains such as finance, logistics, or real-time data processing. Educational Background Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field. Why Join Us Work on real-world, high-performance systems with modern architecture. Be part of a collaborative, growth-oriented environment. Access to cutting-edge tools, infrastructure, and learning resources. Opportunities for long-term growth, upskilling, and mentorship. Enjoy a healthy work-life balance with onsite amenities and team event Skills: akka,postgresql,scala,elasticsearch,microservices architecture,lagom,apache pulsar,distributed systems,docker,kubernetes,ci/cd,gitlab
Posted 4 days ago
2.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are looking for intermediate full-stack software engineers who are passionate about solving business problems through innovation and engineering practices. This role will be responsible for implementation of new or revised application systems and programs in coordination with the Technology team, paired with other developers as appropriate while working as a strong contributor on an agile team. From a technical standpoint, the Software Engineer has full-stack coding and implementation responsibilities and adheres to best practice principles, including modern cloud-based software development, agile and scrum, code quality, and tool usage. Qualifications: 2-10 years of relevant experience in the Financial Service industry Knowledge and development experience in backend technologies: Java, Spring Boot, Kafka, Microservice Architecture, Multithreading Knowledge and development experience in database technologies: Oracle, MongoDB Knowledge and development experience in frontend technologies: Angular, TypeScript/JavaScript, HTML, CSS3/SASS Experience in containerization technologies: OpenShift, Kubernetes, Docker, AWS, etc. Familiarity with DevOps concepts, tools and continuous delivery pipelines: Git, Bitbucket, Jenkins, uDeploy, Tekton, Harness, Jira, etc. Intermediate level experience in Applications Development role Consistently demonstrates clear and concise written and verbal communication Demonstrated problem-solving and decision-making skills Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements Education: Bachelor’s degree/University degree or equivalent experience ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Applications Development ------------------------------------------------------ Time Type: ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 4 days ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Location: Pune (Work From Office) Experience Required: 4–8 Years Notice Period: Immediate to 15 Days Only Employment Type: Contract About The Role We’re hiring a Scala Developer - Event Streaming & Microservices with strong expertise in Akka or LAGOM , and practical experience building real-time, distributed systems . This role demands a deep understanding of microservices architecture , containerized environments, and tools like Apache Pulsar , ElasticSearch , and Kubernetes . You'll work on building scalable backend systems that power data-intensive applications, collaborating with a team that values high performance, innovation, and clean code. Key Responsibilities Develop and maintain scalable microservices using Scala, Akka, and/or LAGOM. Build containerized applications using Docker and orchestrate them with Kubernetes (K8s). Manage real-time messaging with Apache Pulsar. Integrate with databases using the Slick Connector and PostgreSQL. Enable search and analytics features using ElasticSearch. Work with GitLab CI/CD pipelines to streamline deployment workflows. Collaborate across teams and write clean, well-structured, and maintainable code. Must-Have Skills 4–8 years of development experience with Scala. Expertise in Akka or LAGOM frameworks. Strong knowledge of microservice architecture and distributed systems. Proficiency with Docker and Kubernetes. Hands-on experience with Apache Pulsar, PostgreSQL, and ElasticSearch. Familiarity with GitLab, CI/CD pipelines, and deployment processes. Strong software engineering and documentation skills. Good to Have Experience with Kafka or RabbitMQ. Exposure to monitoring and logging tools (Prometheus, Grafana, ELK stack). Basic understanding of frontend frameworks like React or Angular. Familiarity with cloud platforms (AWS, GCP, or Azure). Prior experience in domains such as finance, logistics, or real-time data processing. Educational Background Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field. Why Join Us Work on real-world, high-performance systems with modern architecture. Be part of a collaborative, growth-oriented environment. Access to cutting-edge tools, infrastructure, and learning resources. Opportunities for long-term growth, upskilling, and mentorship. Enjoy a healthy work-life balance with onsite amenities and team events. Skills: akka,gitlab,distributed systems,lagom,ci/cd,microservices,docker,elasticsearch,microservices architecture,apache pulsar,scala,kubernetes,postgresql
Posted 4 days ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Location: Pune (Work From Office) Experience Required: 4–8 Years Notice Period: Immediate to 15 Days Only Employment Type: Contract About The Role We’re hiring a Scala & Relative Systems Engineer with strong expertise in Akka or LAGOM , and practical experience building real-time, distributed systems . This role demands a deep understanding of microservices architecture , containerized environments, and tools like Apache Pulsar , ElasticSearch , and Kubernetes . You'll work on building scalable backend systems that power data-intensive applications, collaborating with a team that values high performance, innovation, and clean code. Key Responsibilities Develop and maintain scalable microservices using Scala, Akka, and/or LAGOM. Build containerized applications using Docker and orchestrate them with Kubernetes (K8s). Manage real-time messaging with Apache Pulsar. Integrate with databases using the Slick Connector and PostgreSQL. Enable search and analytics features using ElasticSearch. Work with GitLab CI/CD pipelines to streamline deployment workflows. Collaborate across teams and write clean, well-structured, and maintainable code. Must-Have Skills 4–8 years of development experience with Scala. Expertise in Akka or LAGOM frameworks. Strong knowledge of microservice architecture and distributed systems. Proficiency with Docker and Kubernetes. Hands-on experience with Apache Pulsar, PostgreSQL, and ElasticSearch. Familiarity with GitLab, CI/CD pipelines, and deployment processes. Strong software engineering and documentation skills. Good to Have Experience with Kafka or RabbitMQ. Exposure to monitoring and logging tools (Prometheus, Grafana, ELK stack). Basic understanding of frontend frameworks like React or Angular. Familiarity with cloud platforms (AWS, GCP, or Azure). Prior experience in domains such as finance, logistics, or real-time data processing. Educational Background Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field. Why Join Us Work on real-world, high-performance systems with modern architecture. Be part of a collaborative, growth-oriented environment. Access to cutting-edge tools, infrastructure, and learning resources. Opportunities for long-term growth, upskilling, and mentorship. Enjoy a healthy work-life balance with onsite amenities and team events. Skills: akka,gitlab,distributed systems,lagom,docker,microservices architecture,elasticsearch,kubernetes,apache,scala,ci/cd pipelines,apache pulsar,postgresql
Posted 4 days ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Company Description 👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale — across all devices and digital mediums, and our people exist everywhere in the world (18000+ experts across 38 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in! Job Description REQUIREMENTS: Total experience 10+ years Extensive experience in back-end development utilizing Java 8 or higher, Spring Framework (Core/Boot/MVC), Hibernate/JPA, and Microservices Architecture. Experience with messaging systems like Kafka. Hands-on experience with REST APIs, Caching system (e.g Redis) etc. Proficiency in Service-Oriented Architecture (SOA) and Web Services (Apache CXF, JAX-WS, JAX-RS, SOAP, REST). Hands-on experience with multithreading, and cloud development. Strong working experience in Data Structures and Algorithms, Unit Testing, and Object-Oriented Programming (OOP) principles. Hands-on experience with relational databases such as SQL Server, Oracle, MySQL, and PostgreSQL. Experience with DevOps tools and technologies such as Ansible, Docker, Kubernetes, Puppet, Jenkins, and Chef. Proficiency in build automation tools like Maven, Ant, and Gradle. Hands on experience on cloud technologies such as AWS/ Azure. Strong understanding of UML and design patterns. Ability to simplify solutions, optimize processes, and efficiently resolve escalated issues. Strong problem-solving skills and a passion for continuous improvement. Excellent communication skills and the ability to collaborate effectively with cross-functional teams. RESPONSIBILITIES: Writing and reviewing great quality code Understanding functional requirements thoroughly and analyzing the client’s needs in the context of the project Envisioning the overall solution for defined functional and non-functional requirements, and being able to define technologies, patterns and frameworks to realize it Determining and implementing design methodologies and tool sets Enabling application development by coordinating requirements, schedules, and activities. Being able to lead/support UAT and production roll outs Creating, understanding and validating WBS and estimated effort for given module/task, and being able to justify it Addressing issues promptly, responding positively to setbacks and challenges with a mindset of continuous improvement Giving constructive feedback to the team members and setting clear expectations. Helping the team in troubleshooting and resolving of complex bugs Coming up with solutions to any issue that is raised during code/design review and being able to justify the decision taken Carrying out POCs to make sure that suggested design/technologies meet the requirements Qualifications Bachelor’s or master’s degree in computer science, Information Technology, or a related field.
Posted 4 days ago
3.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
hackajob is collaborating with J.P. Morgan to connect them with exceptional tech professionals for this role. We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within the Asset & Wealth Management, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives. Job responsibilities Executes software solutions, design, development, and technical troubleshooting with ability to think beyond routine or conventional approaches to build solutions or break down technical problems Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture Contributes to software engineering communities of practice and events that explore new and emerging technologies Adds to team culture of diversity, opportunity, inclusion, and respect Required Qualifications, Capabilities, And Skills Formal training or certification on software engineering concepts and 3+ years applied experience. Hands-on practical experience in system design, application development, testing, and operational stability. Proficiency in Java/J2EE and REST APIs, Python, Web Services and experience in building event-driven Micro Services and Kafka streaming. Experience in RDBMS and NOSQL database. Working proficiency in developmental toolset like GIT/Bitbucket , Jira, maven. Experience with AWS services. Experience in Spring Framework Services in public cloud infrastructure. Proficiency in automation and continuous delivery methods. Proficient in all aspects of the Software Development Life Cycle Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages Overall knowledge of the Software Development Life Cycle. Solid understanding of agile methodologies such as CI/CD, Application Resiliency, and Security Demonstrated knowledge of software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.) Proficiency in software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.). In-depth knowledge financial services industry and their IT systems Preferred Qualifications, Capabilities, And Skills AWS certification is preferred. Experience on cloud engineering including Pivotal Cloud Foundry, AWS. Experience in PERF testing and turning as well as shift left practices. DDD (domain driven design). Experience with MongoDB
Posted 4 days ago
6.0 years
0 Lacs
Kolkata, West Bengal, India
Remote
Experience : 6.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Netskope) What do you need for this opportunity? Must have skills required: Airflow, LLMs, MLOps, Generative AI, Python Netskope is Looking for: About The Role Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault tolerant solutions to their ever growing data needs. This is a hands-on, impactful role that will help lead development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments. What's In It For You You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics Your contributions will have a major impact on our global customer-base and across the industry through our market-leading products You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills. What You Will Be Doing Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security. Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments. Apply MLOps best practices to deploy and monitor machine learning models in production. Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP. Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications. Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats. Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards. Drive innovation by integrating the latest AI/ML techniques into security products and services. Mentor junior engineers and provide technical leadership across projects. Required Skills And Experience AI/ML Expertise Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection. Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn. Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow). Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases. Data Engineering Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing. Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink). Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery. Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems. Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake. Cloud and Security Knowledge Strong understanding of cloud platforms (AWS, Azure, GCP) and their services. Experience with network security concepts, extended detection and response, and threat modeling. Software Engineering Proficiency in Python, Java, or Scala for data and ML solution development. Expertise in scalable system design and performance optimization for high-throughput applications. Leadership and Collaboration Proven ability to lead cross-functional teams and mentor engineers. Strong communication skills to present complex technical concepts to stakeholders. Education BSCS Or Equivalent Required, MSCS Or Equivalent Strongly Preferred How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 4 days ago
6.0 years
0 Lacs
Jamshedpur, Jharkhand, India
Remote
Experience : 6.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Netskope) What do you need for this opportunity? Must have skills required: Airflow, LLMs, MLOps, Generative AI, Python Netskope is Looking for: About The Role Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault tolerant solutions to their ever growing data needs. This is a hands-on, impactful role that will help lead development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments. What's In It For You You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics Your contributions will have a major impact on our global customer-base and across the industry through our market-leading products You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills. What You Will Be Doing Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security. Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments. Apply MLOps best practices to deploy and monitor machine learning models in production. Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP. Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications. Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats. Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards. Drive innovation by integrating the latest AI/ML techniques into security products and services. Mentor junior engineers and provide technical leadership across projects. Required Skills And Experience AI/ML Expertise Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection. Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn. Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow). Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases. Data Engineering Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing. Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink). Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery. Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems. Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake. Cloud and Security Knowledge Strong understanding of cloud platforms (AWS, Azure, GCP) and their services. Experience with network security concepts, extended detection and response, and threat modeling. Software Engineering Proficiency in Python, Java, or Scala for data and ML solution development. Expertise in scalable system design and performance optimization for high-throughput applications. Leadership and Collaboration Proven ability to lead cross-functional teams and mentor engineers. Strong communication skills to present complex technical concepts to stakeholders. Education BSCS Or Equivalent Required, MSCS Or Equivalent Strongly Preferred How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 4 days ago
Upload Resume
Drag or click to upload
Your data is secure with us, protected by advanced encryption.
Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
We have sent an OTP to your contact. Please enter it below to verify.
Accenture
31300 Jobs | Dublin
Wipro
16502 Jobs | Bengaluru
EY
10539 Jobs | London
Accenture in India
10399 Jobs | Dublin 2
Uplers
8481 Jobs | Ahmedabad
Amazon
8475 Jobs | Seattle,WA
IBM
7957 Jobs | Armonk
Oracle
7438 Jobs | Redwood City
Muthoot FinCorp (MFL)
6169 Jobs | New Delhi
Capgemini
5811 Jobs | Paris,France