
1622 Clustering Jobs - Page 10

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Who are we?

Securin is a leading product-based company, backed by services, in the cybersecurity domain, helping hundreds of customers worldwide gain resilience against emerging threats. Our products are powered by accurate vulnerability intelligence, human expertise, and automation, enabling enterprises to make crucial security decisions to manage their expanding attack surfaces. Securin is built on a foundation of in-depth penetration testing and vulnerability research to help organizations continuously improve their security posture. Our team of intelligence experts is one of the best in the industry, and our comprehensive portfolio of tech-enabled solutions includes Attack Surface Management (ASM), Vulnerability Intelligence (VI), Penetration Testing, and Vulnerability Management. These solutions give our customers complete visibility of their attack surfaces, keep them informed of the latest security threats and trends, and help them proactively address risks.

What do we promise?

We are a highly effective tech-enabled cybersecurity solutions provider and promise continual security posture improvement, enhanced attack surface visibility, and proactive, prioritised remediation for every one of our client businesses.

What do we deliver?

Securin helps organizations identify and remediate the most dangerous exposures, vulnerabilities, and risks in their environment. We deliver predictive and definitive intelligence and facilitate proactive remediation to help organizations stay a step ahead of attackers. By utilising our cybersecurity solutions, our clients gain a proactive and holistic view of their security posture and protect their assets from even the most advanced and dynamic attacks. Securin has been recognized by national and international organizations for its role in accelerating innovation in offensive and proactive security. Our combination of domain expertise, cutting-edge technology, and advanced tech-enabled cybersecurity solutions has made Securin a leader in the industry.

Job Location: IIT Madras Research Park, A Block, Third Floor, 32, Tharamani, Chennai, Tamil Nadu 600113
Work Mode: Hybrid; work from the Chennai office 2 days a week
Compensation: Up to 30 LPA
Job Title: Senior Software Engineer (with Machine Learning experience)

Job Description:

We are seeking a skilled and motivated Python Engineer with 5+ years of professional experience, including at least 2 years of hands-on experience in Machine Learning (ML). The ideal candidate will possess strong Python development skills, a deep understanding of object-oriented programming (OOP), and practical experience with NoSQL databases, especially MongoDB. Familiarity with cloud platforms such as AWS, GCP, or Azure is also required. This role is perfect for a developer who is not only proficient in backend engineering but also enthusiastic about applying ML concepts in real-world applications. You'll work closely with cross-functional teams to develop and optimize scalable, reliable, and maintainable Python-based solutions.

Responsibilities:
- Design, develop, and maintain Python applications with a focus on performance and scalability.
- Design systems with non-linear time complexity and efficient space usage across compute and storage.
- Ensure stateless, idempotent request processing with no in-memory state (a minimal sketch follows this posting).
- Model schemas for future evolution, supporting increasing data volume and structural changes.
- Build and operate cloud-based SaaS applications with a focus on production reliability; design includes not only functional code but also integrated monitoring, alerting, and health checks to ensure observability and operational excellence in a multi-tenant environment.
- Apply object-oriented programming (OOP) principles to craft reusable, modular code.
- Develop, implement, and optimize machine learning models in production environments.
- Leverage NoSQL databases like MongoDB for efficient data storage and retrieval.
- Work with cloud platforms (AWS, GCP, Azure) for application deployment and data services.
- Write and maintain robust unit and integration tests using test-driven development (TDD) practices.
- Participate in the full software development lifecycle, from requirements gathering to deployment.
- Collaborate with cross-functional teams, participate in Agile ceremonies, and contribute to technical discussions.
- Engage in code reviews and mentor junior team members where appropriate.
- Stay updated on emerging trends in Python development, machine learning, and software engineering best practices.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of professional experience in Python development.
- At least 2 years of hands-on experience with Machine Learning (model development, evaluation, and deployment).
- Strong understanding of OOP principles and real-world software design.
- Experience working with NoSQL databases, particularly MongoDB.
- Familiarity with TDD practices and writing unit tests.
- Practical experience with cloud platforms (AWS, GCP, or Azure).
- Proficiency with version control systems such as Git.
- Excellent problem-solving and debugging skills.
- Strong communication and teamwork abilities.
- A proactive, self-motivated attitude with a passion for continuous learning.

Preferred Qualifications:
- Hands-on experience with AI concepts including LLMs, prompt engineering, or traditional AI.
- Strong grasp of supervised, unsupervised, and reinforcement learning, with practical experience in key ML algorithms (e.g., regression, SVMs, neural networks, clustering).
- Proficiency with ML frameworks such as scikit-learn, TensorFlow, PyTorch, and XGBoost.
- Solid foundation in math (linear algebra, calculus, probability, statistics) and understanding of optimization and loss functions.
- Experience with model serving using Flask, FastAPI, or TensorFlow Serving.
- Knowledge of Agile/Scrum methodologies and collaborative development workflows.

What We Offer:
- A collaborative and innovative team environment.
- Opportunities to work on AI/ML-powered products and projects.
- Ongoing learning and career development opportunities.
- A dynamic culture focused on growth, curiosity, and problem-solving.

If you're a Python developer with a strong foundation and a growing passion for machine learning, we'd love to hear from you!

Why should we connect?

We are a bunch of passionate cybersecurity professionals who are building a culture of security. Today, cybersecurity is no longer a luxury but a necessity, with a global market value of $150 billion. At Securin, we live by a people-first approach. We firmly believe that our employees should enjoy what they do. We offer a hybrid work environment with competitive, best-in-industry pay, along with an environment to learn, thrive, and grow. Our hybrid working model allows employees to work from the comfort of their homes or from the office if they choose.

For the right candidate, this will feel like your second home. If you are passionate about cybersecurity just as we are, we would love to connect and share ideas.
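The stateless, idempotent processing requirement above pairs naturally with MongoDB upserts. Below is a minimal, hypothetical sketch using pymongo; the collection, document fields, and key derivation are invented for illustration and are not from the posting:

```python
# Minimal sketch: idempotent, stateless request handling with MongoDB (pymongo).
# Assumes a local MongoDB instance; all names below are illustrative only.
import hashlib
import json

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
collection = client["scans"]["findings"]

def process_request(payload: dict) -> str:
    """Apply a request exactly once: a deterministic key makes replays no-ops."""
    # Derive a stable idempotency key from the request content.
    key = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    # upsert=True inserts on first sight and merely rewrites identical fields
    # on replay, so repeated delivery of the same request is safe.
    collection.update_one({"_id": key}, {"$set": payload}, upsert=True)
    return key

if __name__ == "__main__":
    k1 = process_request({"asset": "web-01", "cve": "CVE-2024-0001"})
    k2 = process_request({"asset": "web-01", "cve": "CVE-2024-0001"})  # replay
    assert k1 == k2  # same key, single document: the operation is idempotent
```

Because no state is held in process memory, any instance of the service can handle any request, which is what makes the design horizontally scalable.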

Posted 4 days ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Wissen Technology is Hiring for AI/ML Engineer

About Wissen Technology:

Wissen Technology is a globally recognized organization known for building solid technology teams, working with major financial institutions, and delivering high-quality solutions in IT services. With a strong presence in the financial industry, we provide cutting-edge solutions to address complex business challenges.

Role Overview:

We are looking for a Senior AI/ML Engineer with expertise in Generative AI (GenAI) integrations, APIs, and Machine Learning (ML) algorithms, with strong hands-on experience in Python and statistical and predictive modeling.

Experience: 6-10 years
Location: Bengaluru

Required Skills:
- 6+ years of experience in AI/ML, with a strong focus on GenAI integrations and APIs.
- Proficiency in Python, including libraries like TensorFlow, PyTorch, scikit-learn, and Pandas.
- Strong expertise in statistical modeling and ML algorithms (regression, classification, clustering, NLP, etc.).
- Hands-on experience with RESTful APIs and AI model deployment (a minimal serving sketch follows this listing).
- Excellent problem-solving skills and the ability to work in a fast-paced environment.

About Wissen:

The Wissen Group was founded in the year 2000. Wissen Technology, a part of the Wissen Group, was established in 2015. It is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world-class products. We offer an array of services including Core Business Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud Adoption, Mobility, Digital Adoption, Agile & DevOps, and Quality Assurance & Test Automation. Over the years, the Wissen Group has successfully delivered $1 billion worth of projects for more than 20 of the Fortune 500 companies. Wissen Technology provides exceptional value in mission-critical projects for its clients through thought leadership, ownership, and assured on-time deliveries that are always 'first time right'.

The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them with the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients. We have been certified as a Great Place to Work® company for two consecutive years (2020-2022) and voted a Top 20 AI/ML vendor by CIO Insider. Great Place to Work® Certification is recognized the world over by employees and employers alike and is considered the 'Gold Standard'. Wissen Technology has created a Great Place to Work by excelling in all dimensions: High-Trust, High-Performance Culture, Credibility, Respect, Fairness, Pride, and Camaraderie.
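Given the emphasis on RESTful APIs and AI model deployment, here is a minimal, hypothetical sketch of serving a scikit-learn model behind a FastAPI endpoint. The route, request schema, and toy model are assumptions for illustration, not Wissen's actual stack:

```python
# Minimal sketch: serving a scikit-learn model over a REST API with FastAPI.
# The model, route, and schema are illustrative assumptions.
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.linear_model import LogisticRegression

app = FastAPI()

# Toy model trained at import time; in practice you would load a versioned
# artifact (e.g. via joblib) produced by a separate training pipeline.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
model = LogisticRegression().fit(X, y)

class Features(BaseModel):
    value: float

@app.post("/predict")
def predict(features: Features) -> dict:
    # predict_proba returns class probabilities; take P(class == 1).
    proba = model.predict_proba([[features.value]])[0, 1]
    return {"probability": float(proba)}

# Run with: uvicorn serve:app --reload   (assuming this file is serve.py)
```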
Website: www.wissen.com
LinkedIn: https://www.linkedin.com/company/wissen-technology
Wissen Leadership: https://www.wissen.com/company/leadership-team/
Wissen Live: https://www.linkedin.com/company/wissen-technology/posts/feedView=All
Wissen Thought Leadership: https://www.wissen.com/articles/
Employee Speak:
https://www.ambitionbox.com/overview/wissen-technology-overview
https://www.glassdoor.com/Reviews/Wissen-Infotech-Reviews-E287365.htm
Great Place to Work:
https://www.wissen.com/blog/wissen-is-a-great-place-to-work-says-the-great-place-to-work-institute-india/
https://www.linkedin.com/posts/wissen-infotech_wissen-leadership-wissenites-activity-6935459546131763200-xF2k
About Wissen Interview Process:
https://www.wissen.com/blog/we-work-on-highly-complex-technology-projects-here-is-how-it-changes-whom-we-hire/
Latest in Wissen in CIO Insider:
https://www.cioinsiderindia.com/vendor/wissen-technology-setting-new-benchmarks-in-technology-consulting-cid-1064.html

Posted 5 days ago

Apply

6.0 years

10 - 20 Lacs

Hyderabad, Telangana, India

On-site


Senior Data Engineer (On-Site, India)

About The Opportunity:

A high-growth innovator in the Analytics & Enterprise Data Management sector, we architect and deliver cloud-native data platforms that power real-time reporting, AI/ML workloads, and intelligent decisioning for global retail, fintech, and manufacturing leaders. Our engineering teams transform raw, high-velocity data into trusted, analytics-ready assets that drive revenue acceleration and operational excellence.

Role & Responsibilities:
- Design, build, and optimise batch and streaming ETL/ELT pipelines on Apache Spark and Kafka, ensuring sub-minute latency and 99.9% uptime.
- Develop modular, test-driven Python code to ingest, cleanse, and enrich terabyte-scale datasets from relational, NoSQL, and API sources.
- Model data for analytics and AI, implementing star/snowflake schemas, partitioning, and clustering in BigQuery, Redshift, or Snowflake.
- Automate workflow orchestration with Apache Airflow, defining DAGs, dependency management, and robust alerting for SLA adherence (a minimal DAG sketch follows this listing).
- Collaborate with Data Scientists and BI teams to expose feature stores, curated marts, and self-service semantic layers.
- Enforce data-governance best practices (lineage, cataloguing, RBAC, and encryption) in compliance with GDPR and SOC 2 standards.

Skills & Qualifications:

Must-Have:
- 3–6 years of hands-on experience engineering large-scale data pipelines in production.
- Expertise in Python and advanced SQL for ETL, optimisation, and performance tuning.
- Proven experience with Spark (PySpark or Scala) and streaming technologies such as Kafka or Kinesis.
- Deep knowledge of relational modelling, data-warehousing concepts, and at least one cloud DWH (BigQuery, Redshift, or Snowflake).
- Solid command of CI/CD, Git workflows, and containerisation (Docker).

Preferred:
- Exposure to infrastructure-as-code (Terraform, CloudFormation) and Kubernetes.
- Experience integrating ML feature stores and monitoring data quality with Great Expectations or similar tools.
- Certification in AWS, GCP, or Azure data services.

Benefits & Culture Highlights:
- On-site, engineer-first environment with dedicated lab space and the latest Mac/Linux gear.
- Rapid career progression through technical mentorship, sponsored certifications, and conference budgets.
- Inclusive, innovation-driven culture that rewards outcome ownership and creative problem-solving.

Ready to architect next-gen data pipelines that power AI at scale? Apply now and join a mission-focused team turning data into competitive advantage.

Skills: airflow, docker, snowflake, apache spark, data engineering, redshift, sql, pyspark, python, data modeling, bigquery, ci/cd, apache airflow, data warehousing, spark, etl, kafka, git
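The orchestration bullet above can be made concrete with a small sketch. The following is a minimal, hypothetical Airflow DAG, assuming Airflow 2.x (2.4+, where the `schedule` argument replaces `schedule_interval`); task names, cadence, and the alerting callback are illustrative, not from the posting:

```python
# Minimal sketch: an Airflow 2.x DAG with explicit dependencies and a
# per-task failure callback for alerting. All names are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def notify_on_failure(context):
    # Hook point for alerting (Slack, PagerDuty, email, ...).
    print(f"Task {context['task_instance'].task_id} failed")

def extract():    # pull raw events from a source system
    print("extracting")

def transform():  # cleanse and enrich into analytics-ready form
    print("transforming")

def load():       # publish to the warehouse (BigQuery/Redshift/Snowflake)
    print("loading")

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",  # SLA-friendly cadence
    catchup=False,
    default_args={"on_failure_callback": notify_on_failure},
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Dependency management: extract -> transform -> load
    t_extract >> t_transform >> t_load
```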

Posted 5 days ago

Apply

6.0 years

10 - 20 Lacs

Kochi, Kerala, India

On-site


Senior Data Engineer (On-Site, India)

About The Opportunity:

A high-growth innovator in the Analytics & Enterprise Data Management sector, we architect and deliver cloud-native data platforms that power real-time reporting, AI/ML workloads, and intelligent decisioning for global retail, fintech, and manufacturing leaders. Our engineering teams transform raw, high-velocity data into trusted, analytics-ready assets that drive revenue acceleration and operational excellence.

Role & Responsibilities:
- Design, build, and optimise batch and streaming ETL/ELT pipelines on Apache Spark and Kafka, ensuring sub-minute latency and 99.9% uptime (a streaming sketch follows this listing).
- Develop modular, test-driven Python code to ingest, cleanse, and enrich terabyte-scale datasets from relational, NoSQL, and API sources.
- Model data for analytics and AI, implementing star/snowflake schemas, partitioning, and clustering in BigQuery, Redshift, or Snowflake.
- Automate workflow orchestration with Apache Airflow, defining DAGs, dependency management, and robust alerting for SLA adherence.
- Collaborate with Data Scientists and BI teams to expose feature stores, curated marts, and self-service semantic layers.
- Enforce data-governance best practices (lineage, cataloguing, RBAC, and encryption) in compliance with GDPR and SOC 2 standards.

Skills & Qualifications:

Must-Have:
- 3–6 years of hands-on experience engineering large-scale data pipelines in production.
- Expertise in Python and advanced SQL for ETL, optimisation, and performance tuning.
- Proven experience with Spark (PySpark or Scala) and streaming technologies such as Kafka or Kinesis.
- Deep knowledge of relational modelling, data-warehousing concepts, and at least one cloud DWH (BigQuery, Redshift, or Snowflake).
- Solid command of CI/CD, Git workflows, and containerisation (Docker).

Preferred:
- Exposure to infrastructure-as-code (Terraform, CloudFormation) and Kubernetes.
- Experience integrating ML feature stores and monitoring data quality with Great Expectations or similar tools.
- Certification in AWS, GCP, or Azure data services.

Benefits & Culture Highlights:
- On-site, engineer-first environment with dedicated lab space and the latest Mac/Linux gear.
- Rapid career progression through technical mentorship, sponsored certifications, and conference budgets.
- Inclusive, innovation-driven culture that rewards outcome ownership and creative problem-solving.

Ready to architect next-gen data pipelines that power AI at scale? Apply now and join a mission-focused team turning data into competitive advantage.

Skills: airflow, docker, snowflake, apache spark, data engineering, redshift, sql, pyspark, python, data modeling, bigquery, ci/cd, apache airflow, data warehousing, spark, etl, kafka, git
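For the streaming side of the role (Spark plus Kafka with sub-minute latency), here is a minimal, hypothetical PySpark Structured Streaming sketch. The topic, broker address, and output paths are assumptions, and the job requires the spark-sql-kafka connector package on the Spark classpath:

```python
# Minimal sketch: PySpark Structured Streaming from Kafka with a sub-minute
# micro-batch trigger. Topic/broker/path names are illustrative assumptions.
# Requires the spark-sql-kafka-0-10 connector package at submit time.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (
    SparkSession.builder
    .appName("kafka-ingest-sketch")
    .getOrCreate()
)

# Read a Kafka topic as an unbounded streaming DataFrame.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
)

# Kafka delivers key/value as binary; cast the payload to a string here
# (a real pipeline would parse JSON/Avro against a schema).
parsed = events.select(col("value").cast("string").alias("payload"))

# 30-second micro-batches keep end-to-end latency under a minute.
query = (
    parsed.writeStream
    .format("parquet")
    .option("path", "/tmp/orders")
    .option("checkpointLocation", "/tmp/orders_ckpt")  # exactly-once bookkeeping
    .trigger(processingTime="30 seconds")
    .start()
)
query.awaitTermination()
```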

Posted 5 days ago

Apply

6.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Wissen Technology is Hiring for AI/ML Engineer

About Wissen Technology:

Wissen Technology is a globally recognized organization known for building solid technology teams, working with major financial institutions, and delivering high-quality solutions in IT services. With a strong presence in the financial industry, we provide cutting-edge solutions to address complex business challenges.

Role Overview:

We are looking for a Senior AI/ML Engineer with expertise in Generative AI (GenAI) integrations, APIs, and Machine Learning (ML) algorithms, with strong hands-on experience in Python and statistical and predictive modeling.

Experience: 6-10 years
Location: Mumbai

Required Skills:
- 6+ years of experience in AI/ML, with a strong focus on GenAI integrations and APIs.
- Proficiency in Python, including libraries like TensorFlow, PyTorch, scikit-learn, and Pandas.
- Strong expertise in statistical modeling and ML algorithms (regression, classification, clustering, NLP, etc.).
- Hands-on experience with RESTful APIs and AI model deployment.
- Excellent problem-solving skills and the ability to work in a fast-paced environment.

About Wissen:

The Wissen Group was founded in the year 2000. Wissen Technology, a part of the Wissen Group, was established in 2015. It is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world-class products. We offer an array of services including Core Business Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud Adoption, Mobility, Digital Adoption, Agile & DevOps, and Quality Assurance & Test Automation. Over the years, the Wissen Group has successfully delivered $1 billion worth of projects for more than 20 of the Fortune 500 companies. Wissen Technology provides exceptional value in mission-critical projects for its clients through thought leadership, ownership, and assured on-time deliveries that are always 'first time right'.

The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them with the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients. We have been certified as a Great Place to Work® company for two consecutive years (2020-2022) and voted a Top 20 AI/ML vendor by CIO Insider. Great Place to Work® Certification is recognized the world over by employees and employers alike and is considered the 'Gold Standard'. Wissen Technology has created a Great Place to Work by excelling in all dimensions: High-Trust, High-Performance Culture, Credibility, Respect, Fairness, Pride, and Camaraderie.
Website: www.wissen.com
LinkedIn: https://www.linkedin.com/company/wissen-technology
Wissen Leadership: https://www.wissen.com/company/leadership-team/
Wissen Live: https://www.linkedin.com/company/wissen-technology/posts/feedView=All
Wissen Thought Leadership: https://www.wissen.com/articles/
Employee Speak:
https://www.ambitionbox.com/overview/wissen-technology-overview
https://www.glassdoor.com/Reviews/Wissen-Infotech-Reviews-E287365.htm
Great Place to Work:
https://www.wissen.com/blog/wissen-is-a-great-place-to-work-says-the-great-place-to-work-institute-india/
https://www.linkedin.com/posts/wissen-infotech_wissen-leadership-wissenites-activity-6935459546131763200-xF2k
About Wissen Interview Process:
https://www.wissen.com/blog/we-work-on-highly-complex-technology-projects-here-is-how-it-changes-whom-we-hire/
Latest in Wissen in CIO Insider:
https://www.cioinsiderindia.com/vendor/wissen-technology-setting-new-benchmarks-in-technology-consulting-cid-1064.html

Posted 5 days ago

Apply

6.0 years

10 - 20 Lacs

Pune, Maharashtra, India

On-site


Senior Data Engineer (On-Site, India)

About The Opportunity:

A high-growth innovator in the Analytics & Enterprise Data Management sector, we architect and deliver cloud-native data platforms that power real-time reporting, AI/ML workloads, and intelligent decisioning for global retail, fintech, and manufacturing leaders. Our engineering teams transform raw, high-velocity data into trusted, analytics-ready assets that drive revenue acceleration and operational excellence.

Role & Responsibilities:
- Design, build, and optimise batch and streaming ETL/ELT pipelines on Apache Spark and Kafka, ensuring sub-minute latency and 99.9% uptime.
- Develop modular, test-driven Python code to ingest, cleanse, and enrich terabyte-scale datasets from relational, NoSQL, and API sources.
- Model data for analytics and AI, implementing star/snowflake schemas, partitioning, and clustering in BigQuery, Redshift, or Snowflake (a DDL sketch follows this listing).
- Automate workflow orchestration with Apache Airflow, defining DAGs, dependency management, and robust alerting for SLA adherence.
- Collaborate with Data Scientists and BI teams to expose feature stores, curated marts, and self-service semantic layers.
- Enforce data-governance best practices (lineage, cataloguing, RBAC, and encryption) in compliance with GDPR and SOC 2 standards.

Skills & Qualifications:

Must-Have:
- 3–6 years of hands-on experience engineering large-scale data pipelines in production.
- Expertise in Python and advanced SQL for ETL, optimisation, and performance tuning.
- Proven experience with Spark (PySpark or Scala) and streaming technologies such as Kafka or Kinesis.
- Deep knowledge of relational modelling, data-warehousing concepts, and at least one cloud DWH (BigQuery, Redshift, or Snowflake).
- Solid command of CI/CD, Git workflows, and containerisation (Docker).

Preferred:
- Exposure to infrastructure-as-code (Terraform, CloudFormation) and Kubernetes.
- Experience integrating ML feature stores and monitoring data quality with Great Expectations or similar tools.
- Certification in AWS, GCP, or Azure data services.

Benefits & Culture Highlights:
- On-site, engineer-first environment with dedicated lab space and the latest Mac/Linux gear.
- Rapid career progression through technical mentorship, sponsored certifications, and conference budgets.
- Inclusive, innovation-driven culture that rewards outcome ownership and creative problem-solving.

Ready to architect next-gen data pipelines that power AI at scale? Apply now and join a mission-focused team turning data into competitive advantage.

Skills: airflow, docker, snowflake, apache spark, data engineering, redshift, sql, pyspark, python, data modeling, bigquery, ci/cd, apache airflow, data warehousing, spark, etl, kafka, git
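The warehouse-modelling bullet (partitioning and clustering in BigQuery) might look like the following minimal sketch using the google-cloud-bigquery client. The dataset, table, and column names are assumptions, and the target dataset is assumed to already exist:

```python
# Minimal sketch: creating a date-partitioned, clustered fact table in
# BigQuery via DDL. Dataset/table/column names are illustrative.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

ddl = """
CREATE TABLE IF NOT EXISTS analytics.fact_orders (
  order_id STRING,
  customer_id STRING,
  order_ts TIMESTAMP,
  amount NUMERIC
)
PARTITION BY DATE(order_ts)        -- prunes scans to the queried days
CLUSTER BY customer_id, order_id   -- co-locates rows for selective filters
"""

client.query(ddl).result()  # .result() blocks until the DDL job finishes
print("table created")
```

Partitioning on the event timestamp and clustering on the most selective filter columns is the usual way to cut both query cost and latency on large fact tables.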

Posted 5 days ago

Apply

6.0 years

10 - 20 Lacs

Mumbai Metropolitan Region

On-site


Senior Data Engineer (On-Site, India)

About The Opportunity:

A high-growth innovator in the Analytics & Enterprise Data Management sector, we architect and deliver cloud-native data platforms that power real-time reporting, AI/ML workloads, and intelligent decisioning for global retail, fintech, and manufacturing leaders. Our engineering teams transform raw, high-velocity data into trusted, analytics-ready assets that drive revenue acceleration and operational excellence.

Role & Responsibilities:
- Design, build, and optimise batch and streaming ETL/ELT pipelines on Apache Spark and Kafka, ensuring sub-minute latency and 99.9% uptime.
- Develop modular, test-driven Python code to ingest, cleanse, and enrich terabyte-scale datasets from relational, NoSQL, and API sources.
- Model data for analytics and AI, implementing star/snowflake schemas, partitioning, and clustering in BigQuery, Redshift, or Snowflake.
- Automate workflow orchestration with Apache Airflow, defining DAGs, dependency management, and robust alerting for SLA adherence.
- Collaborate with Data Scientists and BI teams to expose feature stores, curated marts, and self-service semantic layers.
- Enforce data-governance best practices (lineage, cataloguing, RBAC, and encryption) in compliance with GDPR and SOC 2 standards.

Skills & Qualifications:

Must-Have:
- 3–6 years of hands-on experience engineering large-scale data pipelines in production.
- Expertise in Python and advanced SQL for ETL, optimisation, and performance tuning.
- Proven experience with Spark (PySpark or Scala) and streaming technologies such as Kafka or Kinesis.
- Deep knowledge of relational modelling, data-warehousing concepts, and at least one cloud DWH (BigQuery, Redshift, or Snowflake).
- Solid command of CI/CD, Git workflows, and containerisation (Docker).

Preferred:
- Exposure to infrastructure-as-code (Terraform, CloudFormation) and Kubernetes.
- Experience integrating ML feature stores and monitoring data quality with Great Expectations or similar tools.
- Certification in AWS, GCP, or Azure data services.

Benefits & Culture Highlights:
- On-site, engineer-first environment with dedicated lab space and the latest Mac/Linux gear.
- Rapid career progression through technical mentorship, sponsored certifications, and conference budgets.
- Inclusive, innovation-driven culture that rewards outcome ownership and creative problem-solving.

Ready to architect next-gen data pipelines that power AI at scale? Apply now and join a mission-focused team turning data into competitive advantage.

Skills: airflow, docker, snowflake, apache spark, data engineering, redshift, sql, pyspark, python, data modeling, bigquery, ci/cd, apache airflow, data warehousing, spark, etl, kafka, git

Posted 5 days ago

Apply

6.0 years

10 - 20 Lacs

Greater Kolkata Area

On-site


Senior Data Engineer (On-Site, India)

About The Opportunity:

A high-growth innovator in the Analytics & Enterprise Data Management sector, we architect and deliver cloud-native data platforms that power real-time reporting, AI/ML workloads, and intelligent decisioning for global retail, fintech, and manufacturing leaders. Our engineering teams transform raw, high-velocity data into trusted, analytics-ready assets that drive revenue acceleration and operational excellence.

Role & Responsibilities:
- Design, build, and optimise batch and streaming ETL/ELT pipelines on Apache Spark and Kafka, ensuring sub-minute latency and 99.9% uptime.
- Develop modular, test-driven Python code to ingest, cleanse, and enrich terabyte-scale datasets from relational, NoSQL, and API sources.
- Model data for analytics and AI, implementing star/snowflake schemas, partitioning, and clustering in BigQuery, Redshift, or Snowflake.
- Automate workflow orchestration with Apache Airflow, defining DAGs, dependency management, and robust alerting for SLA adherence.
- Collaborate with Data Scientists and BI teams to expose feature stores, curated marts, and self-service semantic layers.
- Enforce data-governance best practices (lineage, cataloguing, RBAC, and encryption) in compliance with GDPR and SOC 2 standards.

Skills & Qualifications:

Must-Have:
- 3–6 years of hands-on experience engineering large-scale data pipelines in production.
- Expertise in Python and advanced SQL for ETL, optimisation, and performance tuning.
- Proven experience with Spark (PySpark or Scala) and streaming technologies such as Kafka or Kinesis.
- Deep knowledge of relational modelling, data-warehousing concepts, and at least one cloud DWH (BigQuery, Redshift, or Snowflake).
- Solid command of CI/CD, Git workflows, and containerisation (Docker).

Preferred:
- Exposure to infrastructure-as-code (Terraform, CloudFormation) and Kubernetes.
- Experience integrating ML feature stores and monitoring data quality with Great Expectations or similar tools.
- Certification in AWS, GCP, or Azure data services.

Benefits & Culture Highlights:
- On-site, engineer-first environment with dedicated lab space and the latest Mac/Linux gear.
- Rapid career progression through technical mentorship, sponsored certifications, and conference budgets.
- Inclusive, innovation-driven culture that rewards outcome ownership and creative problem-solving.

Ready to architect next-gen data pipelines that power AI at scale? Apply now and join a mission-focused team turning data into competitive advantage.

Skills: airflow, docker, snowflake, apache spark, data engineering, redshift, sql, pyspark, python, data modeling, bigquery, ci/cd, apache airflow, data warehousing, spark, etl, kafka, git

Posted 5 days ago

Apply

6.0 years

10 - 20 Lacs

Bhubaneswar, Odisha, India

On-site


Senior Data Engineer (On-Site, India)

About The Opportunity:

A high-growth innovator in the Analytics & Enterprise Data Management sector, we architect and deliver cloud-native data platforms that power real-time reporting, AI/ML workloads, and intelligent decisioning for global retail, fintech, and manufacturing leaders. Our engineering teams transform raw, high-velocity data into trusted, analytics-ready assets that drive revenue acceleration and operational excellence.

Role & Responsibilities:
- Design, build, and optimise batch and streaming ETL/ELT pipelines on Apache Spark and Kafka, ensuring sub-minute latency and 99.9% uptime.
- Develop modular, test-driven Python code to ingest, cleanse, and enrich terabyte-scale datasets from relational, NoSQL, and API sources.
- Model data for analytics and AI, implementing star/snowflake schemas, partitioning, and clustering in BigQuery, Redshift, or Snowflake.
- Automate workflow orchestration with Apache Airflow, defining DAGs, dependency management, and robust alerting for SLA adherence.
- Collaborate with Data Scientists and BI teams to expose feature stores, curated marts, and self-service semantic layers.
- Enforce data-governance best practices (lineage, cataloguing, RBAC, and encryption) in compliance with GDPR and SOC 2 standards.

Skills & Qualifications:

Must-Have:
- 3–6 years of hands-on experience engineering large-scale data pipelines in production.
- Expertise in Python and advanced SQL for ETL, optimisation, and performance tuning.
- Proven experience with Spark (PySpark or Scala) and streaming technologies such as Kafka or Kinesis.
- Deep knowledge of relational modelling, data-warehousing concepts, and at least one cloud DWH (BigQuery, Redshift, or Snowflake).
- Solid command of CI/CD, Git workflows, and containerisation (Docker).

Preferred:
- Exposure to infrastructure-as-code (Terraform, CloudFormation) and Kubernetes.
- Experience integrating ML feature stores and monitoring data quality with Great Expectations or similar tools.
- Certification in AWS, GCP, or Azure data services.

Benefits & Culture Highlights:
- On-site, engineer-first environment with dedicated lab space and the latest Mac/Linux gear.
- Rapid career progression through technical mentorship, sponsored certifications, and conference budgets.
- Inclusive, innovation-driven culture that rewards outcome ownership and creative problem-solving.

Ready to architect next-gen data pipelines that power AI at scale? Apply now and join a mission-focused team turning data into competitive advantage.

Skills: airflow, docker, snowflake, apache spark, data engineering, redshift, sql, pyspark, python, data modeling, bigquery, ci/cd, apache airflow, data warehousing, spark, etl, kafka, git

Posted 5 days ago

Apply

6.0 years

10 - 20 Lacs

Bengaluru, Karnataka, India

On-site


Senior Data Engineer (On-Site, India)

About The Opportunity:

A high-growth innovator in the Analytics & Enterprise Data Management sector, we architect and deliver cloud-native data platforms that power real-time reporting, AI/ML workloads, and intelligent decisioning for global retail, fintech, and manufacturing leaders. Our engineering teams transform raw, high-velocity data into trusted, analytics-ready assets that drive revenue acceleration and operational excellence.

Role & Responsibilities:
- Design, build, and optimise batch and streaming ETL/ELT pipelines on Apache Spark and Kafka, ensuring sub-minute latency and 99.9% uptime.
- Develop modular, test-driven Python code to ingest, cleanse, and enrich terabyte-scale datasets from relational, NoSQL, and API sources.
- Model data for analytics and AI, implementing star/snowflake schemas, partitioning, and clustering in BigQuery, Redshift, or Snowflake.
- Automate workflow orchestration with Apache Airflow, defining DAGs, dependency management, and robust alerting for SLA adherence.
- Collaborate with Data Scientists and BI teams to expose feature stores, curated marts, and self-service semantic layers.
- Enforce data-governance best practices (lineage, cataloguing, RBAC, and encryption) in compliance with GDPR and SOC 2 standards.

Skills & Qualifications:

Must-Have:
- 3–6 years of hands-on experience engineering large-scale data pipelines in production.
- Expertise in Python and advanced SQL for ETL, optimisation, and performance tuning.
- Proven experience with Spark (PySpark or Scala) and streaming technologies such as Kafka or Kinesis.
- Deep knowledge of relational modelling, data-warehousing concepts, and at least one cloud DWH (BigQuery, Redshift, or Snowflake).
- Solid command of CI/CD, Git workflows, and containerisation (Docker).

Preferred:
- Exposure to infrastructure-as-code (Terraform, CloudFormation) and Kubernetes.
- Experience integrating ML feature stores and monitoring data quality with Great Expectations or similar tools.
- Certification in AWS, GCP, or Azure data services.

Benefits & Culture Highlights:
- On-site, engineer-first environment with dedicated lab space and the latest Mac/Linux gear.
- Rapid career progression through technical mentorship, sponsored certifications, and conference budgets.
- Inclusive, innovation-driven culture that rewards outcome ownership and creative problem-solving.

Ready to architect next-gen data pipelines that power AI at scale? Apply now and join a mission-focused team turning data into competitive advantage.

Skills: airflow, docker, snowflake, apache spark, data engineering, redshift, sql, pyspark, python, data modeling, bigquery, ci/cd, apache airflow, data warehousing, spark, etl, kafka, git

Posted 5 days ago

Apply

6.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


ROLE: SQL DBA
Location: Chennai
Experience: 5-7 yrs

Role and Responsibilities:
1. Extensive working knowledge of SQL Server and PostgreSQL.
2. Configure DBMS monitoring utilities to minimize false alarms. Implement automation using PowerShell scripts; good knowledge of PowerShell scripting is a must.
3. Comprehensive implementation knowledge of SSIS, SSAS, and SSRS.
4. Ensure all database servers are backed up in a way that meets the business's Recovery Point Objectives (RPO); see the backup-age sketch after this listing.
5. Test backups to ensure we can meet the business's Recovery Time Objectives (RTO).
6. Troubleshoot database service outages as they occur, including after hours and on weekends.
7. As new systems are brought in-house, choose whether to use clustering, log shipping, mirroring, SQL Azure, or Always On.
8. Implementation knowledge of high availability with Oracle Data Guard, GoldenGate, Always On, and SQL Azure.
9. Install and configure SQL Server, PostgreSQL, and Oracle.
10. Deploy database change scripts provided by third-party vendors.
11. SQL Server migration experience from older to newer versions.
12. Cross-platform migration experience (Oracle to SQL, Oracle to DB2, and MySQL to PostgreSQL).
13. When performance issues arise, determine the most effective way to increase performance, including hardware purchases, server configuration changes, or index/query changes.
14. Document the company's database environment.
15. Ensure that new database code meets company standards for readability, reliability, and performance.
16. Each week, give developers a list of the top 10 most resource-intensive queries on the server and suggest ways to improve the performance of each.
17. Design indexes for existing applications, choosing when to add or remove indexes.
18. When users complain about the performance of a particular query, help developers improve that query's performance by tweaking it or modifying indexes.
19. Advise developers on the most efficient database designs (tables, datatypes, stored procedures, functions, etc.).
20. Implementation experience in application performance tuning and optimization.

The following will be the major responsibilities:
1. L2/L3 Microsoft SQL and PostgreSQL administration.
2. Experience in installing, configuring, and designing SQL instances.
3. Demonstrated experience supporting, optimizing, and maintaining Microsoft SQL 2016/2019 infrastructure with high availability.
4. Working knowledge and hands-on experience of on-prem to Azure SQL Server migration is highly desirable.
5. PowerShell scripting knowledge is a must.
6. Good communication.
7. Ability to interact and coordinate with multiple stakeholders.

Some tasks:
1. Monthly patching.
2. Cumulative patching.
3. Conduct SQL Server lunch-and-learn sessions for application developers.
4. Knowledge of virtualization (VMware, cloud) and storage concepts.

Additional Comments: Overall 6-8 years of experience with ITIL/ITSM Service Management. Should be willing to work in night shifts.
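Responsibilities 4 and 5 (meeting RPO/RTO) are commonly automated. The posting calls for PowerShell, but as a language-neutral illustration, here is a minimal, hypothetical Python sketch using pyodbc that flags databases whose latest full backup is older than an assumed 24-hour RPO; the connection string and threshold are illustrative:

```python
# Minimal sketch: flag SQL Server databases whose most recent full backup
# exceeds a 24-hour RPO, using the msdb backup history tables.
# Connection string and threshold are illustrative assumptions.
import pyodbc

RPO_HOURS = 24

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=localhost;"
    "DATABASE=msdb;Trusted_Connection=yes;TrustServerCertificate=yes;"
)

sql = """
SELECT d.name,
       MAX(b.backup_finish_date) AS last_full_backup
FROM   sys.databases d
       LEFT JOIN msdb.dbo.backupset b
              ON b.database_name = d.name AND b.type = 'D'  -- 'D' = full backup
WHERE  d.name NOT IN ('tempdb')
GROUP  BY d.name
HAVING MAX(b.backup_finish_date) IS NULL
    OR MAX(b.backup_finish_date) < DATEADD(HOUR, ?, GETDATE())
"""

# Pass the threshold as a negative offset (e.g. -24 hours from now).
for name, last_backup in conn.execute(sql, -RPO_HOURS):
    print(f"RPO breach: {name} (last full backup: {last_backup})")
```

The same query translates directly to PowerShell via Invoke-SqlCmd; testing restores (item 5) still requires actually restoring to a scratch server, which no history query can replace.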

Posted 5 days ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


https://forms.office.com/r/JT9GG2968G
Kindly fill in the form. Profiles will be considered based only on the responses in the form.

Summary:

We are seeking a highly skilled and experienced DBA to join our expanding Information Technology team. In this role, you will help develop and design technology solutions that are scalable, relevant, and critical to our company's success. You will join the team working on our new platform being built using MS SQL Server and MySQL Server. You will participate in all phases of the development lifecycle, implementation, maintenance, and support, and must have a solid skill set, a desire to continue to grow as a Database Administrator, and a team-player mentality.

Key Responsibilities:
1. Primary responsibility will be the management of production database servers, including security, deployment, maintenance, and performance monitoring.
2. Setting up SQL Server replication, mirroring, and high availability as required across hybrid environments.
3. Design and implementation of new installations on Azure, AWS, and cloud hosting without managed DB services.
4. Deploy and maintain on-premise installations of SQL Server on Linux and MySQL.
5. Database security and protection against SQL injection, exploitation of intellectual property, etc.
6. Work with development teams, assisting with data storage and query design/optimization where required.
7. Participate in the design and implementation of essential applications.
8. Demonstrate expertise and add valuable input throughout the development lifecycle.
9. Help design and implement scalable, lasting technology solutions.
10. Review current systems, suggesting updates as required.
11. Gather requirements from internal and external stakeholders.
12. Document procedures to set up and maintain a highly available SQL Server database on Azure cloud, on-premise, and hybrid environments.
13. Test and debug new applications and updates.
14. Resolve reported issues and reply to queries in a timely manner.
15. Remain up to date on all current best practices, trends, and industry developments.
16. Identify potential challenges and bottlenecks in order to address them proactively.

Key Competencies/Skillsets:
- SQL Server management in hybrid environments (on-premise and cloud, preferably Azure or AWS).
- MySQL backup, SQL Server backup, replication, clustering, and log-shipping experience on Linux/Windows (a replica health-check sketch follows this listing).
- Setting up, managing, and maintaining SQL Server and MySQL on Linux.
- Experience with database usage and management.
- Experience implementing Azure Hyperscale databases.
- Experience in the Financial Services / E-Commerce / Payments industry preferred.
- Familiarity with multi-tier, object-oriented, secure application design architecture.
- Experience in cloud environments, preferably Microsoft Azure, across database service tiers.
- Experience with PCI DSS a plus.
- SQL development experience a plus.
- Linux experience a plus.
- Proficient in using issue-tracking tools like Jira.
- Proficient in using version control systems like Git, SVN, etc.
- Strong understanding of web-based applications and technologies.
- Sense of ownership and pride in your performance and its impact on the company's success.
- Critical thinking and problem-solving skills.
- Excellent communication skills and the ability to communicate with clients via different modes of communication: email, phone, direct messaging, etc.

Preferred Education and Experience:
1. Bachelor's degree in computer science or a related field.
2. Minimum 3 years' experience as a SQL Server DBA, and 2+ years' experience as a MySQL DBA, including replication, InnoDB Cluster, upgrading, and patching.
3. Ubuntu Linux knowledge is preferred.
4. MCTS, MCITP, MVP, Azure DBA, and/or MySQL certifications a plus.
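As an illustration of the replication competencies above, here is a minimal, hypothetical sketch using mysql-connector-python to check replica health. The host, credentials, and lag threshold are invented; on MySQL versions before 8.0.22 the statement is SHOW SLAVE STATUS and the columns use the Slave_*/Seconds_Behind_Master names:

```python
# Minimal sketch: report MySQL replica health and lag.
# Host, credentials, and threshold are illustrative assumptions.
import mysql.connector

MAX_LAG_SECONDS = 60

conn = mysql.connector.connect(
    host="replica-01", user="monitor", password="secret"
)
cursor = conn.cursor(dictionary=True)

# MySQL 8.0.22+ terminology; older servers use SHOW SLAVE STATUS.
cursor.execute("SHOW REPLICA STATUS")
status = cursor.fetchone()

if status is None:
    print("not configured as a replica")
else:
    lag = status["Seconds_Behind_Source"]  # Seconds_Behind_Master pre-8.0.22
    running = (
        status["Replica_IO_Running"] == "Yes"
        and status["Replica_SQL_Running"] == "Yes"
    )
    if not running or lag is None or lag > MAX_LAG_SECONDS:
        print(f"replication unhealthy: running={running}, lag={lag}")
    else:
        print(f"replication healthy, {lag}s behind source")
```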

Posted 5 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


At PDI Technologies, we empower some of the world's leading convenience retail and petroleum brands with cutting-edge technology solutions that drive growth and operational efficiency. By "Connecting Convenience" across the globe, we empower businesses to increase productivity, make more informed decisions, and engage faster with customers through loyalty programs, shopper insights, and unmatched real-time market intelligence via mobile applications such as GasBuddy. We're a global team committed to excellence, collaboration, and driving real impact. Explore our opportunities and become part of a company that values diversity, integrity, and growth.

Role Overview:

Do you love building software that thrills your customers? Do you insist on the highest standards for the software your team develops? Are you a progressive software engineer, an advocate of agile development practices, and a proponent of continuous improvement? If this is you, then join an energetic team of engineers building the next generation of solutions at PDI! As an engineering leader, you will lead Agile engineering resources and provide guidance from inception through release of major and point product releases, including ongoing maintenance. You will work closely with your product managers, product owners, engineering leaders, your team, and other stakeholders. You will lead developers and quality engineers, and partner with CloudOps, TechOps, UX Design, and other cross-functional groups to evolve our solutions while continuing to improve your teams' adoption of SDLC processes, CI/CD integration, code quality, and automation test coverage.

Key Responsibilities:
- Lead an organization of 4-20 development and test engineers globally to efficiently produce high-quality deliverables.
- Manage team leads, direct reports, or a mix of both.
- Manage several deliverables for a product line on time, on scope, and on quality.
- Instrument your processes, produce regular progress scorecards, and establish a regular cadence of operational reviews with your management, covering quality metrics, coding efficiencies, improvements, challenges, and remediation needs.
- Correlate, report, and drive the adoption of process/continuous-improvement initiatives.
- Recruit and provide leadership, coaching, and career planning for engineering talent.
- Be accountable for design decisions for new and existing application development, proactively escalating issues and seeking assistance to overcome obstacles.
- Partner with Product Management to consult on solution feasibility and high-level effort estimation.
- Communicate with customers to ensure that expectations and support needs are met.
- Provide architectural guidance to your teams towards our PDI Cloud & Platform strategy.
- Make recommendations for technology adoption and framework improvement, analyzing trends, patterns, and best practices for software.
- Serve as the evangelist and custodian of technology, architecture, and product development practices.
- Participate in the design and implementation of production cloud-grade services supporting high availability.
- Actively talent-manage your team, providing career planning and performance improvement activities when needed.

Qualifications:
- 5+ years of experience leading software engineers for product development.
- Experience managing capitalized software processes.
- Preferred: experience managing teams' operational health by analyzing product teams' work distribution (Capex, Opex, maintenance, billable, and overhead).
- Preferred: experience managing the organizational structure of teams as well as headcount and non-headcount budgets.
- 10+ years of combined experience in software engineering, enterprise architecture, and/or DevOps.
- Working experience with scaled software architecture and its domain: performance, redundancy, failover, clustering, and vertical scaling.
- Working experience with source code management patterns and DevOps automation.
- Proficient in API design, development, and production operation.
- Working experience with at least one mainstream operating system and IP networking.
- Working experience managing production client and server code bases across one or more technology stacks.
- Working experience with production SQL schema design, queries, and administration in one or more mainstream relational and/or NoSQL databases.
- Preferred: working experience with orchestration, automation, and configuration management processes and related DevOps tools and cloud platforms.
- Preferred: working experience with event-based systems, streaming architecture, and related technologies.
- Highly motivated self-starter with a desire to help others and take action.
- Strong written and verbal communication skills, with the ability to translate technical concepts into non-technical terms.
- Ability to work independently as a contributing member of a high-paced, focused team.
- Ability to multi-task and prioritize tasks with competing deadlines.
- Strong problem-solving and analytical skills, with the ability to work under pressure.
- Ability to socialize ideas and influence decisions without direct authority.
- Collaborative in nature, with a strong desire to dig in and learn independently as well as through asking questions.
- Considers 'best-practice' standards as well as departmental policies and procedures.

Behavioral Competencies: Ensures Accountability, Manages Complexity, Communicates Effectively, Balances Stakeholders, Collaborates Effectively.

PDI is committed to offering a well-rounded benefits program designed to support and care for you and your family throughout your life and career. This includes a competitive salary, market-competitive benefits, and a quarterly perks program. We encourage a good work-life balance with ample time away and, where appropriate, hybrid working arrangements. Employees have access to continuous learning, professional certifications, and leadership development opportunities. Our global culture fosters diversity and inclusion and values authenticity, trust, curiosity, and diversity of thought, ensuring a supportive environment for all.

Posted 5 days ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


At PDI Technologies, we empower some of the world's leading convenience retail and petroleum brands with cutting-edge technology solutions that drive growth and operational efficiency. By "Connecting Convenience" across the globe, we empower businesses to increase productivity, make more informed decisions, and engage faster with customers through loyalty programs, shopper insights, and unmatched real-time market intelligence via mobile applications such as GasBuddy. We're a global team committed to excellence, collaboration, and driving real impact. Explore our opportunities and become part of a company that values diversity, integrity, and growth.

Role Overview:

Do you love building software that thrills your customers? Do you insist on the highest standards for the software your team develops? Are you a progressive software engineer, an advocate of agile development practices, and a proponent of continuous improvement? If this is you, then join an energetic team of engineers building the next generation of solutions at PDI! As an engineering leader, you will lead Agile engineering resources and provide guidance from inception through release of major and point product releases, including ongoing maintenance. You will work closely with your product managers, product owners, engineering leaders, your team, and other stakeholders. You will lead developers and quality engineers, and partner with CloudOps, TechOps, UX Design, and other cross-functional groups to evolve our solutions while continuing to improve your teams' adoption of SDLC processes, CI/CD integration, code quality, and automation test coverage.

Key Responsibilities:
- Lead an organization of 4-20 development and test engineers globally to efficiently produce high-quality deliverables.
- Manage team leads, direct reports, or a mix of both.
- Manage several deliverables for a product line on time, on scope, and on quality.
- Instrument your processes, produce regular progress scorecards, and establish a regular cadence of operational reviews with your management, covering quality metrics, coding efficiencies, improvements, challenges, and remediation needs.
- Correlate, report, and drive the adoption of process/continuous-improvement initiatives.
- Recruit and provide leadership, coaching, and career planning for engineering talent.
- Be accountable for design decisions for new and existing application development, proactively escalating issues and seeking assistance to overcome obstacles.
- Partner with Product Management to consult on solution feasibility and high-level effort estimation.
- Communicate with customers to ensure that expectations and support needs are met.
- Provide architectural guidance to your teams towards our PDI Cloud & Platform strategy.
- Make recommendations for technology adoption and framework improvement, analyzing trends, patterns, and best practices for software.
- Serve as the evangelist and custodian of technology, architecture, and product development practices.
- Participate in the design and implementation of production cloud-grade services supporting high availability.
- Actively talent-manage your team, providing career planning and performance improvement activities when needed.

Qualifications:
- 5+ years of experience leading software engineers for product development.
- Experience managing capitalized software processes.
- Preferred: experience managing teams' operational health by analyzing product teams' work distribution (Capex, Opex, maintenance, billable, and overhead).
- Preferred: experience managing the organizational structure of teams as well as headcount and non-headcount budgets.
- 10+ years of combined experience in software engineering, enterprise architecture, and/or DevOps.
- Working experience with scaled software architecture and its domain: performance, redundancy, failover, clustering, and vertical scaling.
- Working experience with source code management patterns and DevOps automation.
- Proficient in API design, development, and production operation.
- Working experience with at least one mainstream operating system and IP networking.
- Working experience managing production client and server code bases across one or more technology stacks.
- Working experience with production SQL schema design, queries, and administration in one or more mainstream relational and/or NoSQL databases.
- Preferred: working experience with orchestration, automation, and configuration management processes and related DevOps tools and cloud platforms.
- Preferred: working experience with event-based systems, streaming architecture, and related technologies.
- Highly motivated self-starter with a desire to help others and take action.
- Strong written and verbal communication skills, with the ability to translate technical concepts into non-technical terms.
- Ability to work independently as a contributing member of a high-paced, focused team.
- Ability to multi-task and prioritize tasks with competing deadlines.
- Strong problem-solving and analytical skills, with the ability to work under pressure.
- Ability to socialize ideas and influence decisions without direct authority.
- Collaborative in nature, with a strong desire to dig in and learn independently as well as through asking questions.
- Considers 'best-practice' standards as well as departmental policies and procedures.

Behavioral Competencies: Ensures Accountability, Manages Complexity, Communicates Effectively, Balances Stakeholders, Collaborates Effectively.

PDI is committed to offering a well-rounded benefits program designed to support and care for you and your family throughout your life and career. This includes a competitive salary, market-competitive benefits, and a quarterly perks program. We encourage a good work-life balance with ample time away and, where appropriate, hybrid working arrangements. Employees have access to continuous learning, professional certifications, and leadership development opportunities. Our global culture fosters diversity and inclusion and values authenticity, trust, curiosity, and diversity of thought, ensuring a supportive environment for all.

Posted 5 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


At PDI Technologies, we empower some of the world's leading convenience retail and petroleum brands with cutting-edge technology solutions that drive growth and operational efficiency. By “Connecting Convenience” across the globe, we empower businesses to increase productivity, make more informed decisions, and engage faster with customers through loyalty programs, shopper insights, and unmatched real-time market intelligence via mobile applications, such as GasBuddy. We’re a global team committed to excellence, collaboration, and driving real impact. Explore our opportunities and become part of a company that values diversity, integrity, and growth. Role Overview: Do you love building software that thrills your customers? Do you insist on the highest standards for the software your team develops? Are you a progressive software engineer, an advocate of agile development practices, and a proponent of continuous improvement? If this is you, then join and energetic team of engineers building next generation of solutions at PDI! As an engineering leader, you will lead Agile engineering resources & provide guidance from inception through release of major & point product releases, including ongoing maintenance. You will be working closely with your product managers, product owners, engineering leaders, your team and other stakeholders. You will be leading developers, quality engineers and partnering with CloudOps, TechOps, UX Design other cross functional functional groups to evolve our solutions while continuing to improve your teams’ adoption of SDLC processes, CI/CD integration, code quality & automation test coverage. Key Responsibilities: Lead an organization of 4-20 development & test engineers globally to efficiently produce high quality deliverables Manage team leads, direct reports or a mix of both Manage several deliverables for a product line on time, on scope and on quality Instrument your processes, produce scorecards of progress regularly and establish a regular cadence of operational reviews with your management including quality metrics, coding efficiencies, improvements, challenges, remediation needs Correlate, report, and drive the adoption of Process/Continuous Improvement initiatives Recruit & provide leadership, coaching & career planning for engineering talent Be accountable for design decisions for new and existing application development, proactively escalating issues and seeking assistance to overcome obstacles Partner with Product Management to consult on solution feasibility and high-level effort estimation Communicate with customers to ensure that expectations and support needs are met Provide architectural guidance to your teams towards our PDI Cloud & Platform strategy Make recommendation for technology adoption and framework improvement, analyzing trends, patterns and best practices for software Serve as the evangelist and custodian of technology, architecture, and product development practices Participate in the design & implementation of production cloud grade services supporting high availability Actively talent manage your team providing career planning & performance improvement activities when needed Qualifications: 5+ years of experience leading software engineers for product development Experience managing capitalized software processes Preferred: experience with managing teams' operational health by analyzing product teams' work distribution Capex, OpenX, Maintenance, Billable and OH Preferred: experience managing the organizational structure of teams as 
well as headcount & non-headcount budgets 10+ years of combined experience in software engineering, enterprise architecture and/or DevOps Working experience with scaled software architecture & domains: performance, redundancy, failover, clustering, vertical scaling Working experience with source code management patterns and DevOps automation Proficient in API design, development & production operation Working experience with at least one mainstream operating system and IP networking Working experience managing production client & server code bases across one or more technology stacks Working experience with production SQL schema design, queries & administration in one or more mainstream relational and/or NoSQL databases Preferred: working experience with orchestration, automation, and configuration management processes & related DevOps tools & cloud platforms Preferred: working experience with event-based systems, streaming architecture & related technologies Highly motivated self-starter with a desire to help others and take action Strong written and verbal communication skills, with the ability to translate technical concepts into non-technical terms Ability to work independently as a contributing member of a fast-paced and focused team Ability to multi-task and prioritize tasks with competing deadlines Strong problem-solving and analytical skills with the ability to work under pressure Ability to socialize ideas and influence decisions without direct authority Collaborative in nature, with a strong desire to dig in and learn independently as well as through asking questions Considers ‘best-practice’ standards, as well as departmental policies and procedures Behavioral Competencies: Ensures Accountability Manages Complexity Communicates Effectively Balances Stakeholders Collaborates Effectively PDI is committed to offering a well-rounded benefits program, designed to support and care for you and your family throughout your life and career. This includes a competitive salary, market-competitive benefits, and a quarterly perks program. We encourage a good work-life balance with ample time off and, where appropriate, hybrid working arrangements. Employees have access to continuous learning, professional certifications, and leadership development opportunities. Our global culture fosters diversity and inclusion, and values authenticity, trust, curiosity, and diversity of thought, ensuring a supportive environment for all.

Posted 5 days ago

Apply

0 years

0 Lacs

Greater Bengaluru Area

On-site

Linkedin logo

Senior analyst with ML model experience for Regions Bank Job Description: Primary Responsibilities for a Risk Data Scientist on the BSA/AML/OFAC Model Development and Monitoring Team: Design and develop transaction monitoring scenarios. Improve segmentation for the scenarios/models using techniques such as clustering. Execute periodic tuning of the scenarios' threshold parameters through sample collection. Develop post-processing models to reduce the number of false positives generated by the scenarios, using techniques such as rare-event logistic regression and machine learning algorithms. Research and develop algorithms that incorporate fuzzy logic for OFAC sanctions screening and other types of screening processes. Develop post-processing models that incorporate Natural Language Processing techniques to reduce the number of false positives. Implement models. Develop and execute an ongoing monitoring plan to track the performance of the models. Document model development, especially the implementation process. Support model validation activities. Perform ad hoc analyses to address requests from business partners. Requirements Bachelor’s degree in Statistics, Data Science, Operations Research, Industrial Engineering, Mathematics, or Physics, AND six (6) years related experience; OR Master’s degree in Statistics, Data Science, Operations Research, Industrial Engineering, Mathematics, or Physics, AND four (4) years related experience; OR PhD degree in Statistics, Operations Research, Industrial Engineering, Mathematics, or Physics, AND two (2) years related experience. Skills and Competencies High proficiency with Python, R, SAS, and SQL. Advanced data sourcing and management skills. Hands-on experience with web scraping. Knowledge and experience with the CDSW platform. Experience with software development, especially deployment. Experience with classification models. Location: DGS India - Bengaluru - Manyata N1 Block Brand: Merkle Time Type: Full time Contract Type: Permanent
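
To make the techniques this posting names concrete, here is a minimal sketch, on synthetic data, of clustering-based segmentation plus a class-weighted logistic regression used as a post-processing false-positive filter. The features, thresholds, and model choices are illustrative assumptions, not Regions Bank's actual scenarios or models.

```python
# Minimal sketch (synthetic data): segmentation via k-means, then a
# post-processing classifier to flag which alerts are likely productive.
# class_weight="balanced" stands in for rare-event corrections such as
# rare-event logistic regression; all numbers are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Synthetic customer features: avg txn amount, txn count, international ratio.
customers = rng.normal(size=(1000, 3))
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(customers)
# Each segment would then get its own scenario thresholds during tuning.

# Synthetic alert features with a rare "productive alert" outcome (a few %).
alerts = rng.normal(size=(5000, 3))
productive = (alerts[:, 0] + 0.5 * alerts[:, 1] + rng.normal(0, 0.5, 5000)) > 2.2

X_tr, X_te, y_tr, y_te = train_test_split(alerts, productive, random_state=0)
clf = LogisticRegression(class_weight="balanced").fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```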

Posted 6 days ago

Apply

3.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Linkedin logo

Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title And Summary Data Scientist II-3 Who is Mastercard? Mastercard is a global technology company in the payments industry. Our mission is to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart, and accessible. Using secure data and networks, partnerships and passion, our innovations and solutions help individuals, financial institutions, governments, and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. With connections across more than 210 countries and territories, we are building a sustainable world that unlocks priceless possibilities for all. Overview Finicity, a Mastercard company, is leading the Open Banking Initiative to increase the Financial Health of consumers and businesses. The Data Science and Analytics team is looking for a Data Scientist II. The Data Science team works on Intelligent Decisioning; Financial Certainty; Attribute, Feature, and Entity Resolution; Verification Solutions and much more. Join our team to make an impact across all sectors of the economy by consistently innovating and problem-solving. The ideal candidate is passionate about leveraging data to provide high-quality customer solutions, and is a strong technical leader who is extremely motivated, intellectually curious, analytical, and possesses an entrepreneurial mindset. Role Manipulate large data sets and apply technical and statistical analytical techniques (e.g., OLS, multinomial logistic regression, LDA, clustering, segmentation) to draw insights. Apply machine learning (e.g., SVM, Random Forest, XGBoost, LightGBM, CatBoost) and deep learning techniques (e.g., LSTM, RNN, Transformer) to solve analytical problem statements. Design and implement machine learning models for a number of financial applications including but not limited to: transaction classification, temporal analysis, and risk modeling from structured and unstructured data. Measure, validate, implement, monitor and improve performance of both internal and external facing machine learning models. Propose creative solutions to existing challenges that are new to the company, the financial industry and to data science. Present technical problems and findings to business leaders internally and to clients succinctly and clearly. Leverage best practices in machine learning and data engineering to develop scalable solutions. 
Identify areas where resources fall short of needs and provide thoughtful and sustainable solutions to benefit the team Be a strong, confident, and excellent writer and speaker, able to communicate your analysis, vision and roadmap effectively to a wide variety of stakeholders All About You: 3-5 years in data science / machine learning model development and deployment Exposure to financial transactional structured and unstructured data, transaction classification, risk evaluation and credit risk modeling is a plus. A strong understanding of NLP, statistical modeling, visualization and advanced data science techniques/methods. Ability to gain insights from text, including non-language tokens, and to apply annotation practices in text analysis. Solve problems that are new to the company, the financial industry and to data science SQL / database experience is preferred Experience with Kubernetes, containers, Docker, REST APIs, event streams or other delivery mechanisms. Familiarity with relevant technologies (e.g. TensorFlow, Python, Scikit-learn, Pandas, etc.). Strong desire to collaborate and ability to come up with creative solutions. Additional finance and FinTech experience preferred. Bachelor’s or Master’s Degree in Computer Science, Information Technology, Engineering, Mathematics, or Statistics. Corporate Security Responsibility Every person working for, or on behalf of, Mastercard is responsible for information security. All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that the successful candidate for this position must: Abide by Mastercard’s security policies and practices; Ensure the confidentiality and integrity of the information being accessed; Report any suspected information security violation or breach; and Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines. #AI R-247687
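
As a concrete illustration of the transaction classification task this posting names, the sketch below trains a toy classifier on merchant description strings. The labels and data are invented, and the posting's XGBoost/LightGBM/CatBoost choices are swapped for a self-contained scikit-learn pipeline.

```python
# Minimal transaction-classification sketch; merchant strings and labels are
# invented, and a linear model stands in for the boosted trees the posting
# names. Character n-grams cope with noisy, abbreviated merchant text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

descriptions = [
    "STARBUCKS STORE 123", "SHELL OIL 456", "NETFLIX.COM",
    "WHOLEFDS MKT", "CHEVRON GAS", "SPOTIFY AB",
]
labels = ["dining", "fuel", "subscriptions", "groceries", "fuel", "subscriptions"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(descriptions, labels)
print(model.predict(["SHELL SERVICE STATION", "STARBUCKS #99"]))
```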

Posted 6 days ago

Apply

0 years

2 - 3 Lacs

Hyderābād

On-site

GlassDoor logo

Overview: As Sales Sr. Mgr., ensure that exceptional leadership & operational direction is provided by his/her analyst team to sales employees across multiple teams and markets. His/her Planogram Analysts deliver visually appealing planograms based on store clustering, space definitions and defined flow. Work closely with Category Management and Space teams to ensure planograms meet approved parameters. Conduct planogram quality audits, ensuring all planograms meet assortment requirements, visual appeal, innovation opportunities and shelving metrics. Continuously identify opportunities and implement processes to improve quality, timeliness of output and process efficiency through automation. Responsibilities: Head the DX Sector Planogram Analyst team and ensure efficient, effective and comprehensive support of the sales employees across multiple teams and markets Lead and manage the Planogram Analyst work stream by working closely with the Sector Space & Planogram team Ensure accurate and timely delivery of tasks regarding: deliver visually appealing versioned planograms based on store clustering, space definitions and defined flow work closely with Category Management and Space teams to ensure planograms meet approved parameters conduct planogram quality control ensuring all planograms meet assortment requirements, visual appeal, innovation opportunities and shelving metrics electronically deliver planograms to both internal teams and external customer-specific systems manage multiple project timelines simultaneously ensure timelines are met by tracking project progress, coordinating activities and resolving issues build and maintain relationships with internal project partners manage planogram version/store combinations and/or store planogram assignments, and provide reporting and data as needed maintain the planogram database with the most updated planogram files retain planogram models and files for historical reference, as needed Invest in and drive adoption of industry best practices across regions/sector, as required Partner with global teams to define strategy for end-to-end execution ownership and accountability. Lead workload forecasting and effectively drive prioritization conversations to support capacity management. Build stronger business context and elevate the team’s capability from execution-focused to end-to-end capability-focused. Ensure delivery of accurate and timely planograms in accordance with agreed service level agreements (SLA) Work across multiple functions to aid in collecting insights for action-oriented cause-of-change analysis Focus on speed of execution and quality of service delivery rather than mere achievement of SLAs Recognize opportunities and take action to improve delivery of work Implement continued improvements and simplifications of processes and optimal use of technology Scale up operations in line with business growth, both within existing scope, as well as new areas of opportunity Create an inclusive and collaborative environment People Leadership Enable direct reports’ capabilities and enforce consistency in execution of key capability areas; planogram QC, development and timely delivery Responsible for hiring, talent assessment, competency development, performance management, productivity improvement, talent retention, career planning and development Provide and receive feedback about the global team and support effective partnership. Qualifications: 10+ yrs. of retail/merchandising experience (including JDA) 2+ yrs. 
of people leadership experience in a Space Planning/planogram environment Bachelor’s in commerce/business administration/marketing; Master’s degree is a plus Advanced-level skill in Microsoft Office, with demonstrated intermediate-to-advanced Excel skills necessary Experience with analyzing and reporting data to identify issues, trends, or exceptions to drive improvement of results and find solutions Advanced knowledge and experience of the space management technology platform JDA Propensity to learn PepsiCo software systems Ability to provide superior customer service Best-in-class time management skills, ability to multitask, set priorities and plan
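
The store clustering that drives planogram versioning, described in this and the following posting, can be sketched as below. The store attributes, their distributions, and the cluster count are all assumptions for illustration.

```python
# Minimal sketch of store clustering for planogram versioning. Hypothetical
# store attributes: weekly footfall, shelf space (ft), share of premium sales.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
stores = rng.normal(loc=[5000, 120, 0.3], scale=[1500, 30, 0.1], size=(200, 3))

# Standardize so footfall doesn't dominate the distance metric.
X = StandardScaler().fit_transform(stores)
clusters = KMeans(n_clusters=5, n_init=10, random_state=42).fit_predict(X)

# Each cluster would then receive its own versioned planogram.
for c in range(5):
    print(f"cluster {c}: {np.sum(clusters == c)} stores")
```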

Posted 6 days ago

Apply

4.0 - 7.0 years

2 - 3 Lacs

Hyderābād

On-site

GlassDoor logo

Overview: As Planogram Analyst, deliver visually appealing versioned planograms based on store clustering, space definitions and defined flow. Work closely with Category Management and Space teams to ensure planograms meet approved parameters. Conduct planogram quality control ensuring all planograms meet assortment requirements, visual appeal, innovation opportunities and shelving metrics. Continuously identify opportunities and implement processes to improve quality and timeliness of output. Responsibilities: Be a single point of contact for a category/region by mastering process and category knowledge. Partner with Category Managers / KAMs to build business context and create effortless partnerships Acquire project management skills to lead multiple projects seamlessly and ensure timely delivery of projects. Knowledge Sharing: Gain in-depth knowledge of PepsiCo business, categories, products, tools and share new learnings with the team on a continual basis. Ensure accurate and timely delivery of projects regarding: Deliver visually appealing versioned planograms based on store clustering, space definitions and defined flow Conduct planogram quality control ensuring all planograms meet assortment requirements, visual appeal, innovation opportunities and shelving metrics Ensure timelines are met by tracking project progress, coordinating activities, and resolving issues Leverage data to allocate the right space to the right product (a minimal illustration follows this posting). Avoid redundancy in reporting and call out best practices to the team Display a high sense of accountability when completing requests with high visibility or tight turnaround times. Scale up growth by identifying areas where continuous improvement is required, both within existing scope, as well as new areas of opportunity. Create an inclusive and collaborative environment Work in a team environment with focus on achieving team goals vs individual goals Actively learn and apply an advanced level of expertise in JDA, intermediate MS Excel, and all other relevant applications. Work alongside peers to inculcate best practices and elevate the team's ability to tackle business questions with value adds. Qualifications: 4-7 years of experience in Space Planning (JDA), Retail or FMCG. Bachelor’s degree. Intermediate-level skill in Microsoft Office, with demonstrated intermediate Excel skills necessary Ability to solve problems. Advanced knowledge and experience of the space management technology platform JDA Ability to work collaboratively and proactively with multi-functional teams/stakeholders. Best-in-class time management and prioritization skills. Excellent written and oral communication skills; proactively communicates using appropriate methods for situation and audience in clear, concise and professional manner Strong data analysis skills with strong attention to detail Basic project management skills
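
As referenced above, a minimal sketch of data-driven space allocation: facings assigned in proportion to each SKU's sales velocity, with a one-facing floor. SKU names, velocities, and the shelf capacity are invented.

```python
# Minimal sketch of "right space for right product": facings proportional to
# each SKU's share of sales velocity. All numbers are illustrative.
sales_velocity = {"cola_2l": 420, "cola_500ml": 310, "diet_2l": 180, "energy": 90}
total_facings = 24  # assumed shelf capacity for this category

total = sum(sales_velocity.values())
facings = {sku: max(1, round(total_facings * v / total))
           for sku, v in sales_velocity.items()}
print(facings)  # {'cola_2l': 10, 'cola_500ml': 7, 'diet_2l': 4, 'energy': 2}
```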

Posted 6 days ago

Apply

0 years

4 - 10 Lacs

Pune

On-site

GlassDoor logo

Infra and DevOps Engineer, AS Job ID: R0391182 Full/Part-Time: Full-time Regular/Temporary: Regular Listed: 2025-06-20 Location: Pune Position Overview Job Title: Infra and DevOps Engineer Location: Pune, India Corporate Title: AS Role Description The Infra & DevOps team within DWS India sits horizontally across project delivery, committed to providing best-in-class shared services across the build, release and QA automation space. Its main functional areas encompass environment build, integration of the QA automation suite, release and deployment automation management, technology management and compliance management. This role will be key to our programme delivery and includes working closely with stakeholders including client Product Owners, the Digital Design Organisation, Business Analysts, Developers and QA to advise and contribute from an Infra and DevOps capability perspective: building and maintaining non-prod and prod environments, setting up end-to-end alerting and monitoring for ease of operation, and overseeing transition of the project to L2 support teams as part of go-live. What we’ll offer you As part of our flexible scheme, here are just some of the benefits that you’ll enjoy Best-in-class leave policy Gender-neutral parental leave 100% reimbursement under childcare assistance benefit (gender neutral) Sponsorship for industry-relevant certifications and education Employee Assistance Program for you and your family members Comprehensive hospitalization insurance for you and your dependents Accident and term life insurance Complimentary health screening for 35 yrs. and above Your key responsibilities Drives automation (incl. automated build, test and deploy) Supports and manages Data / Digital systems architecture (underlying platforms, APIs, UI, datasets…) in line with the architectural vision set by the Digital Design Organisation across environments. Drives integration across systems, working to ensure the service layer integrates with the core technology stack whilst ensuring that services integrate to form a service ecosystem Monitors digital architecture to ensure health and identify required corrective action Serves as a technical authority, working with developers to drive architectural standards on the specific platforms they are developing upon Builds security into the overall architecture, ensuring adherence to security principles set within IT and adherence to any required industry standards Liaises with IaaS and PaaS service providers within the Bank to enhance their offerings. Liaises with other technical areas, conducting technology research, and evaluating software required for maintaining the development environment. Works with the wider QA function within the business to drive Continuous Testing by integrating QA automation suites with available toolsets. Your skills and experience Proven hands-on technical experience in Linux/Unix is a must-have. Proven experience in infrastructure architecture: clustering, high availability, performance and tuning, backup and recovery. Hands-on experience with DevOps build and deploy tools like TeamCity, Git / Bitbucket / Artifactory, and knowledge of automation/configuration management using tools such as Ansible or similar. A working understanding of coding and scripting languages such as Python, Perl, Ruby or JS. In-depth knowledge and experience of Docker technology, OpenShift and Kubernetes containerisation Ability to deploy complex solutions based on IaaS, PaaS and public and private cloud-based infrastructures. 
Basic understanding of networking and firewalls. Knowledge of best practices and IT operations in an agile environment Ability to deliver independently: confidently able to translate requirements into technical solutions with minimal supervision Collaborative by nature: able to work with scrum teams, technical teams, the wider business, and IT&S to provide platform-related knowledge Flexible: finds a way to say yes and to make things happen, only exercising authority as needed to prevent the architecture from breaking Coding and scripting: able to develop in multiple languages in order to mobilise, configure and maintain digital platforms and architecture. Automation and tooling: strong knowledge of the automation landscape, with the ability to rapidly identify and mobilise appropriate tools to support testing, deployment, etc. Security: understands security requirements and can independently drive compliance Education / Certification Any relevant DevOps certification. Bachelor’s degree from an accredited college or university with a concentration in Science, Engineering, or an IT-related discipline (or equivalent). How we’ll support you Training and development to help you excel in your career Coaching and support from experts in your team A culture of continuous learning to aid progression A range of flexible benefits that you can tailor to suit your needs About us and our teams Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
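
To illustrate the end-to-end alerting and monitoring responsibility in this role, here is a minimal, hedged sketch of a health-check probe using only the standard library. The URL, polling interval, and print-based "alert" are placeholders for whatever monitoring stack the team actually runs.

```python
# Minimal health-check probe sketch; endpoint and alert action are assumptions.
import time
import urllib.request

HEALTH_URL = "http://localhost:8080/health"  # hypothetical service endpoint

def check_once(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers URLError, HTTPError, and socket timeouts
        return False

if __name__ == "__main__":
    for _ in range(3):  # a real probe would loop indefinitely
        ok = check_once(HEALTH_URL)
        print("OK" if ok else "ALERT: health check failed")  # stand-in for paging
        time.sleep(5)
```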

Posted 6 days ago

Apply

2.0 years

6 - 10 Lacs

Bengaluru

On-site

GlassDoor logo

Job Description About Us: As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Target in India operates as a fully integrated part of Target’s global team and has more than 4,000 team members supporting the company’s global strategy and operations. Tech Overview: Every time a guest enters a Target store or browses Target.com, they experience the impact of Target’s investments in technology and innovation. We’re the technologists behind one of the most loved retail brands, delivering joy to millions of our guests, team members, and communities. Join our global in-house technology team of more than 4,000 engineers, data scientists, architects, coaches and product managers striving to make Target the most convenient, safe and joyful place to shop. We use agile practices and leverage open-source software to adapt and build best-in-class technology for our team members and guests—and we do so with a focus on diversity and inclusion, experimentation and continuous learning. Pyramid Overview: Target.com and Mobile translates the in-store experience our guests love to the digital environment. Our Mobile Engineers develop native apps like Cartwheel and Target’s flagship app, which are high-impact and high-visibility assets that are game-changers for literally millions of guests. Here, you’ll get to explore emerging retail and mobile technologies, playing a key role in revolutionary product launches with tech giants like Apple and Google. You’ll be a visionary for the future of Target’s app ecosystem. You’ll have the advantage of Target’s unmatched brand recognition and special marketplace foothold—making us the partner of choice for innovative technologies like indoor mapping, iBeacons and Apple Pay. You’ll help Target evolve by using the latest open-source tools and technologies and staying true to strong agile practices. You’ll lend your passion for engineering technologies that fix problems and meet needs guests didn’t even know they had. You’ll work on autonomous teams and incorporate the newest technical practices. You’ll have the chance to perform by writing rock-solid code that stands up to our massive scale. Plus, and perhaps best of all, you’ll have the right balance of self-rule and accountability for how technical products perform. Team Overview: We are dedicated to ensuring a seamless and efficient checkout experience for Guests shopping on our digital channels, including web and mobile apps. Our team plays a crucial role in the overall shopping journey, focusing on the final and most critical steps of the purchase process. We are responsible for managing the seamless payments experience during Checkout, from the moment a Guest adds a payment to their cart to the final purchase confirmation. Our goal is to provide a smooth, secure, and user-friendly checkout process that enhances customer satisfaction and drives conversions. Our team is cross-geo located, with members driving different features and collaborating from both India and the US. This diverse setup allows us to leverage a wide range of expertise and perspectives, fostering innovative solutions and effective problem-solving. As part of the Digital Payments team, you will have the opportunity to work with cutting-edge technologies and innovative solutions to continuously improve the Checkout experience. 
Our collaborative and dynamic environment encourages creative problem-solving and the sharing of ideas to meet the evolving needs of our Guests. Position Overview: Able to implement new features/fixes within the current framework with little or no direction. Able to troubleshoot problems and devise solutions for the root cause. Hands-on development, often taking on the more complicated tasks. Ensures the solution is production-ready, deployable, scalable and resilient. Has advanced skills around technology for their area. Examples may include: computing topics, threading models, performance considerations, caching, database indexing, operating system internals, networking, infrastructure systems and operations. Researches the best design and new technologies for a given problem. Evaluates technologies and documents decision-making. Understands how the solution is deployed; examples may include: VMs, containers, clustering, load balancing, DNS, networking, and scalability. Recommends changes to internal processes and procedures when deficiencies are observed. Articulates the value of a technology. Approaches all engineering work with a security lens and actively looks for security vulnerabilities within code/infrastructure architecture when providing peer reviews. Contributes to open source where applicable. Helps tune and change the observability on their team accordingly. Is aware of the operational data for their team’s domain and uses it as a basis for suggesting stability and performance improvements. About You: Experience: 2-4 years 4-year degree or equivalent experience Excellent communication skills with both business partners and other engineering teams Familiar with Agile principles and possess a team attitude Strong problem-solving and debugging skills Strong sense of ownership and the ability to work with a limited set of requirements Experience engineering applications for the JVM; Java or Kotlin experience is required. Experience in microservices, Spring Boot, and event-driven architecture Experience building CI/CD pipelines Exposure to building high-performance scalable APIs is a plus. Knowledge of NoSQL technologies (Cassandra, Elasticsearch, MongoDB) is a plus Good at writing unit and functional tests and practicing test-driven development Know More About Us Here: Life at Target- https://india.target.com/ Benefits- https://india.target.com/life-at-target/workplace/benefits Culture- https://india.target.com/life-at-target/belonging
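
One of the advanced topics this posting lists is caching. Since the team itself works on the JVM, the bounded-cache idea is sketched here in Python for brevity, with an invented lookup function standing in for a real database or service call.

```python
# Minimal caching sketch: memoize an expensive lookup with a bounded cache so
# it cannot grow without limit under heavy traffic. All values are invented.
from functools import lru_cache
import time

@lru_cache(maxsize=1024)  # bounded: least-recently-used entries are evicted
def price_lookup(sku: str) -> float:
    """Stand-in for a database or downstream service call."""
    time.sleep(0.05)  # simulated I/O latency
    return hash(sku) % 10000 / 100.0

t0 = time.perf_counter()
price_lookup("TCIN-123")  # cache miss: pays the simulated I/O cost
t1 = time.perf_counter()
price_lookup("TCIN-123")  # cache hit: served from memory
t2 = time.perf_counter()
print(f"miss: {t1 - t0:.3f}s, hit: {t2 - t1:.6f}s")
print(price_lookup.cache_info())  # e.g. CacheInfo(hits=1, misses=1, ...)
```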

Posted 6 days ago

Apply

2.0 - 4.0 years

7 - 9 Lacs

Chennai

On-site

GlassDoor logo

Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast. Job Summary Responsible for working cross-functionally to collect data and develop models to determine trends utilizing a variety of data sources. Retrieves, analyzes and summarizes business, operations, employee, customer and/or economic data in order to develop business intelligence, optimize effectiveness, predict business outcomes and support decision-making. Involved with numerous key business decisions by conducting the analyses that inform our business strategy. This may include: impact measurement of new products or features via normalization techniques, optimization of business processes through robust A/B testing, clustering or segmentation of customers to identify opportunities for differentiated treatment, deep-dive analyses to understand drivers of key business trends, identification of customer sentiment drivers through natural language processing (NLP) of verbatim responses to Net Promoter System (NPS) surveys, and development of frameworks to drive upsell strategy for existing customers by balancing business priorities with customer activity. Works with moderate guidance in own area of knowledge. Job Description 1. 2-4 years of professional experience in software or data engineering roles. 2. Hands-on experience with Power BI, Power BI Desktop, Power Apps, and Power Automate. 3. Proficiency with Tableau and SharePoint. 4. Familiarity with Amazon Redshift and SAP integration and data extraction. 5. Strong analytical, troubleshooting, and communication skills. Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law. Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees. We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That’s why we provide an array of options, expert guidance and always-on tools that are personalized to meet the needs of your reality – to help support you physically, financially and emotionally through the big milestones and in your everyday life. Please visit the compensation and benefits summary on our careers site for more details. 
Education: Bachelor's Degree. While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience. Relevant Work Experience: 2-5 Years
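
The "robust A/B testing" work described above often reduces to comparing conversion rates between a control and a variant. Below is a minimal two-proportion z-test sketch with invented counts, using only the standard library.

```python
# Minimal A/B test sketch: two-proportion z-test on synthetic conversion
# counts. All numbers are illustrative, not Comcast data.
from math import sqrt
from statistics import NormalDist

conv_a, n_a = 480, 10_000   # control: conversions / visitors
conv_b, n_b = 540, 10_000   # variant

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under H0
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))        # two-sided test
print(f"lift={p_b - p_a:.4f}, z={z:.2f}, p={p_value:.4f}")
```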

Posted 6 days ago

Apply

0 years

5 - 8 Lacs

Noida

On-site

GlassDoor logo

Senior Gen AI Engineer Job Description Brightly Software is seeking an experienced candidate to join our Product team in the role of Gen AI Engineer to drive best-in-class client-facing AI features by creating and delivering insights that inform client decisions. As a Gen AI Engineer, you will play a critical role in building AI offerings for Brightly. You will partner with our various software Product teams to drive client-facing insights that inform smarter decisions faster. This will include the following: Lead the evaluation and selection of foundation models and vector databases based on performance and business needs Design and implement applications powered by generative AI (e.g., LLMs, diffusion models), delivering contextual and actionable insights for clients. Establish best practices and documentation for prompt engineering, model fine-tuning, and evaluation to support cross-domain generative AI use cases. Build, test, and deploy generative AI applications using standard tools and frameworks for model inference, embeddings, vector stores, and orchestration pipelines. Key Responsibilities: Guide the design of multi-step RAG, agentic, or tool-augmented workflows Implement governance, safety layers, and responsible AI practices (e.g., guardrails, moderation, auditability) Mentor junior engineers and review GenAI design and implementation plans Drive experimentation, benchmarking, and continuous improvement of GenAI capabilities Collaborate with leadership to align GenAI initiatives with product and business strategy Build and optimize Retrieval-Augmented Generation (RAG) pipelines using vector stores like Pinecone, FAISS, or AWS OpenSearch Perform exploratory data analysis (EDA), data cleaning, and feature engineering to prepare data for model building. Design, develop, train, and evaluate machine learning models (e.g., classification, regression, clustering, natural language processing) with strong experience in predictive and statistical modelling. Implement and deploy machine learning models into production using AWS services, with a strong focus on Amazon SageMaker (e.g., SageMaker Studio, training jobs, inference endpoints, SageMaker Pipelines). Understand and develop state management workflows using LangGraph. Develop GenAI applications using Hugging Face Transformers, LangChain, and Llama-related frameworks Engineer and evaluate prompts, including prompt chaining and output quality assessment Apply NLP and transformer model expertise to solve language tasks Deploy GenAI models to cloud platforms (preferably AWS) using Docker and Kubernetes Monitor and optimize model and pipeline performance for scalability and efficiency Communicate technical concepts clearly to cross-functional and non-technical stakeholders
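
The RAG retrieval step named in this posting can be sketched as follows. To stay self-contained, the sketch embeds documents with TF-IDF rather than a learned embedding model (which a real pipeline would obtain via, e.g., SageMaker or Hugging Face), and the facility-management documents are invented.

```python
# Minimal RAG retrieval sketch with FAISS (pip install faiss-cpu).
# TF-IDF vectors stand in for learned embeddings; documents are invented.
import faiss
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "Work order 4512: HVAC filter replacement overdue at building A",
    "Asset 77: roof membrane inspection scheduled for Q3",
    "Invoice 9: preventive maintenance contract renewal pending approval",
]

vec = TfidfVectorizer().fit(docs)
doc_mat = vec.transform(docs).toarray().astype("float32")
faiss.normalize_L2(doc_mat)                  # unit vectors: inner product == cosine
index = faiss.IndexFlatIP(doc_mat.shape[1])
index.add(doc_mat)

query = vec.transform(["when is the roof inspection?"]).toarray().astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, 2)         # top-2 passages to stuff into the prompt
for i, s in zip(ids[0], scores[0]):
    print(f"{s:.2f}  {docs[i]}")
```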

Posted 6 days ago

Apply

2.0 years

5 - 8 Lacs

Noida

On-site

GlassDoor logo

Gen AI Engineer Job Description Brightly Software is seeking a high performer to join our Product team in the role of Gen AI Engineer to drive best-in-class client-facing AI features by creating and delivering insights that inform client decisions. As a Gen AI Engineer, you will play a critical role in building AI offerings for Brightly. You will partner with our various software Product teams to drive client-facing insights that inform smarter decisions faster. This will include the following: Design and implement applications powered by generative AI (e.g., LLMs, diffusion models), delivering contextual and actionable insights for clients. Establish best practices and documentation for prompt engineering, model fine-tuning, and evaluation to support cross-domain generative AI use cases. Build, test, and deploy generative AI applications using standard tools and frameworks for model inference, embeddings, vector stores, and orchestration pipelines. Key Responsibilities: Build and optimize Retrieval-Augmented Generation (RAG) pipelines using vector stores like Pinecone, FAISS, or AWS OpenSearch Develop GenAI applications using Hugging Face Transformers, LangChain, and Llama-related frameworks Perform exploratory data analysis (EDA), data cleaning, and feature engineering to prepare data for model building. Design, develop, train, and evaluate machine learning models (e.g., classification, regression, clustering, natural language processing) with strong experience in predictive and statistical modelling. Implement and deploy machine learning models into production using AWS services, with a strong focus on Amazon SageMaker (e.g., SageMaker Studio, training jobs, inference endpoints, SageMaker Pipelines). Understand and develop state management workflows using LangGraph. Engineer and evaluate prompts, including prompt chaining and output quality assessment Apply NLP and transformer model expertise to solve language tasks Deploy GenAI models to cloud platforms (preferably AWS) using Docker and Kubernetes Monitor and optimize model and pipeline performance for scalability and efficiency Communicate technical concepts clearly to cross-functional and non-technical stakeholders Thrive in a fast-paced, lean environment and contribute to scalable GenAI system design Qualifications Bachelor’s degree is required 2-4 years of total experience with a strong focus on AI and ML, and 1+ years in core GenAI engineering Demonstrated expertise in working with large language models (LLMs) and generative AI systems, including both text-based and multimodal models. Strong programming skills in Python, including proficiency with data science libraries such as NumPy, Pandas, Scikit-learn, TensorFlow, and/or PyTorch. Familiarity with MLOps principles and tools for automating and streamlining the ML lifecycle. Experience working with agentic AI. Capable of building Retrieval-Augmented Generation (RAG) pipelines leveraging vector stores like Pinecone, Chroma, or FAISS. Strong programming skills in Python, with experience using leading AI/ML libraries such as Hugging Face Transformers and LangChain. Practical experience in working with vector databases and embedding methodologies for efficient information retrieval. Experience in developing and exposing API endpoints for accessing AI model capabilities using frameworks like FastAPI. Knowledgeable in prompt engineering techniques, including prompt chaining and performance evaluation strategies. 
Solid grasp of natural language processing (NLP) fundamentals and transformer-based model architectures. Experience in deploying machine learning models to cloud platforms (preferably AWS) and containerized environments using Docker or Kubernetes. Skilled in fine-tuning and assessing open-source models using methods such as LoRA, PEFT, and supervised training. Strong communication skills with the ability to convey complex technical concepts to non-technical stakeholders. Able to operate successfully in a lean, fast-paced organization, and to create a vision and an organization that can scale quickly.
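
Prompt chaining with an output-quality gate, which both Brightly postings call for, can be sketched as below. The llm() function here is a placeholder, not a real API; a production version would call a model endpoint via LangChain, SageMaker, or similar.

```python
# Minimal prompt-chaining sketch: the output of one prompt feeds the next,
# with a cheap quality gate before returning. llm() is hypothetical.
def llm(prompt: str) -> str:
    """Placeholder for a model call; echoes a canned response for the demo."""
    return f"[model answer to: {prompt[:40]}...]"

def summarize_then_extract(document: str) -> dict:
    # Step 1: condense the raw document.
    summary = llm(f"Summarize in two sentences:\n{document}")
    # Step 2: chain the first output into a structured-extraction prompt.
    actions = llm(f"List action items as bullets from this summary:\n{summary}")
    # Step 3: quality gate before returning (a real evaluator might score
    # faithfulness with a second model or heuristic checks).
    if not actions.strip():
        raise ValueError("empty model output; retry or escalate")
    return {"summary": summary, "actions": actions}

print(summarize_then_extract("Boiler B2 failed inspection; vendor quote pending."))
```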

Posted 6 days ago

Apply

0.0 years

3 - 7 Lacs

Indore

On-site

GlassDoor logo

Role Overview: As a Machine Learning Engineer, you will be responsible for designing, developing, and deploying machine learning models for applications such as predictive analytics, natural language processing, computer vision, and recommendation systems. You will collaborate closely with data scientists, software engineers, and product teams to build scalable, high-performance AI solutions. You should have a strong foundation in machine learning algorithms, data processing, and software development, along with the ability to take ownership of the full machine learning lifecycle, from data collection and model training to deployment and monitoring (a minimal end-to-end sketch follows this posting). Key Responsibilities: Model Development: Design and implement machine learning models for various applications, such as predictive analytics, classification, clustering, and anomaly detection. Data Preparation & Processing: Work with large datasets, including preprocessing, feature engineering, and data augmentation, to ensure high-quality input for model training. Model Training & Tuning: Train, optimize, and fine-tune models using modern machine learning frameworks and algorithms. Monitor performance metrics and adjust parameters for model improvement. Model Deployment: Deploy machine learning models into production environments using tools like Docker, Kubernetes, or cloud platforms such as AWS, Azure, or Google Cloud. Performance Monitoring & Optimization: Continuously monitor deployed models for performance, accuracy, and scalability. Implement model retraining pipelines and maintain model health. Collaboration: Work cross-functionally with data scientists, software engineers, product managers, and other stakeholders to integrate machine learning solutions into business workflows. Research & Innovation: Stay up to date with the latest trends and advancements in machine learning and AI to drive innovation within the company. Qualifications: Education: Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field. A PhD is a plus but not required. Experience: 0-1 years of experience in machine learning, data science, or a related technical role. Proven experience building and deploying machine learning models in a production environment. Experience with cloud platforms (e.g., AWS, GCP, Azure) for model deployment and scalability. Technical Skills: Proficiency in Python (preferred) or other programming languages (e.g., R, Java, C++). Strong knowledge of machine learning frameworks and libraries such as TensorFlow, PyTorch, Scikit-learn, Keras, or similar. Familiarity with deep learning techniques (e.g., CNNs, RNNs, transformers) is highly desirable. Experience with data processing tools such as Pandas, NumPy, and SQL. Knowledge of version control (e.g., Git), containerization (e.g., Docker), and CI/CD pipelines. Experience with big data tools and frameworks (e.g., Hadoop, Spark) is a plus. Soft Skills: Strong problem-solving and analytical skills. Excellent communication and collaboration skills. Ability to work independently and manage multiple priorities in a fast-paced environment. Detail-oriented and organized, with a passion for learning and innovation. Job Type: Full-time Pay: ₹5,000.00 - ₹30,000.00 per month Schedule: Day shift Work Location: In person
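
As referenced above, a minimal end-to-end sketch of the lifecycle this posting describes: train a classifier, evaluate it on a holdout set, and serialize the artifact for a serving container. The dataset and model are illustrative stand-ins, not a prescribed stack.

```python
# Minimal lifecycle sketch: train, evaluate on a holdout, persist the model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import joblib

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"holdout accuracy: {accuracy_score(y_te, model.predict(X_te)):.3f}")

# Persist for a serving container (Docker/Kubernetes or a cloud endpoint);
# monitoring and retraining pipelines would pick up from this artifact.
joblib.dump(model, "model.joblib")
```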

Posted 6 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies